<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.jcihosting.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=75.80.128.147</id>
	<title>JCWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.jcihosting.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=75.80.128.147"/>
	<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=Special:Contributions/75.80.128.147"/>
	<updated>2026-04-21T02:02:03Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.3</generator>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=Private_IP_Mapping&amp;diff=2246</id>
		<title>Private IP Mapping</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=Private_IP_Mapping&amp;diff=2246"/>
		<updated>2020-12-09T19:24:42Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Public IPs in Texas */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= IP Mapping =&lt;br /&gt;
&lt;br /&gt;
== Public IP Mapping ==&lt;br /&gt;
=== Castle (Redit) ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
69.55.224.0/24&lt;br /&gt;
69.55.225.0/24&lt;br /&gt;
69.55.226.0/24&lt;br /&gt;
69.55.227.0/24&lt;br /&gt;
69.55.228.0/24&lt;br /&gt;
69.55.230.0/24&lt;br /&gt;
69.55.232.0/24&lt;br /&gt;
69.55.233.0/24&lt;br /&gt;
69.55.234.0/24&lt;br /&gt;
69.55.235.0/24&lt;br /&gt;
69.55.236.0/24&lt;br /&gt;
69.55.237.0/24&lt;br /&gt;
69.55.238.0/24&lt;br /&gt;
69.55.239.0/24&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== I2b ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
69.55.229.0/24&lt;br /&gt;
69.55.231.0/24&lt;br /&gt;
&lt;br /&gt;
65.50.255.0/24  I2b Gig Link&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Las Vegas ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
70.98.204.125-140&lt;br /&gt;
netmask:    255.255.255.0&lt;br /&gt;
gateway:    70.98.204.1&lt;br /&gt;
dns:        70.99.204.249&lt;br /&gt;
&lt;br /&gt;
private:    172.16.204.127&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Private IP Mapping ==&lt;br /&gt;
&lt;br /&gt;
= Castle = &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
10.1.4.1 = nat &lt;br /&gt;
10.1.4.3 = backup2&lt;br /&gt;
10.1.4.4 = sun/console&lt;br /&gt;
10.1.4.5 = mail&lt;br /&gt;
10.1.4.8 = backup1&lt;br /&gt;
10.1.4.10 = digi1&lt;br /&gt;
10.1.4.11 = digi2? &lt;br /&gt;
10.1.4.59 = virt9&lt;br /&gt;
10.1.4.60-10.1.4.69 = virt10 - virt19&lt;br /&gt;
10.1.4.90-10.1.4.91 = tempvirt1 - tempvirt2&lt;br /&gt;
10.1.4.101-10.1.4.119 = jail1 - jail19&lt;br /&gt;
10.1.4.151 = quar1&lt;br /&gt;
10.1.4.160 = switch-p1a&lt;br /&gt;
10.1.4.161 = switch-p1b&lt;br /&gt;
10.1.4.162 = &lt;br /&gt;
10.1.4.163 = switch-p3&lt;br /&gt;
10.1.4.164 = switch-p4&lt;br /&gt;
10.1.4.165 = &lt;br /&gt;
10.1.4.166 = switch-p6&lt;br /&gt;
10.1.4.167 = switch-p7&lt;br /&gt;
10.1.4.169 = switch-p9&lt;br /&gt;
10.1.4.160 = switch-p10&lt;br /&gt;
10.1.4.161 = switch-p11&lt;br /&gt;
10.1.4.162 = switch-p12&lt;br /&gt;
10.1.4.163 = switch-p13&lt;br /&gt;
10.1.4.164 = switch-p14&lt;br /&gt;
10.1.4.165 = switch-p15&lt;br /&gt;
10.1.4.166 = switch-p16&lt;br /&gt;
10.1.4.167 = switch-p17&lt;br /&gt;
10.1.4.168 = &lt;br /&gt;
10.1.4.169 = &lt;br /&gt;
10.1.4.170 = 3550&lt;br /&gt;
10.1.4.171 = 3750&lt;br /&gt;
10.1.4.172 = ASA&lt;br /&gt;
10.1.4.201 = mx1&lt;br /&gt;
10.1.4.202 = mx2&lt;br /&gt;
10.1.4.203 = bwdb&lt;br /&gt;
10.1.4.211 = DRAC virt11&lt;br /&gt;
10.1.4.212 = DRAC virt12&lt;br /&gt;
10.1.4.213 = DRAC virt13&lt;br /&gt;
10.1.4.217 = DRAC virt17&lt;br /&gt;
10.1.4.209 = DRAC virt19&lt;br /&gt;
10.1.4.223 = gateway&lt;br /&gt;
10.1.4.232 = DRAC jail2&lt;br /&gt;
10.1.4.234 = DRAC jail4&lt;br /&gt;
10.1.4.235 = RMM  jail5&lt;br /&gt;
10.1.4.237 = DRAC jail7&lt;br /&gt;
10.1.4.238 = DRAC jail8&lt;br /&gt;
10.1.4.239 = DRAC jail9&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= i2b =&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
10.1.2.1 = nat2&lt;br /&gt;
10.1.2.2 = firewall2&lt;br /&gt;
10.1.2.3 = backup3&lt;br /&gt;
10.1.2.4 = bwdb2&lt;br /&gt;
10.1.2.5 = firewall3&lt;br /&gt;
10.1.2.6 = backup4&lt;br /&gt;
10.1.2.10 = digi3&lt;br /&gt;
10.1.2.50 = switch-p20&lt;br /&gt;
10.1.2.51 = switch-p21&lt;br /&gt;
10.1.2.52 = switch-p22&lt;br /&gt;
10.1.2.53 = switch-p23&lt;br /&gt;
10.1.2.54 = switch-p24&lt;br /&gt;
10.1.2.55 = switch-p25&lt;br /&gt;
10.1.2.56 = switch-p26&lt;br /&gt;
10.1.2.57 = switch-p27&lt;br /&gt;
10.1.2.58 = switch-p28&lt;br /&gt;
10.1.2.60 = switch-p30&lt;br /&gt;
10.1.2.71 = ats-1&lt;br /&gt;
10.1.2.72 = ats-2&lt;br /&gt;
10.1.2.73 = ats-3&lt;br /&gt;
10.1.2.74 = ats-4&lt;br /&gt;
10.1.2.75 = ats-5&lt;br /&gt;
10.1.2.76 = ats-6&lt;br /&gt;
10.1.2.77 = ats-7&lt;br /&gt;
10.1.2.78 = ats-8&lt;br /&gt;
10.1.2.79 = ats-9&lt;br /&gt;
10.1.2.103 = jail3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Office =&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
10.1.6.1 = Motorola Gateway router&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Ground Water in Texas =&lt;br /&gt;
== Public IPs in Texas ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
72.165.211.73  Gateway&lt;br /&gt;
72.165.211.74  New Cisco 3750 Switch&lt;br /&gt;
72.165.211.75  Power Switch&lt;br /&gt;
72.165.211.76  NAT Box&lt;br /&gt;
72.165.211.77  Cisco 3750 Switch&lt;br /&gt;
72.165.211.78  New D-link Nat (old Digi addr) &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Private IPs in Texas ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
192.168.1.1   NAT Box&lt;br /&gt;
192.168.1.2   Cisco 3750&lt;br /&gt;
192.168.1.176 el1 Master Radio     nm 255.255.255.248  gw 192.168.1.182&lt;br /&gt;
192.168.1.177 el1 Slave Radio&lt;br /&gt;
192.168.1.178 el1 Data Logger&lt;br /&gt;
192.168.1.184/29 el2 Master Radio  nm 255.255.255.248  gw 192.168.1.190&lt;br /&gt;
192.168.1.185/29  el2 Slave Radio&lt;br /&gt;
192.168.1.186/29 el2 Data Logger&lt;br /&gt;
192.168.1.192 el3 Master Radio     nm 255.255.255.248  gw 192.168.1.198&lt;br /&gt;
192.168.1.193 el3 Slave Radio&lt;br /&gt;
192.168.1.194 el3 Data Logger&lt;br /&gt;
192.168.1.200/29 el4 Master Radio  nm 255.255.255.248  gw 192.168.1.206&lt;br /&gt;
192.168.1.201/29 el4 Slave Radio&lt;br /&gt;
192.168.1.202/29 el4 Data Logger&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=Private_IP_Mapping&amp;diff=2245</id>
		<title>Private IP Mapping</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=Private_IP_Mapping&amp;diff=2245"/>
		<updated>2020-12-09T19:19:46Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Public IPs in Texas */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= IP Mapping =&lt;br /&gt;
&lt;br /&gt;
== Public IP Mapping ==&lt;br /&gt;
=== Castle (Redit) ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
69.55.224.0/24&lt;br /&gt;
69.55.225.0/24&lt;br /&gt;
69.55.226.0/24&lt;br /&gt;
69.55.227.0/24&lt;br /&gt;
69.55.228.0/24&lt;br /&gt;
69.55.230.0/24&lt;br /&gt;
69.55.232.0/24&lt;br /&gt;
69.55.233.0/24&lt;br /&gt;
69.55.234.0/24&lt;br /&gt;
69.55.235.0/24&lt;br /&gt;
69.55.236.0/24&lt;br /&gt;
69.55.237.0/24&lt;br /&gt;
69.55.238.0/24&lt;br /&gt;
69.55.239.0/24&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== I2b ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
69.55.229.0/24&lt;br /&gt;
69.55.231.0/24&lt;br /&gt;
&lt;br /&gt;
65.50.255.0/24  I2b Gig Link&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Las Vegas ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
70.98.204.125-140&lt;br /&gt;
netmask:    255.255.255.0&lt;br /&gt;
gateway:    70.98.204.1&lt;br /&gt;
dns:        70.99.204.249&lt;br /&gt;
&lt;br /&gt;
private:    172.16.204.127&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Private IP Mapping ==&lt;br /&gt;
&lt;br /&gt;
= Castle = &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
10.1.4.1 = nat &lt;br /&gt;
10.1.4.3 = backup2&lt;br /&gt;
10.1.4.4 = sun/console&lt;br /&gt;
10.1.4.5 = mail&lt;br /&gt;
10.1.4.8 = backup1&lt;br /&gt;
10.1.4.10 = digi1&lt;br /&gt;
10.1.4.11 = digi2? &lt;br /&gt;
10.1.4.59 = virt9&lt;br /&gt;
10.1.4.60-10.1.4.69 = virt10 - virt19&lt;br /&gt;
10.1.4.90-10.1.4.91 = tempvirt1 - tempvirt2&lt;br /&gt;
10.1.4.101-10.1.4.119 = jail1 - jail19&lt;br /&gt;
10.1.4.151 = quar1&lt;br /&gt;
10.1.4.160 = switch-p1a&lt;br /&gt;
10.1.4.161 = switch-p1b&lt;br /&gt;
10.1.4.162 = &lt;br /&gt;
10.1.4.163 = switch-p3&lt;br /&gt;
10.1.4.164 = switch-p4&lt;br /&gt;
10.1.4.165 = &lt;br /&gt;
10.1.4.166 = switch-p6&lt;br /&gt;
10.1.4.167 = switch-p7&lt;br /&gt;
10.1.4.169 = switch-p9&lt;br /&gt;
10.1.4.160 = switch-p10&lt;br /&gt;
10.1.4.161 = switch-p11&lt;br /&gt;
10.1.4.162 = switch-p12&lt;br /&gt;
10.1.4.163 = switch-p13&lt;br /&gt;
10.1.4.164 = switch-p14&lt;br /&gt;
10.1.4.165 = switch-p15&lt;br /&gt;
10.1.4.166 = switch-p16&lt;br /&gt;
10.1.4.167 = switch-p17&lt;br /&gt;
10.1.4.168 = &lt;br /&gt;
10.1.4.169 = &lt;br /&gt;
10.1.4.170 = 3550&lt;br /&gt;
10.1.4.171 = 3750&lt;br /&gt;
10.1.4.172 = ASA&lt;br /&gt;
10.1.4.201 = mx1&lt;br /&gt;
10.1.4.202 = mx2&lt;br /&gt;
10.1.4.203 = bwdb&lt;br /&gt;
10.1.4.211 = DRAC virt11&lt;br /&gt;
10.1.4.212 = DRAC virt12&lt;br /&gt;
10.1.4.213 = DRAC virt13&lt;br /&gt;
10.1.4.217 = DRAC virt17&lt;br /&gt;
10.1.4.209 = DRAC virt19&lt;br /&gt;
10.1.4.223 = gateway&lt;br /&gt;
10.1.4.232 = DRAC jail2&lt;br /&gt;
10.1.4.234 = DRAC jail4&lt;br /&gt;
10.1.4.235 = RMM  jail5&lt;br /&gt;
10.1.4.237 = DRAC jail7&lt;br /&gt;
10.1.4.238 = DRAC jail8&lt;br /&gt;
10.1.4.239 = DRAC jail9&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= i2b =&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
10.1.2.1 = nat2&lt;br /&gt;
10.1.2.2 = firewall2&lt;br /&gt;
10.1.2.3 = backup3&lt;br /&gt;
10.1.2.4 = bwdb2&lt;br /&gt;
10.1.2.5 = firewall3&lt;br /&gt;
10.1.2.6 = backup4&lt;br /&gt;
10.1.2.10 = digi3&lt;br /&gt;
10.1.2.50 = switch-p20&lt;br /&gt;
10.1.2.51 = switch-p21&lt;br /&gt;
10.1.2.52 = switch-p22&lt;br /&gt;
10.1.2.53 = switch-p23&lt;br /&gt;
10.1.2.54 = switch-p24&lt;br /&gt;
10.1.2.55 = switch-p25&lt;br /&gt;
10.1.2.56 = switch-p26&lt;br /&gt;
10.1.2.57 = switch-p27&lt;br /&gt;
10.1.2.58 = switch-p28&lt;br /&gt;
10.1.2.60 = switch-p30&lt;br /&gt;
10.1.2.71 = ats-1&lt;br /&gt;
10.1.2.72 = ats-2&lt;br /&gt;
10.1.2.73 = ats-3&lt;br /&gt;
10.1.2.74 = ats-4&lt;br /&gt;
10.1.2.75 = ats-5&lt;br /&gt;
10.1.2.76 = ats-6&lt;br /&gt;
10.1.2.77 = ats-7&lt;br /&gt;
10.1.2.78 = ats-8&lt;br /&gt;
10.1.2.79 = ats-9&lt;br /&gt;
10.1.2.103 = jail3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Office =&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
10.1.6.1 = Motorola Gateway router&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Ground Water in Texas =&lt;br /&gt;
== Public IPs in Texas ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
72.165.211.73  Gateway&lt;br /&gt;
72.165.211.74  New D-link Nat (old Digi addr)&lt;br /&gt;
72.165.211.75  Power Switch&lt;br /&gt;
72.165.211.76  NAT Box&lt;br /&gt;
72.165.211.77  Cisco 3750 Switch&lt;br /&gt;
72.165.211.78  New Cisco 3750 Switch&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Private IPs in Texas ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
192.168.1.1   NAT Box&lt;br /&gt;
192.168.1.2   Cisco 3750&lt;br /&gt;
192.168.1.176 el1 Master Radio     nm 255.255.255.248  gw 192.168.1.182&lt;br /&gt;
192.168.1.177 el1 Slave Radio&lt;br /&gt;
192.168.1.178 el1 Data Logger&lt;br /&gt;
192.168.1.184/29 el2 Master Radio  nm 255.255.255.248  gw 192.168.1.190&lt;br /&gt;
192.168.1.185/29  el2 Slave Radio&lt;br /&gt;
192.168.1.186/29 el2 Data Logger&lt;br /&gt;
192.168.1.192 el3 Master Radio     nm 255.255.255.248  gw 192.168.1.198&lt;br /&gt;
192.168.1.193 el3 Slave Radio&lt;br /&gt;
192.168.1.194 el3 Data Logger&lt;br /&gt;
192.168.1.200/29 el4 Master Radio  nm 255.255.255.248  gw 192.168.1.206&lt;br /&gt;
192.168.1.201/29 el4 Slave Radio&lt;br /&gt;
192.168.1.202/29 el4 Data Logger&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=NetHere&amp;diff=2243</id>
		<title>NetHere</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=NetHere&amp;diff=2243"/>
		<updated>2020-09-01T23:49:42Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* nsrbl Disk Quota Warning */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= NetHere =&lt;br /&gt;
&lt;br /&gt;
URL:  http://toolbox.nethere.com&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Access to servers is restricted to admin-1.nethere.net.  Access to admin-1.nethere.net is restricted to our office and nat.johncompanies.com.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
JCI Hosting acquired the web-hosting customers of NetHere. (Previously, NetHere had acquired customers from Simply Internet and Z-net.)&lt;br /&gt;
Customer billing/info is being imported from NetHere into the JCI database&lt;br /&gt;
URL:  https://secure.johncompanies.com/mgmt/index.html&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Nethere has several tools for management.  These are restricted to allow&lt;br /&gt;
access only from the office or a few admins&#039; homes.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  cacti [http://cacti.nethere.net]&lt;br /&gt;
  nagios[http://nagios.nethere.net]&lt;br /&gt;
  toolbox [http://toolbox.nethere.net]&lt;br /&gt;
  helpdesk [http://helpdesk.nethere.net]&lt;br /&gt;
  support FAQ [http://www.nethere.net/website/support]&lt;br /&gt;
  knowledge base [http://support.nethere.net/kb]&lt;br /&gt;
  phpmyadmin [https://phpmyadmin.nethere.net]&lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
== Virtual Hosts ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
66.63.129.2 - New HP DL360&lt;br /&gt;
fpweb-2.nethere.net&lt;br /&gt;
unixweb-6.nethere.net&lt;br /&gt;
mta-db.mail.nethere.net&lt;br /&gt;
scan-1.mail.nethere.net&lt;br /&gt;
&lt;br /&gt;
66.63.129.4- vmware9.eng&lt;br /&gt;
mysqldb-1.webhost.nethere.net&lt;br /&gt;
&lt;br /&gt;
66.63.129.91&lt;br /&gt;
lists-1.nethere.net&lt;br /&gt;
sb-2.nethere.net&lt;br /&gt;
mx-1.nethere.net- spooling server&lt;br /&gt;
mailx-1.nethere.net- web server outbound mail server&lt;br /&gt;
&lt;br /&gt;
66.63.129.101 (c.host.nethere.net)&lt;br /&gt;
ahi.nethere.net- ldap&lt;br /&gt;
mailbox-4&lt;br /&gt;
mta-2.mail.nethere.net&lt;br /&gt;
relay-2.mail.nethere.net&lt;br /&gt;
unixweb-8.nethere.net&lt;br /&gt;
&lt;br /&gt;
66.63.129.102  (vm2.eng.nethere.net)&lt;br /&gt;
koi.nethere.net- off- realserver&lt;br /&gt;
prov-1.nethere.net- cp.nethere.net, cp.znet.net, cp.simplyweb.net&lt;br /&gt;
scribe.nethere.net- prov.nethere.net, scribe.nethere.net&lt;br /&gt;
tetra.nethere.net- nethere.com, znet.com, simplyweb.com&lt;br /&gt;
web-2.inboxhq.net- helpdesk&lt;br /&gt;
webmail-1.nethere.net&lt;br /&gt;
&lt;br /&gt;
66.63.129.103&lt;br /&gt;
andromeda.nethere.net (ns2.nethere.net)&lt;br /&gt;
nscache-2.nethere.net&lt;br /&gt;
nsrbl-2.nethere.net&lt;br /&gt;
mailbox-2.nethere.net&lt;br /&gt;
scan-2.mail.nethere.net&lt;br /&gt;
&lt;br /&gt;
66.63.129.104&lt;br /&gt;
eel.nethere.net- ldap&lt;br /&gt;
mta-1.mail.nethere.net&lt;br /&gt;
nscache-1.nethere.net&lt;br /&gt;
nsrbl-1.nethere.net&lt;br /&gt;
phoenix.nethere.net (ns1.nethere.net)&lt;br /&gt;
relay-1.mail.nethere.net&lt;br /&gt;
unixweb-2.nethere.net&lt;br /&gt;
&lt;br /&gt;
66.63.129.105&lt;br /&gt;
home-1.nethere.net&lt;br /&gt;
ntweb-11.nethere.net&lt;br /&gt;
ntweb-6.nethere.net&lt;br /&gt;
cart32.nethere.net&lt;br /&gt;
(web-3.schedulecafe.com)&lt;br /&gt;
&lt;br /&gt;
66.63.129.106                  6499MB/16381    18.32GB/268.25 free&lt;br /&gt;
admin-1.nethere.net&lt;br /&gt;
admin-2.nethere.net&lt;br /&gt;
pike.nethere.net- cacti, nagios- off&lt;br /&gt;
shark.nethere.net- ldap&lt;br /&gt;
ntdb-1.nethere.net&lt;br /&gt;
winrestore64&lt;br /&gt;
&lt;br /&gt;
207.167.93.106- vmware6.eng    4156MB/8185MB    45.94GB/267GB free&lt;br /&gt;
ntweb-2.nethere.net&lt;br /&gt;
unixweb-12.nethere.net (no web sites on server)&lt;br /&gt;
mailbox-1.nethere.net&lt;br /&gt;
&lt;br /&gt;
207.167.93.108- vmware8.eng&lt;br /&gt;
mailx-2.nethere.net&lt;br /&gt;
mx-2.nethere.net&lt;br /&gt;
&lt;br /&gt;
207.167.93.110- vmware10.eng&lt;br /&gt;
unixweb-11.nethere.net&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Hardware Hosts ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Cab 6-08&lt;br /&gt;
&lt;br /&gt;
siron-3 (off)&lt;br /&gt;
siron-2&lt;br /&gt;
siron-1&lt;br /&gt;
106&lt;br /&gt;
104&lt;br /&gt;
sndg-br-1&lt;br /&gt;
sndg-br-2&lt;br /&gt;
&lt;br /&gt;
mailbox-2.nethere.net  (off - virtualized)&lt;br /&gt;
mailbox-3.nethere.net  &lt;br /&gt;
mailbox-1.nethere.net  (off - virtualized)&lt;br /&gt;
&lt;br /&gt;
scan-1 (OFF)&lt;br /&gt;
&lt;br /&gt;
scan-2  &lt;br /&gt;
&lt;br /&gt;
129.2&lt;br /&gt;
102&lt;br /&gt;
101&lt;br /&gt;
&lt;br /&gt;
105&lt;br /&gt;
103&lt;br /&gt;
&lt;br /&gt;
Backup-2&lt;br /&gt;
Backup-1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Cab 5-02&lt;br /&gt;
&lt;br /&gt;
ntweb-3&lt;br /&gt;
unixweb-7&lt;br /&gt;
&lt;br /&gt;
unixweb-3&lt;br /&gt;
unixweb-2 (now virtualized)&lt;br /&gt;
unixweb-10&lt;br /&gt;
&lt;br /&gt;
ntdb-2 or 3?&lt;br /&gt;
&lt;br /&gt;
vmware6.eng&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Admin Access ==&lt;br /&gt;
&lt;br /&gt;
Access to the Admin Systems is protected by firewalls and&lt;br /&gt;
application filters based on IP addresses.&lt;br /&gt;
&lt;br /&gt;
=== Main Firewall ===&lt;br /&gt;
The main firewall is a Cisco 3750 on sndg-cr-1.&lt;br /&gt;
&lt;br /&gt;
==== IP Access List Editing ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1. enable&lt;br /&gt;
&lt;br /&gt;
2. show ip access-lists access-list-name &lt;br /&gt;
&lt;br /&gt;
   show ip access-list BLOCKED&lt;br /&gt;
&lt;br /&gt;
3. configure terminal&lt;br /&gt;
&lt;br /&gt;
4. ip access-list resequence access-list-name starting-sequence-number increment&lt;br /&gt;
&lt;br /&gt;
5. ip access-list {standard | extended} access-list-name&lt;br /&gt;
    &lt;br /&gt;
   ip access-list extended BLOCKED &lt;br /&gt;
&lt;br /&gt;
6. sequence-number permit source source-wildcard&lt;br /&gt;
   sequence-number deny protocol source source-wildcard destination destination-wildcard [precedence precedence] [tos tos] [log] [time-range time-range-name] [fragments]&lt;br /&gt;
   no sequence-number &lt;br /&gt;
 &lt;br /&gt;
7. end&lt;br /&gt;
&lt;br /&gt;
8. show ip access-lists access-list-name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
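&lt;br /&gt;
A worked example of steps 3 through 8 above (the list name BLOCKED is from this page; the address and sequence numbers are placeholders, not real entries):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
configure terminal&lt;br /&gt;
! renumber BLOCKED starting at 10, incrementing by 10&lt;br /&gt;
ip access-list resequence BLOCKED 10 10&lt;br /&gt;
ip access-list extended BLOCKED&lt;br /&gt;
 ! insert a deny between entries 10 and 20&lt;br /&gt;
 15 deny ip host 198.51.100.7 any&lt;br /&gt;
 ! remove the old entry at sequence 20&lt;br /&gt;
 no 20&lt;br /&gt;
end&lt;br /&gt;
show ip access-lists BLOCKED&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;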
&lt;br /&gt;
=== Host Firewall ===&lt;br /&gt;
On the servers, there is often a firewall such as ipfw, ipf, or pf.&lt;br /&gt;
&lt;br /&gt;
==== Toolbox ====&lt;br /&gt;
&lt;br /&gt;
From root@admin-1 connect to scribe &lt;br /&gt;
  ssh scribe&lt;br /&gt;
&lt;br /&gt;
Stop the scrolling of messages to the screen&lt;br /&gt;
  /etc/rc.d/syslogd stop&lt;br /&gt;
&lt;br /&gt;
Add the new IP address to the pf config.&lt;br /&gt;
  vi /etc/pf.conf&lt;br /&gt;
&lt;br /&gt;
Reload the firewall rules&lt;br /&gt;
  pfctl -F all -f /etc/pf.conf&lt;br /&gt;
&lt;br /&gt;
Add the new IP address to the apache config.&lt;br /&gt;
  cd /nethere/conf/apache/conf/vhosts&lt;br /&gt;
  co -l prov.nethere.net  scribe.nethere.net.common&lt;br /&gt;
  vi prov.nethere.net&lt;br /&gt;
  vi scribe.nethere.net.common&lt;br /&gt;
  ci -u prov.nethere.net  scribe.nethere.net.common&lt;br /&gt;
&lt;br /&gt;
Reload the apache config&lt;br /&gt;
  apachectl restart&lt;br /&gt;
&lt;br /&gt;
Restart the syslog daemon&lt;br /&gt;
    /etc/rc.d/syslogd start&lt;br /&gt;
&lt;br /&gt;
==== Nagios ====&lt;br /&gt;
&lt;br /&gt;
From root@admin-1 connect to pike &lt;br /&gt;
  ssh nagios&lt;br /&gt;
&lt;br /&gt;
Add the new IP address to the apache config.&lt;br /&gt;
  cd /nethere/conf/apache/conf/vhosts&lt;br /&gt;
  co -l nagios.nethere.net cacti.nethere.net mrtg.nethere.net&lt;br /&gt;
  vi nagios.nethere.net cacti.nethere.net mrtg.nethere.net&lt;br /&gt;
  ci -u nagios.nethere.net cacti.nethere.net mrtg.nethere.net&lt;br /&gt;
&lt;br /&gt;
Reload the apache config&lt;br /&gt;
  apachectl restart&lt;br /&gt;
&lt;br /&gt;
=== Application Filter ===&lt;br /&gt;
In the applications, there is often a restriction in the host application&lt;br /&gt;
configuration file.&lt;br /&gt;
  vi /usr/local/etc/apache/conf/vhosts/&amp;lt;host&amp;gt;.conf&lt;br /&gt;
&lt;br /&gt;
Restart the application&lt;br /&gt;
  apachectl restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== NetHere/zNET/simplyweb Cert replacement ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you wish to reuse the old CSR, GlobalSign keeps the old&lt;br /&gt;
CSR, so all you have to replace is the cert.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;To generate a new csr for each&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
openssl req -nodes -newkey rsa:2048 -keyout wild.nethere.key   -out wild.nethere.csr&lt;br /&gt;
openssl req -nodes -newkey rsa:2048 -keyout wild.simplyweb.key -out wild.simplyweb.csr&lt;br /&gt;
openssl req -nodes -newkey rsa:2048 -keyout wild.znet.key      -out wild.znet.csr&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Get a wild card cert for each company&lt;br /&gt;
(nethere.net, simplyweb.net, znet.net)&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
  https://www.globalsign.com/en/&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Update certs, keys, and/or pem files on the following servers.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
webmail-1.mail (webmail for nh, si, and zn)          /nethere/conf/apache/pki/ssl.crt&lt;br /&gt;
prov-1.mail (cp for nh, si, and zn)                  /nethere/conf/apache/pki/ssl.crt&lt;br /&gt;
mta-1 and mta-2 (smtp)                       admin-2:/dist/files/nhmta/nethere/conf/postfix/nh/ssl&lt;br /&gt;
                                                     /nethere/conf/postfix/nh/ssl&lt;br /&gt;
                                                     /nethere/conf/postfix/si/ssl&lt;br /&gt;
                                                     /nethere/conf/postfix/zn/ssl&lt;br /&gt;
mailbox-1, mailbox-2, and mailbox-3 (pop3 and imap) (courier-imap)&lt;br /&gt;
                                                     /nethere/conf/courier-imap/nh&lt;br /&gt;
                                                     /nethere/conf/courier-imap/si&lt;br /&gt;
                                                     /nethere/conf/courier-imap/zn&lt;br /&gt;
              (smtp)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Provisioning ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Webservers ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
OS: FreeBSD 4.11-RELEASE-p10 #23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== General ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
unixweb-1 through 8&lt;br /&gt;
  &lt;br /&gt;
All have the same configuration except for the following exceptions:&lt;br /&gt;
&lt;br /&gt;
unixweb-4: ZN Front Page server (deprecated)&lt;br /&gt;
unixweb-5: ZN Front Page, Miva Merchant server&lt;br /&gt;
unixweb-6: SI shared counter server (for SI sites): /www/lucy.inetworld.net/htdocs/cgi-bin/Count.cgi &lt;br /&gt;
&lt;br /&gt;
Server Software Installed&lt;br /&gt;
&lt;br /&gt;
Apache&lt;br /&gt;
	# httpd -v&lt;br /&gt;
	Server version: Apache/2.0.55&lt;br /&gt;
	Server built:   Apr  5 2006 17:04:01&lt;br /&gt;
PHP&lt;br /&gt;
	# /usr/local/bin/php4 -v&lt;br /&gt;
	PHP 4.4.2 (cgi-fcgi) (built: Mar  2 2006 09:31:57)&lt;br /&gt;
&lt;br /&gt;
Miva Merchant (unixweb-5)&lt;br /&gt;
MySQL&lt;br /&gt;
	# mysqladmin version -u root -p&lt;br /&gt;
	Server version          4.0.25&lt;br /&gt;
&lt;br /&gt;
Apache configurations are in the following directories:&lt;br /&gt;
&lt;br /&gt;
/nethere/conf/apache/conf/nvhosts  (name based hosting); naming convention is www.domain.tld&lt;br /&gt;
/nethere/conf/apache/conf/vhosts (ip based,SSL); naming convention is www.domain.tld (IP information) and www.domain.tld.common (general site information)&lt;br /&gt;
&lt;br /&gt;
NetHere specific scripts are located in:&lt;br /&gt;
/nethere/sbin&lt;br /&gt;
&lt;br /&gt;
Checking Server/site status via mod_status apache module:&lt;br /&gt;
&lt;br /&gt;
# apachectl stop&lt;br /&gt;
# /usr/local/etc/rc.d/httpd.sh start-status&lt;br /&gt;
&lt;br /&gt;
URL: http://unixweb-#/status (replace # with the unixweb number)&lt;br /&gt;
** NOTE: After viewing status, do the following; otherwise other users can also view status by using .htaccess.&lt;br /&gt;
# apachectl stop&lt;br /&gt;
# apachectl startssl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Provisioning ====&lt;br /&gt;
Provisioning new sites:&lt;br /&gt;
&lt;br /&gt;
*** All website provisioning is done via NH script (webadd) on sawfish (admin-1.nethere.net) ***&lt;br /&gt;
# /nethere/sbin/webadd -h for usage&lt;br /&gt;
&lt;br /&gt;
Note: We&#039;ve discontinued new website provisioning on all servers except unixweb-6, unless otherwise noted for the domain&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== FTP Password Adjustment ====&lt;br /&gt;
&lt;br /&gt;
===== Unix Server =====&lt;br /&gt;
To change an FTP password:&lt;br /&gt;
&lt;br /&gt;
On the customer&#039;s server, as root, run&lt;br /&gt;
&lt;br /&gt;
 passwd &amp;lt;username&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(The password prompt will not echo entry)&lt;br /&gt;
&lt;br /&gt;
===== Windows Server =====&lt;br /&gt;
&lt;br /&gt;
To change an FTP password:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Find the user&#039;s login (e.g. aa5014) in the provisioning tool.&lt;br /&gt;
Use Remote Desktop to get into the server.&lt;br /&gt;
On Desktop find FTP application Icon.&lt;br /&gt;
Use that tool to update the user&#039;s password.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Domain Aliasing ====&lt;br /&gt;
&lt;br /&gt;
For domain aliasing:&lt;br /&gt;
&lt;br /&gt;
1) Edit the /named/named.master on phoenix (ns1.nethere.net)&lt;br /&gt;
  a) Checkout the file; open with vi&lt;br /&gt;
     # co -l named.master&lt;br /&gt;
     # vi named.master&lt;br /&gt;
  b) find the domain that&#039;s serving as the master domain, then, following the general format of the file, add the domain aliases under the master domain entry, using the master domain zone file&lt;br /&gt;
     i.e.  &lt;br /&gt;
     --cut--&lt;br /&gt;
     zone &amp;quot;domain.tld&amp;quot; { type master; file &amp;quot;master/domain.tld&amp;quot;; };&lt;br /&gt;
     zone &amp;quot;aliased_domain.tld&amp;quot; { type master; file &amp;quot;master/master_domain.tld&amp;quot;; }; &lt;br /&gt;
     --cut--&lt;br /&gt;
  c) save the file, then check-in:&lt;br /&gt;
     # ci -u named.master &amp;lt; /dev/null&lt;br /&gt;
  d) update the zone records, using the makefile in /named; check for errors&lt;br /&gt;
     # make new-zone&lt;br /&gt;
     # tail /var/log/named&lt;br /&gt;
  **Note: some domains are IP based virtually hosted (i.e. SSL certificate). This requires a modification to the standard aliasing procedure. If this is the case, do the following&lt;br /&gt;
  e) Create a new zone file called &amp;quot;domain.tld-alias&amp;quot; in /named/master, copy the current domain.tld zone file to the domain.tld-alias file, and adjust the &amp;quot;website&amp;quot; records, removing whatever IP-based information is there and replacing it with the appropriate $INCLUDE name-based host template.&lt;br /&gt;
     e.g.&lt;br /&gt;
     tropicalshade.net:&lt;br /&gt;
     --cut--&lt;br /&gt;
     ;; ntweb-4&lt;br /&gt;
     @               IN      A       66.63.136.4&lt;br /&gt;
     www             IN      A       66.63.136.4&lt;br /&gt;
     --cut--&lt;br /&gt;
     tropicalshade.net-alias&lt;br /&gt;
     --cut--&lt;br /&gt;
     ;; website&lt;br /&gt;
     $INCLUDE master/nvhost.ntweb-4&lt;br /&gt;
     --cut--&lt;br /&gt;
  f) Adjust the named.master zone file in /named accordingly:&lt;br /&gt;
     i.e.  &lt;br /&gt;
     --cut--&lt;br /&gt;
     zone &amp;quot;aliased_domain.tld&amp;quot; { type master; file &amp;quot;master/master_domain.tld-alias&amp;quot;; }; &lt;br /&gt;
     --cut--&lt;br /&gt;
&lt;br /&gt;
    &lt;br /&gt;
2) Edit the Apache configuration for the domain on the web server the master domain is hosted on&lt;br /&gt;
   a) Script is /nethere/sbin/webalias&lt;br /&gt;
      # /nethere/sbin/webalias -h (for usage)&lt;br /&gt;
      e.g. # /nethere/sbin/webalias -d www.originaldomain.tld -a www.domainalias.tld&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== CGI Scripting ====&lt;br /&gt;
&lt;br /&gt;
For security, we&#039;ve implemented suexec on all sites.&lt;br /&gt;
All CGI *must* be placed in the cgi-bin/ directory (and/or cgibin/ for unixweb-4, 5)&lt;br /&gt;
&lt;br /&gt;
CGI permissions break down as follows:&lt;br /&gt;
&lt;br /&gt;
Ownership: user=username, group=webuser&lt;br /&gt;
Permission: cgi-bin/ = 755; files = 755&lt;br /&gt;
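&lt;br /&gt;
To apply this ownership and permission scheme (generic commands; username is a placeholder for the site owner&#039;s login):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# set owner to the site user, group to webuser, per the scheme above&lt;br /&gt;
chown -R username:webuser cgi-bin/&lt;br /&gt;
# 755 on the directory and on each script&lt;br /&gt;
chmod 755 cgi-bin/ cgi-bin/*.cgi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;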
&lt;br /&gt;
suexec errors can be found here:&lt;br /&gt;
/www/default/logs/suexec_log&lt;br /&gt;
&lt;br /&gt;
cgi errors for a domain can be found in the main apache error logs for the domain:&lt;br /&gt;
/www/www.domain.tld/logs/error_log&lt;br /&gt;
&lt;br /&gt;
More information on CGI scripting can be found here:&lt;br /&gt;
http://httpd.apache.org/docs/1.3/howto/cgi.html&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Formmail Provisioning ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Formmail is provisioned via NH script (newformmail) on the server the domain is hosted on.&lt;br /&gt;
# /nethere/sbin/newformmail -h for usage&lt;br /&gt;
&lt;br /&gt;
Usage: newformmail [-h] [-d domain] [-i ip_address] [-e &amp;quot;email1 email2&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Disabling a site ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To disable/enable a website, there is a script (webstatus) located on each webserver:&lt;br /&gt;
/nethere/sbin/webstatus -h for usage&lt;br /&gt;
&lt;br /&gt;
1) Disable: &lt;br /&gt;
   # /nethere/sbin/webstatus -r -d -s www.domain.tld&lt;br /&gt;
&lt;br /&gt;
2) Enable:&lt;br /&gt;
   # /nethere/sbin/webstatus -r -e -s www.domain.tld&lt;br /&gt;
&lt;br /&gt;
==== Stats Provisioning ====&lt;br /&gt;
&lt;br /&gt;
We utilize webalizer for statistics.&lt;br /&gt;
&lt;br /&gt;
Stats are provisioned via NH scripts on the server the domain is hosted on.&lt;br /&gt;
&lt;br /&gt;
1) Provision stats for the domain (webaddstats_unix):&lt;br /&gt;
# /nethere/sbin/webaddstats_unix -d www.domain.tld&lt;br /&gt;
&lt;br /&gt;
2) Run stats for the domain (runstats_unix):&lt;br /&gt;
# /nethere/sbin/runstats_unix www.domain.tld&lt;br /&gt;
&lt;br /&gt;
More information on Webalizer can be found here:&lt;br /&gt;
http://www.mrunix.net/webalizer&lt;br /&gt;
README: ftp://ftp.mrunix.net/pub/webalizer/README&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Removing a site ====&lt;br /&gt;
&lt;br /&gt;
1) Site removal is primarily done via an NH script (webdel) on the webserver the domain is hosted on:&lt;br /&gt;
   # /nethere/sbin/webdel -h for usage&lt;br /&gt;
   # /nethere/sbin/webdel -r -d www.tldomain.com &lt;br /&gt;
&lt;br /&gt;
2) Update DNS accordingly (ns1.nethere.net)&lt;br /&gt;
   a) Checkout /named/named.master&lt;br /&gt;
      # co -l named.master&lt;br /&gt;
   b) Remove line containing domain&lt;br /&gt;
   c) Checkin /named/named.master&lt;br /&gt;
      # ci -u named.master &amp;lt; /dev/null&lt;br /&gt;
   d) Update zones&lt;br /&gt;
      # make new-zone&lt;br /&gt;
   e) Move zone files from /named/master to /named/archive&lt;br /&gt;
      # mv domain&lt;br /&gt;
&lt;br /&gt;
==== starter removal ==== &lt;br /&gt;
&lt;br /&gt;
All starter sites are hosted on home-1.nethere.net&lt;br /&gt;
&lt;br /&gt;
1) Removal is primarily done via an NH script (webdel_home) on home-1&lt;br /&gt;
   # /nethere/sbin/webdel_home -h for usage&lt;br /&gt;
   e.g. webdel_home -p zn -s startername  &amp;lt;== for removing a zNET starter&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Disk Quota increases ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Quotas are increased by using the &amp;quot;edquota&amp;quot; command:&lt;br /&gt;
# edquota username&lt;br /&gt;
Note that quotas are in KB, so a conversion is necessary. Also note that the &amp;quot;soft&amp;quot; quota is 1 MB less than the &amp;quot;hard&amp;quot; quota.&lt;br /&gt;
&lt;br /&gt;
The calculations are as follows:&lt;br /&gt;
&amp;quot;soft&amp;quot; =  ( quota(in MB) - 1 ) x 1024&lt;br /&gt;
&amp;quot;hard&amp;quot; =  quota(in MB) x 1024&lt;br /&gt;
e.g. for a 500 MB quota:&lt;br /&gt;
soft =&amp;gt; (500 - 1) x 1024 = 510976&lt;br /&gt;
hard =&amp;gt; 500 x 1024 = 512000&lt;br /&gt;
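&lt;br /&gt;
The arithmetic above can be sanity-checked with a small shell helper (hypothetical, not an NH script):&lt;br /&gt;

```shell
# Convert a quota in MB to the KB values edquota expects.
# "soft" is 1 MB less than "hard", per the convention above.
quota_soft_kb() { echo $(( ($1 - 1) * 1024 )); }
quota_hard_kb() { echo $(( $1 * 1024 )); }

quota_soft_kb 500   # 510976
quota_hard_kb 500   # 512000
```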
&lt;br /&gt;
To check a quota:&lt;br /&gt;
# quota -v username&lt;br /&gt;
&lt;br /&gt;
You can also use the prototype users:&lt;br /&gt;
&lt;br /&gt;
# edquota -p quota100 username  (100 MB)&lt;br /&gt;
# edquota -p quota500 username  (500 MB)&lt;br /&gt;
# edquota -p quota1000 username  (1000 MB)&lt;br /&gt;
&lt;br /&gt;
==== Granting of shell  - Enterprise packages ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Shell access is granted with the &amp;quot;chsh&amp;quot; command. By default, the C shell (csh) is used:&lt;br /&gt;
# chsh -s shell_needed username&lt;br /&gt;
e.g. # chsh -s csh username&lt;br /&gt;
&lt;br /&gt;
*Note: customers must provide us with the static IP address(es) they will be connecting from. The IP address(es) must be added to the configuration on admin-1.&lt;br /&gt;
1) Modify the corresponding rules files on admin-1:&lt;br /&gt;
/dist/files/nhweb/etc/ipf.rules.fxp0 , .em0 (RCS controlled)&lt;br /&gt;
2) Push updates to servers&lt;br /&gt;
# cd /dist/rdist&lt;br /&gt;
# gmake nhweb&lt;br /&gt;
# gmake nhweb-update&lt;br /&gt;
3) Reload the rules on the server that needs the access&lt;br /&gt;
# ipf -Fa -f /etc/ipf.rules&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== SSL certificate installs ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) Whois the domain for Registrant and email information, to be used for SSL generation&lt;br /&gt;
# whois domain.tld&lt;br /&gt;
&lt;br /&gt;
2) Create a self-signed SSL certificate, and get the CSR for the customer to have signed&lt;br /&gt;
  a) SSL generation is done by script: /nethere/conf/apache/pki/newsslcert.sh &lt;br /&gt;
  b) Copy CSR for customer&lt;br /&gt;
&lt;br /&gt;
3) Create/update Apache configuration files via NH script (webadd_ssl)&lt;br /&gt;
  # /nethere/sbin/webadd_ssl -h for usage&lt;br /&gt;
  a) webadd_ssl [-h] [-d domain] [-s ssl_domain] [-n]&lt;br /&gt;
  # webadd_ssl -d www.domain.tld -s www.domain.tld &lt;br /&gt;
 &lt;br /&gt;
4) Update DNS zone with new IP address - done on ns1.nethere.net&lt;br /&gt;
  a) Check out DNS zone file in /named/master&lt;br /&gt;
     # co -l domain.tld&lt;br /&gt;
  b) Adjust A records:&lt;br /&gt;
   --cut--&lt;br /&gt;
   ;; unixweb-##  &amp;lt;&amp;lt; enter the unixweb server number for ease of ID&lt;br /&gt;
   @		IN	A	vhost_ip_address&lt;br /&gt;
   www	IN	A	vhost_ip_address&lt;br /&gt;
   --cut--	&lt;br /&gt;
  c) Check in DNS zone file&lt;br /&gt;
     # ci -u domain.tld &amp;lt; /dev/null&lt;br /&gt;
  d) Reload zone file&lt;br /&gt;
     # rndc reload domain.tld&lt;br /&gt;
  e) check /var/log/named for errors&lt;br /&gt;
&lt;br /&gt;
5) Update DNS PTR record for IP address - done on ns1.nethere.net&lt;br /&gt;
  Note: Assuming IP address a.b.c.d&lt;br /&gt;
  a) Check out in-addr.arpa zone for IP address in /named/master&lt;br /&gt;
     # co -l a.b.c &lt;br /&gt;
  b) Follow format for PTR records&lt;br /&gt;
   --cut--&lt;br /&gt;
   d	IN	PTR	www.domain.tld.&lt;br /&gt;
   --cut--&lt;br /&gt;
  c) Check in PTR zone file&lt;br /&gt;
     # ci -u a.b.c &amp;lt; /dev/null&lt;br /&gt;
  d) Reload PTR zone &lt;br /&gt;
     # rndc reload c.b.a.in-addr.arpa&lt;br /&gt;
  e) check /var/log/named for errors&lt;br /&gt;
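&lt;br /&gt;
The reverse-zone naming used above (address a.b.c.d gets a PTR record labeled d inside zone c.b.a.in-addr.arpa) can be sketched as a hypothetical helper:&lt;br /&gt;

```shell
# Derive the in-addr.arpa zone name for an IPv4 address a.b.c.d.
# The PTR record itself lives in this zone under the label of the last octet.
rev_zone() {
  abc=${1%.*}          # a.b.c (drop the last octet)
  a=${abc%%.*}         # first octet
  c=${abc##*.}         # third octet
  b=${abc#*.}; b=${b%.*}   # middle octet
  echo "$c.$b.$a.in-addr.arpa"
}

rev_zone 66.63.152.234   # 152.63.66.in-addr.arpa
```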
&lt;br /&gt;
&lt;br /&gt;
==== Updating SSL certificate ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) cd to /nethere/conf/apache/pki/ssl.crt on the server the site is hosted on&lt;br /&gt;
   &lt;br /&gt;
2) Check out (RCS) www.domain.tld.crt file&lt;br /&gt;
   # co -l www.domain.tld.crt&lt;br /&gt;
&lt;br /&gt;
3) Edit the file, remove old certificate, paste in new certificate&lt;br /&gt;
&lt;br /&gt;
4) Check in (RCS) the www.domain.tld.crt file&lt;br /&gt;
   # ci -u www.domain.tld.crt &amp;lt; /dev/null&lt;br /&gt;
&lt;br /&gt;
5) Check, restart Apache&lt;br /&gt;
   # apachectl configtest&lt;br /&gt;
   # apachectl stop&lt;br /&gt;
   # apachectl startssl&lt;br /&gt;
&lt;br /&gt;
6) Verify httpd started: &lt;br /&gt;
   # ps auxw | grep httpd&lt;br /&gt;
  a) If there are no processes, revert to the old SSL certificate and restart Apache. Check the logs for errors&lt;br /&gt;
     # view /www/default/logs/ssl_engine.log&lt;br /&gt;
  b) Check for &amp;quot;Unable to configure RSA server private key&amp;quot; and &amp;quot;key values mismatch&amp;quot; entries - this means a bad SSL certificate&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Password protection ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
HTTP Basic (simple) password protection is governed by the Apache configuration for the domain&lt;br /&gt;
&lt;br /&gt;
1) Create userdb, users file in the domain root directory (/www/www.domain.tld):&lt;br /&gt;
   # mkdir userdb&lt;br /&gt;
   # cd userdb&lt;br /&gt;
   # htpasswd -bc users username password&lt;br /&gt;
&lt;br /&gt;
2) Check out Apache config for www.domain.tld in /nethere/conf/apache/conf/&amp;lt;vhosts,nvhosts&amp;gt;&lt;br /&gt;
   # co -l www.domain.tld(.common)  &lt;br /&gt;
&lt;br /&gt;
3) Edit Apache configuration, add the following lines within the VirtualHost container  &lt;br /&gt;
--cut--&lt;br /&gt;
&amp;lt;Directory &amp;quot;/www/www.domain.tld/dir_to_be_protected&amp;quot;&amp;gt;&lt;br /&gt;
    AuthType Basic&lt;br /&gt;
    AuthName &amp;quot;www.domain.tld/dir_to_be_protected authentication&amp;quot;&lt;br /&gt;
    AuthUserFile /www/www.domain.tld/userdb/users &lt;br /&gt;
    &amp;lt;Limit GET POST&amp;gt;&lt;br /&gt;
      require valid-user&lt;br /&gt;
    &amp;lt;/Limit&amp;gt;&lt;br /&gt;
  &amp;lt;/Directory&amp;gt;&lt;br /&gt;
--cut--&lt;br /&gt;
&lt;br /&gt;
4) Check in Apache config&lt;br /&gt;
   # ci -u www.domain.tld(.common) &amp;lt; /dev/null  &lt;br /&gt;
&lt;br /&gt;
5) Restart Apache&lt;br /&gt;
   # apachectl configtest&lt;br /&gt;
   # apachectl restart&lt;br /&gt;
&lt;br /&gt;
More on HTTP Basic (simple) password protection can be found here:&lt;br /&gt;
http://httpd.apache.org/docs/1.3/howto/auth.html#basic&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== .htaccess ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Used if customers want control of certain Apache directives (e.g. authentication)&lt;br /&gt;
&lt;br /&gt;
1) Check out Apache config for www.domain.tld in /nethere/conf/apache/conf/&amp;lt;vhosts,nvhosts&amp;gt;&lt;br /&gt;
   # co -l www.domain.tld(.common)  &lt;br /&gt;
&lt;br /&gt;
2) Add the AllowOverride directive in the &amp;lt;Directory&amp;gt; section, under the PHP FCGIWrapper&lt;br /&gt;
   i.e.&lt;br /&gt;
  &amp;lt;Directory &amp;quot;/www/www.domain.tld/htdocs&amp;quot;&amp;gt;&lt;br /&gt;
    FCGIWrapper /www/www.domain.tld/htdocs/cgi-bin/php4 .php&lt;br /&gt;
    AllowOverride AuthConfig FileInfo Indexes Limit&lt;br /&gt;
  &amp;lt;/Directory&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3) Check in Apache config&lt;br /&gt;
   # ci -u www.domain.tld(.common) &amp;lt; /dev/null  &lt;br /&gt;
&lt;br /&gt;
4) Restart Apache&lt;br /&gt;
   # apachectl configtest&lt;br /&gt;
   # apachectl restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More info on AllowOverride can be found here: http://httpd.apache.org/docs/1.3/mod/core.html#allowoverride&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Domain re-provisioning ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
## NOTE: The following is for a domain being re-provisioned on the same server (i.e. just being renamed)&lt;br /&gt;
&lt;br /&gt;
1) Adjust DNS - ns1.nethere.net&lt;br /&gt;
  a) Rename the DNS zone file to the new domain, remove the old DNS zone files from /named/master and /named/master/RCS&lt;br /&gt;
  b) Check in the new DNS zone file&lt;br /&gt;
  c) Edit /named/named.master - replace the old domain with the new one, alias as necessary&lt;br /&gt;
  d) Reload DNS zones - make new-zone in /named&lt;br /&gt;
&lt;br /&gt;
2) Adjust Apache config - server domain is hosted on&lt;br /&gt;
  a) Find current config file(s) in /nethere/conf/apache/conf/&amp;lt;vhosts,nvhosts&amp;gt;&lt;br /&gt;
  b) Replace old domain name entries with new domain name, alias as necessary&lt;br /&gt;
  c) Save as new_domain.tld&lt;br /&gt;
  d) Remove old_domain.tld(.common), RCS/old_domain.tld(.common)&lt;br /&gt;
  e) Check in new_domain.tld via RCS:&lt;br /&gt;
	# ci -u new_domain.tld &amp;lt; /dev/null&lt;br /&gt;
  f) Check out Apache include configuration file via RCS /nethere/conf/apache/conf/&amp;lt;nvhosts.conf,vhosts.conf&amp;gt; &lt;br /&gt;
  g) Edit nvhosts.conf or vhosts.conf, replace old_domain.tld entries with new_domain.tld&lt;br /&gt;
  h) Check in via RCS /nethere/conf/apache/conf/&amp;lt;nvhosts.conf,vhosts.conf&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
3) Rename directory for new domain&lt;br /&gt;
   # cd /www&lt;br /&gt;
   # mv www.old_domain.tld www.new_domain.tld&lt;br /&gt;
&lt;br /&gt;
4) Adjust PHP stub files/configuration&lt;br /&gt;
  a) Adjust php.ini file in /www/www.domain.tld/(php4,php5)&lt;br /&gt;
  b) Adjust PHP stub files in /www/www.domain.tld/htdocs/cgi-bin/(php4,php5) &lt;br /&gt;
  *note: you must chflags &amp;quot;noschg&amp;quot; on /www/www.domain.tld/htdocs/cgi-bin/(php4,php5) before the stub files can be updated;&lt;br /&gt;
 	 after adjusting, be sure to chflags schg /www/www.domain.tld/htdocs/cgi-bin/(php4,php5)&lt;br /&gt;
&lt;br /&gt;
5) Edit password file (vipw), replace old_domain.tld entries with new_domain.tld&lt;br /&gt;
   # vipw&lt;br /&gt;
   &lt;br /&gt;
6) Restart Apache&lt;br /&gt;
   # apachectl configtest&lt;br /&gt;
   # apachectl restart&lt;br /&gt;
&lt;br /&gt;
## NOTE: The following is for a domain being re-provisioned on a new server&lt;br /&gt;
&lt;br /&gt;
1) Run NH script &amp;quot;webadd&amp;quot; on sawfish to provision domain on new server, *DO NOT* reload DNS, use same user/pass as before.&lt;br /&gt;
2) After customer has uploaded site to new server and gives the ok do the following:&lt;br /&gt;
  a) Update DNS zone for domain.tld; reload zone for domain.tld&lt;br /&gt;
  b) Wait 48 hours, then remove the site off the old server&lt;br /&gt;
     # /nethere/sbin/webdel&lt;br /&gt;
&lt;br /&gt;
==== FTP space provisioning ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Generally FTP sites are provisioned on the server that hosts the main website. In the case of NT-based FTP sites, we usually provision them on the server with the most space available.&lt;br /&gt;
&lt;br /&gt;
1) Provision site on server&lt;br /&gt;
  a) Done via NH script (webadd_ftp): webadd_ftp [-h] [-d domain] [-u username] [-p password] [-n]&lt;br /&gt;
     # /nethere/sbin/webadd_ftp -h  &amp;lt;- for usage&lt;br /&gt;
     EX: for ftp.domain.tld:&lt;br /&gt;
     # /nethere/sbin/webadd_ftp -d ftp.domain.tld -u username -p password&lt;br /&gt;
     (leaving off the -n will restart the proftpd process)&lt;br /&gt;
  b) Note: The host IP address will be given when provisioning is completed; use that IP (a.b.c.d) for the DNS entries&lt;br /&gt;
  c) Note: If this is an existing customer on the server, you&#039;ll need to increase the quota manually by 100 MB for the customer, see the section on quota increases for more info.&lt;br /&gt;
&lt;br /&gt;
2) Adjust DNS for domain on ns1.nethere.net&lt;br /&gt;
  a) Checkout zone for domain&lt;br /&gt;
  b) Add ftp host entry for domain, adjust serial&lt;br /&gt;
     EX for domain.tld:&lt;br /&gt;
--cut--&lt;br /&gt;
ftp	IN	A	a.b.c.d&lt;br /&gt;
--cut--&lt;br /&gt;
  c) Check in zone for domain&lt;br /&gt;
  d) Reload zone&lt;br /&gt;
     # rndc reload domain.tld&lt;br /&gt;
3) Adjust PTR record for domain, adjust serial&lt;br /&gt;
  a) Checkout zone for a.b.c&lt;br /&gt;
  b) Add record for domain:&lt;br /&gt;
--cut--&lt;br /&gt;
d	IN	PTR 	ftp.domain.tld.&lt;br /&gt;
--cut--&lt;br /&gt;
  c) Check in zone for a.b.c&lt;br /&gt;
  d) Reload zone&lt;br /&gt;
     # rndc reload c.b.a.in-addr.arpa&lt;br /&gt;
4) Check for DNS errors&lt;br /&gt;
  a) tail /var/log/named&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== SiteBuilder provisioning ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unixweb-7.nethere.net&lt;br /&gt;
&lt;br /&gt;
*Must re-provision site on unixweb-7.nethere.net (if not already done)&lt;br /&gt;
&lt;br /&gt;
1) Log in to SB admin:&lt;br /&gt;
   http://sitebuilder.nethere.net/admin&lt;br /&gt;
  a) username: root&lt;br /&gt;
&lt;br /&gt;
2) Add site to SB config&lt;br /&gt;
  a) Click on Site Management -&amp;gt; Add regular&lt;br /&gt;
    * Alias is website username: e.g. aa####&lt;br /&gt;
    * Check the &amp;quot;Active&amp;quot; box  &lt;br /&gt;
    * Plan is &amp;quot;BasePlan&amp;quot;  &lt;br /&gt;
    * Password same as website&lt;br /&gt;
  b) Click on &amp;quot;Publish Properties&amp;quot;&lt;br /&gt;
    * Check &amp;quot;Allow publishing&amp;quot;&lt;br /&gt;
    * Site host name: www.domain.tld&lt;br /&gt;
    * FTP host: unixweb-7.nethere.net&lt;br /&gt;
    * FTP login/password: same as site user/pass&lt;br /&gt;
    * FTP working directory: leave blank&lt;br /&gt;
  c) Click Apply&lt;br /&gt;
&lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Name Servers ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
OS: FreeBSD&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== General ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
ns1.nethere.net - Primary name server for DNS zone records&lt;br /&gt;
 /named - contains the files that hold all DNS domain zone entries (named.master, named.slave, named.acl), plus the Makefile for distributing DNS records&lt;br /&gt;
 /named/master - contains all the domain zone files we are authoritative for, as well as the IP address (PTR record) zones&lt;br /&gt;
&lt;br /&gt;
ns2.nethere.net - Secondary (slave) name server for DNS zone records&lt;br /&gt;
&lt;br /&gt;
nsrbl-1.nethere.net - RBL (Realtime Blackhole List) DNS server&lt;br /&gt;
 /named/rbldns/cache - contains the files for domains that we specifically allow or deny&lt;br /&gt;
&lt;br /&gt;
nscache-1,2 - caching name servers&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Adding DNS website entries ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) Create a DNS zone file for domain.tld via NH script (zoneadd_vhost)&lt;br /&gt;
   # /nethere/sbin/zoneadd_vhost -h  (for usage)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Adding DNS IP entries ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) Create the forward and reverse DNS records for a netblock via NH script (zone_generate); this will create /tmp/customer.forward and /tmp/customer.reverse files to be read in&lt;br /&gt;
  a) /nethere/sbin/zone_generate -h for usage&lt;br /&gt;
    i.e. for netblock a.b.c, starting IP d, ending IP z&lt;br /&gt;
     # zone_generate -n a.b.c -b d -e z -p customer&lt;br /&gt;
&lt;br /&gt;
2) Checkout, edit the IP in-addr.arpa zone file, reload the zone&lt;br /&gt;
  a) # co -l a.b.c&lt;br /&gt;
  b) Search for the nearest netblock area for the domain, follow the format for the customer info, read in the /tmp/abbrev.rdns file accordingly, increase the Serial for the zone in YYYYMMDD## format&lt;br /&gt;
     e.g.&lt;br /&gt;
--cut--&lt;br /&gt;
;;;;&lt;br /&gt;
;; 66.63.152.232/30 (255.255.255.252)&lt;br /&gt;
;; Description: First Choice Home Improvement&lt;br /&gt;
;; Contact: Shannon Hill &amp;lt;firstchoicehi@hotmail.com&amp;gt;, (858) 277-5351&lt;br /&gt;
;; Location: AR-1, Serial3/0/18:0&lt;br /&gt;
;;;;&lt;br /&gt;
232     IN      PTR     firstchoice-net.access.nethere.net.&lt;br /&gt;
233     IN      PTR     firstchoice-gw.access.nethere.net.&lt;br /&gt;
234     IN      PTR     firstchoice-2.access.nethere.net.&lt;br /&gt;
235     IN      PTR     firstchoice-bcast.access.nethere.net.&lt;br /&gt;
--cut--&lt;br /&gt;
  c) # ci -u a.b.c &amp;lt; /dev/null&lt;br /&gt;
  d) # rndc reload c.b.a.in-addr.arpa&lt;br /&gt;
  e) verify loading of zone: # tail /var/log/named&lt;br /&gt;
3) Checkout, edit the forward DNS zone file, reload the zone&lt;br /&gt;
  a) # co -l access.nethere.net&lt;br /&gt;
  b) Search for the nearest netblock area for the domain, follow the format for the customer, read in the /tmp/abbrev.fdns file accordingly, increase the Serial for the zone in YYYYMMDD## format&lt;br /&gt;
     e.g.&lt;br /&gt;
--cut--&lt;br /&gt;
;; 66.63.152.232/30 (255.255.255.252)&lt;br /&gt;
firstchoice-net 	IN      A       66.63.152.232&lt;br /&gt;
firstchoice-gw  	IN      A       66.63.152.233&lt;br /&gt;
firstchoice-2   	IN      A       66.63.152.234&lt;br /&gt;
firstchoice-bcast IN      A       66.63.152.235&lt;br /&gt;
--cut--&lt;br /&gt;
  c) # ci -u access.nethere.net &amp;lt; /dev/null&lt;br /&gt;
  d) # rndc reload access.nethere.net &lt;br /&gt;
  e) verify loading of zone: # tail /var/log/named&lt;br /&gt;
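&lt;br /&gt;
The YYYYMMDD## serial convention used in both steps can be sketched as a hypothetical helper (today&#039;s date is passed in as YYYYMMDD):&lt;br /&gt;

```shell
# Bump a zone serial in YYYYMMDD## format: if the serial is already from
# today, increment the two-digit counter; otherwise start today at 00.
next_serial() {
  cur=$1; today=$2
  case $cur in
    "$today"??)                       # serial already from today
      n=${cur#"$today"}               # the ## counter
      n=${n#0}                        # drop a leading zero (avoids octal)
      printf '%s%02d\n' "$today" $(( n + 1 ));;
    *)
      printf '%s00\n' "$today";;
  esac
}

next_serial 2004110902 20041110   # 2004111000
next_serial 2004111000 20041110   # 2004111001
```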
&lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Unblocking RBLd IP addresses ====&lt;br /&gt;
&lt;br /&gt;
There are two primary reasons why we add customers to the allow relay list:&lt;br /&gt;
a) They have a static IP address (i.e. DSL, T1) and wish to use our mail servers&lt;br /&gt;
b) They&#039;ve been blocked by one of our subscribed blackhole lists but have since patched their machine and are no longer an open relay.&lt;br /&gt;
&lt;br /&gt;
To do this on nsrbl-1.nethere.net:&lt;br /&gt;
&lt;br /&gt;
1) Checkout the allow.relays.nethere.net file located in /named/rbldns/cache&lt;br /&gt;
&lt;br /&gt;
2) Edit the file, and add the IP address in the following format:&lt;br /&gt;
--cut--&lt;br /&gt;
a.b.c.d	YYYYMMDD	hostname reason for listing&lt;br /&gt;
--cut--&lt;br /&gt;
&lt;br /&gt;
You can also add subnets via &#039;/&#039; (CIDR) notation for relay:&lt;br /&gt;
--cut--&lt;br /&gt;
a.b.c.d/28	YYYYMMDD	hostname reason for listing&lt;br /&gt;
--cut--&lt;br /&gt;
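&lt;br /&gt;
A minimal sketch of emitting a correctly tab-separated entry in the format above (the hostname and reason values are placeholders):&lt;br /&gt;

```shell
# Emit one allow-list line: IP (or CIDR), YYYYMMDD date, then hostname/reason.
allow_entry() { printf '%s\t%s\t%s\n' "$1" "$2" "$3"; }

allow_entry 192.0.2.10 20041110 "mail.example.tld customer static IP"
```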
&lt;br /&gt;
3) Check in the file&lt;br /&gt;
&lt;br /&gt;
4) Updates to the rbldns zone are done automatically on the hour, every hour, so no need to do anything else. &lt;br /&gt;
&lt;br /&gt;
Note:&lt;br /&gt;
&lt;br /&gt;
For &amp;quot;permanent&amp;quot; (i.e. customer static IP addresses), add the IP address in the &amp;quot;## permanent allowed relay (i.e. customer w/ static IP)&amp;quot; section&lt;br /&gt;
For &amp;quot;temporary&amp;quot; (i.e. blackhole listed IP addresses), add the IP address in the &amp;quot;## temporary&amp;quot; section&lt;br /&gt;
&lt;br /&gt;
Also, for temporary IP addresses, we need to send a note to the requester detailing the following:&lt;br /&gt;
a) If the IP address relays Spam/UCE/Viruses through us, it is to be removed permanently.&lt;br /&gt;
b) The customer must follow the steps listed on the blackhole list the IP was listed on to get removed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Flushing DNS cache for a domain ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usually due to a bad zone or an excessively long TTL, the cache for a domain will need to be flushed.&lt;br /&gt;
&lt;br /&gt;
On nscache-1:&lt;br /&gt;
# /usr/local/sbin/rndc flushname domain.tld&lt;br /&gt;
&lt;br /&gt;
If the cache is still corrupted (i.e. zone lookups fail or return incorrect results), stop and restart the caching server:&lt;br /&gt;
&lt;br /&gt;
# /etc/init.d/local.named stop&lt;br /&gt;
# /etc/init.d/local.named start&lt;br /&gt;
&lt;br /&gt;
==== SiteBuilder ====&lt;br /&gt;
http://sitebuilder.nethere.net/&lt;br /&gt;
- URL used to test SiteBuilder&lt;br /&gt;
&lt;br /&gt;
http://sitebuilder.nethere.net/Login&lt;br /&gt;
- URL for the control panel.&lt;br /&gt;
This is where the customer logs in to manage their web site, and also where you log in&lt;br /&gt;
to administer SiteBuilder. SiteBuilder is hosted on sb-2.nethere.net. The site is&lt;br /&gt;
designed/built on this server and published to unixweb-7.nethere.net, the only server&lt;br /&gt;
that can host a SiteBuilder web site. The admin login to manage&lt;br /&gt;
SiteBuilder is:&lt;br /&gt;
&lt;br /&gt;
Username:  admin&lt;br /&gt;
Password:  N3tH3r31!&lt;br /&gt;
&lt;br /&gt;
==== Cart32 ====&lt;br /&gt;
cart32 information:&lt;br /&gt;
&lt;br /&gt;
Cart32 is hosted on ntweb-6.nethere.net.  That is where all of the&lt;br /&gt;
configuration files are located.  The ini file is located at&lt;br /&gt;
D:\websites\Cart32cgi\cart32.ini.  This is where IP restrictions for the admin&lt;br /&gt;
panel are set, the admin password can be reset, the time limit restriction&lt;br /&gt;
reset, etc.&lt;br /&gt;
&lt;br /&gt;
There are four customers that still use Cart32:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
https://www.cart.simplyweb.net/lab400/cart/c32web.exe&lt;br /&gt;
https://www.cart.simplyweb.net/nutragenics/cart/c32web.exe&lt;br /&gt;
https://www.cart.simplyweb.net/retrogen/cart/c32web.exe&lt;br /&gt;
https://www.cart.simplyweb.net/stonesculptorssupplies/cart/c32web.exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The client codes are lab400, nutragenics, retrogen, and&lt;br /&gt;
stonesculptorssupplies.  You can reset their passwords through the admin&lt;br /&gt;
panel.&lt;br /&gt;
&lt;br /&gt;
To administer Cart32:&lt;br /&gt;
&lt;br /&gt;
Control Panel:  https://www.cart.simplyweb.net/cart/c32web.exe/Admin&lt;br /&gt;
Username:  administrator&lt;br /&gt;
Password:  N3tH3r31!&lt;br /&gt;
Cart Admin Password:  N3tH3r31!&lt;br /&gt;
&lt;br /&gt;
===  Webmail  ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Removing webmail filters ====&lt;br /&gt;
&lt;br /&gt;
All webmail filters are stored in the MySQL database on webmail-1.&lt;br /&gt;
&lt;br /&gt;
Simple SQL commands:&lt;br /&gt;
&lt;br /&gt;
* Choosing a DB&lt;br /&gt;
mysql&amp;gt; use db_name;&lt;br /&gt;
&lt;br /&gt;
* Showing tables in a DB (must be using a DB)&lt;br /&gt;
mysql&amp;gt; show tables;&lt;br /&gt;
&lt;br /&gt;
* Describing a table&#039;s fields&lt;br /&gt;
mysql&amp;gt; describe table_name;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For NetHere, the database is horde_nh.&lt;br /&gt;
For Simply, the database is horde_si.&lt;br /&gt;
For zNET, the database is horde_zn.&lt;br /&gt;
&lt;br /&gt;
1) Log in to MySQL as root, using the normal root password&lt;br /&gt;
  a) # mysql -u root -p&lt;br /&gt;
2) Select the appropriate database&lt;br /&gt;
  a) mysql&amp;gt; use horde_nh&lt;br /&gt;
3) Delete webmail filters.&lt;br /&gt;
  a) mysql&amp;gt; delete from horde_prefs where pref_uid=&#039;&amp;lt;username&amp;gt;&#039; and pref_name=&#039;filters&#039;;&lt;br /&gt;
     Replace &amp;lt;username&amp;gt; with the user&#039;s login.&lt;br /&gt;
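&lt;br /&gt;
The three steps above can be collapsed into one non-interactive call. A sketch, where jdoe is a hypothetical login (the mysql invocation itself is shown as a comment, since it prompts for the root password):&lt;br /&gt;

```shell
# Build the DELETE statement used in step 3 for a given login.
user=jdoe   # hypothetical login
sql="DELETE FROM horde_prefs WHERE pref_uid='$user' AND pref_name='filters';"
echo "$sql"
# Then run it against the appropriate database, e.g.:
#   mysql -u root -p horde_nh -e "$sql"
```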
&lt;br /&gt;
&lt;br /&gt;
==== Removing Address book entries ====&lt;br /&gt;
&lt;br /&gt;
1) Log in&lt;br /&gt;
2) Select appropriate db&lt;br /&gt;
3) Find object_id, delete&lt;br /&gt;
   a) mysql&amp;gt; select object_id from turba_objects where object_email=&#039;email@domain.tld&#039;;&lt;br /&gt;
   b) mysql&amp;gt; delete from turba_objects where object_id=&#039;object_id_obtained_previously&#039;;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Repairing webmail address book ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) Export (via webmail, if possible) the address book to a csv file.&lt;br /&gt;
&lt;br /&gt;
2) Delete the customer&#039;s Turba objects from the turba_objects table in the MySQL database.&lt;br /&gt;
  mysql&amp;gt; delete from turba_objects where owner_id=&#039;username&#039;;&lt;br /&gt;
&lt;br /&gt;
3) Import the saved address book via webmail.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Changing webmail From Information ====&lt;br /&gt;
&lt;br /&gt;
1) Log in as the customer (http://webmail.nethere.net)&lt;br /&gt;
&lt;br /&gt;
2) Click on Options&lt;br /&gt;
&lt;br /&gt;
3) Click on Personal Information&lt;br /&gt;
&lt;br /&gt;
4) Select either default identity or a new one and click on Edit Your Identities&lt;br /&gt;
&lt;br /&gt;
=== MySQL     ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All unixweb-## boxes have MySQL processes running on them. DB provisioning is done on the same server that hosts the domain.&lt;br /&gt;
For Windows 2000 MySQL provisioning, all DBs are placed on ntdb-2.nethere.net&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== MySQL DB provisioning ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Use https://phpmyadmin.nethere.net &lt;br /&gt;
User: root&lt;br /&gt;
PW: &amp;lt;root db pw&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1) Select the server that the DB needs to be hosted on, typically the same server that the website is hosted on.&lt;br /&gt;
2) Create the DB&lt;br /&gt;
  a) Under &amp;quot;MySQL&amp;quot; - &amp;quot;Create new database&amp;quot; - the DB name is typically the website username; click &amp;quot;Create&amp;quot;&lt;br /&gt;
3) Create the User, set permissions for user on DB&lt;br /&gt;
  a) Click &amp;quot;Home&amp;quot; (upper left), then click &amp;quot;Privileges&amp;quot; (Under &amp;quot;MySQL&amp;quot;)&lt;br /&gt;
  b) Click &amp;quot;Add a new User&amp;quot; make sure &amp;quot;Any host&amp;quot; selected, &amp;quot;User name&amp;quot; is typically the website username, &amp;quot;Password&amp;quot; is typically the website password. !LEAVE &amp;quot;Global Privileges&amp;quot; settings deselected! Click &amp;quot;Go&amp;quot; (bottom right corner) when finished.&lt;br /&gt;
  c) Under &amp;quot;Database-specific privileges&amp;quot; be sure to add the user database&lt;br /&gt;
  d) Select DB, then for &amp;quot;Privileges:&amp;quot; make sure the following are checked:&lt;br /&gt;
     Select, Insert, Update, Delete, Create, Alter, Index, Drop, Create Temporary Tables, Lock Tables, References&lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
=== List Server    ===&lt;br /&gt;
&lt;br /&gt;
lists-1.nethere.net&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Mail List provisioning ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Go to the /mailman/bin directory:&lt;br /&gt;
	# cd /mailman/bin&lt;br /&gt;
&lt;br /&gt;
2. Create the list on lists-1:&lt;br /&gt;
	# ./newlist -q \&lt;br /&gt;
		listname@lists.domain.com \&lt;br /&gt;
		mailman-owner@lists.domain.com \&lt;br /&gt;
		password &lt;br /&gt;
&lt;br /&gt;
3. Configure the list using default settings:&lt;br /&gt;
	# ./config_list -i /mailman/data/defaultlist.cfg listname&lt;br /&gt;
&lt;br /&gt;
*** IF THIS IS A NEW DOMAIN, FOLLOW THE INSTRUCTIONS BELOW ***&lt;br /&gt;
&lt;br /&gt;
NOTE: All files in /etc/mail are controlled by RCS&lt;br /&gt;
&lt;br /&gt;
4. Go to the /etc/mail directory:&lt;br /&gt;
	# cd /etc/mail&lt;br /&gt;
&lt;br /&gt;
5. Add the list domain to the mailertable file:&lt;br /&gt;
	lists.domain.com	mailman:lists.domain.com&lt;br /&gt;
&lt;br /&gt;
6. Add the list domain to the relay-domains file.&lt;br /&gt;
&lt;br /&gt;
7. Add the list domain to the virtuserdomain file.&lt;br /&gt;
&lt;br /&gt;
8. Add the following entries to the virtusertable file:&lt;br /&gt;
	mailman@lists.domain.com	&amp;lt;customer_email_address&amp;gt;&lt;br /&gt;
	mailman-owner@lists.domain.com	mailman@lists.domain.com&lt;br /&gt;
&lt;br /&gt;
9. Run make to rebuild all files:&lt;br /&gt;
	# make&lt;br /&gt;
&lt;br /&gt;
10. Run make to restart sendmail:&lt;br /&gt;
	# make restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Backup Server    ===&lt;br /&gt;
&lt;br /&gt;
backup-1 -&amp;gt; odd-numbered unixweb servers&lt;br /&gt;
backup-2 -&amp;gt; even-numbered unixweb servers&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On Saturdays, we do a level 0 dump;&lt;br /&gt;
the rest of the week&#039;s dumps are differential.&lt;br /&gt;
&lt;br /&gt;
Restoration fees are roughly $25 per restored day for email, $150 for a website from the cancelled archive, and $25 per day for a website.&lt;br /&gt;
&lt;br /&gt;
Restoration is based on customer request. In order to restore a site/mailbox fully, you must start restoration from the previous level 0 backup and continue until the day after the day requested, since backups are performed in the morning.&lt;br /&gt;
&lt;br /&gt;
The general rule is to leave the gzipped file available for 24 hours; after that, remove it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Site Restoral ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the site was removed by the webdel script, then, as long as it is within 30 days of removal, the gzipped site will be available at /www/archive/www.domain.tld-date_removed.tar.gz&lt;br /&gt;
Extract with tar:&lt;br /&gt;
# tar -zxvpf www.domain.tld-date_removed.tar.gz&lt;br /&gt;
&lt;br /&gt;
If the site needs to be restored from backup, here are the steps:&lt;br /&gt;
&lt;br /&gt;
1) To determine where backups are stored, go to /nethere/conf/backup&lt;br /&gt;
&lt;br /&gt;
2) grep for the server within the directory to determine the backup directory used&lt;br /&gt;
e.g. &lt;br /&gt;
# grep unixweb-1 *&lt;br /&gt;
&lt;br /&gt;
3) Go to /backup1/unix/unixweb-1.nethere.net&lt;br /&gt;
  a) you&#039;ll see a number of directories, named &amp;lt;date&amp;gt;-&amp;lt;dump_level&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4) Since the backups happen in the morning, you&#039;ll typically need to go to the day before and restore from there&lt;br /&gt;
   e.g. for 11/10&lt;br /&gt;
  a) cd 20041109-3&lt;br /&gt;
  b) run restore -if on the file named for the filesystem&lt;br /&gt;
     (for interactive: add files/dirs, extract, then use 1 for volume #)&lt;br /&gt;
     e.g.&lt;br /&gt;
     restore&amp;gt; add &amp;lt;path_to_dir/files&amp;gt;&lt;br /&gt;
     restore&amp;gt; extract&lt;br /&gt;
     (for volume #: 1)&lt;br /&gt;
     (Set owner: y)&lt;br /&gt;
&lt;br /&gt;
5) After you&#039;ve restored the files, use scp as root on sawfish to copy them off of backup-# and then onto unixweb-#&lt;br /&gt;
  a) (on sawfish - two step process)&lt;br /&gt;
     # scp backup-1:/tmp/&amp;lt;restored_file&amp;gt; /tmp&lt;br /&gt;
     # scp /tmp/&amp;lt;restored_files&amp;gt; unixweb-1:/tmp&lt;br /&gt;
&lt;br /&gt;
6) Clean up any restored files off of the backup-# server and sawfish&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With the newly restored files, following are the steps necessary to restore a site:&lt;br /&gt;
&lt;br /&gt;
1) Restore www.domain.tld directory to /www&lt;br /&gt;
   # mv /www/archive/www.domain.tld /www&lt;br /&gt;
&lt;br /&gt;
2) Restore configuration files&lt;br /&gt;
  a) Apache: mv /www/archive/nethere/conf/apache/(n)vhosts/www.domain.tld /nethere/conf/apache/conf/(n)vhosts/&lt;br /&gt;
  b) Webalizer: mv /www/archive/nethere/conf/webalizer/unix/www.domain.tld /nethere/conf/webalizer/unix&lt;br /&gt;
&lt;br /&gt;
3) Add user to /etc/master.passwd:&lt;br /&gt;
  a) vipw&lt;br /&gt;
     (go to end of file, read in the master.passwd file from /www/www.domain.tld)&lt;br /&gt;
     :r /www/www.domain.tld/master.passwd&lt;br /&gt;
  b) Remove /www/www.domain.tld/master.passwd&lt;br /&gt;
&lt;br /&gt;
4) Change flags on cgi-bin/ directories:&lt;br /&gt;
   # chflags sunlnk /www/www.domain.tld/htdocs/cgi-bin/&lt;br /&gt;
   # chflags schg /www/www.domain.tld/htdocs/cgi-bin/php*&lt;br /&gt;
&lt;br /&gt;
5) Add configuration file entry to nvhosts/vhosts.conf&lt;br /&gt;
   # co -l (n)vhosts.conf&lt;br /&gt;
   # vi (n)vhosts.conf&lt;br /&gt;
   # ci -u (n)vhosts.conf&lt;br /&gt;
&lt;br /&gt;
6) Check apache, restart; verify apache&lt;br /&gt;
   # apachectl configtest&lt;br /&gt;
   # apachectl graceful&lt;br /&gt;
   # &lt;br /&gt;
&lt;br /&gt;
7) Cleanup remaining restored files&lt;br /&gt;
   # rm -r /www/archive/www&lt;br /&gt;
   # rm -r /www/archive/nethere&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8) Enter zone into DNS on ns1.nethere.net&lt;br /&gt;
  a) zone file&lt;br /&gt;
     # mv /named/archive/domain.tld /named/master&lt;br /&gt;
     # mv /named/archive/domain.tld,v /named/master/RCS&lt;br /&gt;
  b) named.master file &lt;br /&gt;
     # co -l /named/named.master&lt;br /&gt;
     (add zone)&lt;br /&gt;
     # ci -u /named/named.master&lt;br /&gt;
     # cd /named; make new-zone&lt;br /&gt;
     # tail /var/log/named   (look for errors)&lt;br /&gt;
*************************************************&lt;br /&gt;
Note: a version of restore compatible with the 4.4BSD dump format is now available. To restore, use: restore -c -i -f path_to_archive (the &amp;quot;-c&amp;quot; flag is what enables it). This only applies to backup-1-new; it does not work on backup-2 or backup-4.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
==== Email Restoral ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Storage path prefixes:&lt;br /&gt;
	/nfs/1 is sndg-netapp-1 (on backup-4 - /backup/hosts/2/e0.sndg-netapp-1.nethere.net)&lt;br /&gt;
	/nfs/2 is sndg-netapp-2 (on backup-3 - /backup/hosts/2/e0.sndg-netapp-2.nethere.net)&lt;br /&gt;
	/nfs/3 is sndg-netapp-3 (on backup-2 - /backup/hosts/2/e0.sndg-netapp-3.nethere.net)&lt;br /&gt;
	/nfs/4 is sndg-netapp-1 (on backup-2 - /backup/hosts/2/e0.sndg-netapp-1.nethere.net)&lt;br /&gt;
	/nfs/5 is sndg-netapp-2 (on backup-1 - /backup/hosts/2/e0.sndg-netapp-2.nethere.net)&lt;br /&gt;
	/nfs/6 is sndg-netapp-3 (on backup-2 - /backup/hosts/2/e0.sndg-netapp-3.nethere.net)&lt;br /&gt;
&lt;br /&gt;
Looking up a mailbox&#039;s Storage Path:&lt;br /&gt;
	You will need to know which directory and NFS server the customer&#039;s mail is &lt;br /&gt;
stored on. Use the Provisioning Tool to get the Storage Path (find the customer&#039;s mail&lt;br /&gt;
account, then click on the Engineering sub-tab under the Email tab). You should end up&lt;br /&gt;
with something like &amp;quot;/nfs/3/nh/h/t/htchan/Maildir&amp;quot;. Match the prefix of the storage&lt;br /&gt;
directory with an NFS server (see prefixes above).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) If restoring a recently deleted mail account:&lt;br /&gt;
	a) Look for the archive on mailbox-1 in /nfs/archive/mail/{platform}/{username}.{date}-{PID}.tar.gz. &lt;br /&gt;
           If it doesn&#039;t exist, it&#039;s been too long and the only way to get email back is to restore from backup.&lt;br /&gt;
	b) Make sure the account is re-created in Provisioning Tool and look up the storage path &lt;br /&gt;
           (see &amp;quot;Looking up a mailbox&#039;s Storage Path&amp;quot; above).&lt;br /&gt;
	c) Extract the archive to a temporary directory:&lt;br /&gt;
		Sample command:&lt;br /&gt;
			cd /tmp; tar xzvpf /nfs/archive/mail/nh/zella.20090902-7508.tar.gz&lt;br /&gt;
		Sample output:&lt;br /&gt;
			nfs/2/nh/z/e/zella/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/tmp/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/new/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/cur/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/maildirsize&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Trash/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Trash/tmp/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Trash/new/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Trash/cur/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Trash/maildirfolder&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Drafts/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Drafts/tmp/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Drafts/new/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Drafts/cur/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Drafts/maildirfolder&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Sent Items/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Sent Items/tmp/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Sent Items/new/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Sent Items/cur/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Sent Items/maildirfolder&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/courierpop3dsizelist&lt;br /&gt;
		Note:&lt;br /&gt;
			In the above output, notice that the Maildir is located in &amp;quot;nfs/2/nh/z/e/zella&amp;quot;; this path is used in the next step.&lt;br /&gt;
	d) Copy the Maildir files to the new mailbox storage path using tar (tar does a better job at preserving &lt;br /&gt;
           things like symbolic links and permissions than cp or mv):&lt;br /&gt;
		Sample command:&lt;br /&gt;
			cd /tmp/nfs/2/nh/z/e/zella; tar cf - Maildir | (cd /nfs/3/nh/z/e/zella; tar xvpf -)&lt;br /&gt;
		Output:&lt;br /&gt;
			The files are listed as they are copied; the output is similar to what you &lt;br /&gt;
                        saw in step &amp;quot;c&amp;quot; above, but without the &amp;quot;nfs/2/nh/z/e/zella&amp;quot; prefix.&lt;br /&gt;
		Note:&lt;br /&gt;
			The command format is basically:&lt;br /&gt;
				cd /tmp/{restored-maildir-path-see-note-in-step-c}; tar cf - Maildir | (cd {new-storage-path-without-Maildir}; tar xvpf -)&lt;br /&gt;
	e) Clean up temporary directory:&lt;br /&gt;
		Sample command:&lt;br /&gt;
			cd /tmp; rm -rf nfs&lt;br /&gt;
&lt;br /&gt;
2) If restoring a zfs mailbox (from the last week) to a particular date&#039;s backup:&lt;br /&gt;
        a) login to the correct sndg-netapp-[1/2/3]-new&lt;br /&gt;
        b) get the files from the correct zfs snapshot&lt;br /&gt;
           Sample commands:&lt;br /&gt;
             cd /tank0/mail/.zfs/snapshot/20180215-0/nh/c/h/christineat&lt;br /&gt;
             ls -l&lt;br /&gt;
           cd to the desired directory and copy the files over to the user&#039;s mailbox&lt;br /&gt;
             cp -p /tank0/mail/.zfs/snapshot/20180215-0/nh/c/h/christineat/Maildir/cur/* /tank0/mail/nh/c/h/christineat/Maildir/cur/&lt;br /&gt;
&lt;br /&gt;
3) If restoring a mailbox to a particular date&#039;s backup:&lt;br /&gt;
	a) Look up the storage path to the customer&#039;s mailbox and determine which backup server the &lt;br /&gt;
           dump file is on (see &amp;quot;Looking up a mailbox&#039;s Storage Path&amp;quot; and &amp;quot;Storage path prefixes&amp;quot; above).&lt;br /&gt;
	b) Restore the level 0 and any incremental backups (in order) to /tmp on the backup server:&lt;br /&gt;
		Sample commands:&lt;br /&gt;
			cd /tmp&lt;br /&gt;
			restore4x -if /backup/hosts/2/e0.sndg-netapp-2.nethere.net/20090912-0/mail.dump&lt;br /&gt;
			chflags -R 0 /tmp&lt;br /&gt;
			restore4x -if /backup/hosts/2/e0.sndg-netapp-2.nethere.net/20090913-1/mail.dump&lt;br /&gt;
			chflags -R 0 /tmp&lt;br /&gt;
				...skipped repetitive stuff here...&lt;br /&gt;
			restore4x -if /backup/hosts/2/e0.sndg-netapp-2.nethere.net/20090918-6/mail.dump&lt;br /&gt;
			chflags -R 0 /tmp&lt;br /&gt;
		Note:&lt;br /&gt;
			After each restore, we need to recursively remove all flags from /tmp to eliminate &lt;br /&gt;
                       the immutable flag that gets set on the files (this happens only on NetApp dumps... who knows why).&lt;br /&gt;
	c) Create a new tar of the Maildir directory so that it can be copied to mailbox-1 for further processing:&lt;br /&gt;
		Sample command:&lt;br /&gt;
			cd /tmp/nh/z/e/zella; tar cf /tmp/archive.tar Maildir&lt;br /&gt;
	d) Clean up /tmp:&lt;br /&gt;
		Sample command:&lt;br /&gt;
			rm -rf /tmp/nh&lt;br /&gt;
		Note:&lt;br /&gt;
			Depending on the platform, it might be /tmp/nh, /tmp/si, or /tmp/zn.&lt;br /&gt;
	e) Copy the /tmp/archive.tar file to /tmp on mailbox-1.&lt;br /&gt;
	f) On mailbox-1, extract the Maildir archive on top of what they already have:&lt;br /&gt;
		Sample command:&lt;br /&gt;
			cd /nfs/2/nh/z/e/zella; tar xvpf /tmp/archive.tar&lt;br /&gt;
	g) Remove /tmp/archive.tar.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
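The tar-pipe copy used in steps 1d and 3f can be exercised on throwaway directories before touching a real customer Maildir. This is a local sketch: the temp directories are stand-ins for the real /tmp/nfs/... and /nfs/... storage paths.&lt;br /&gt;

```shell
# Exercise the "tar cf - | (cd ...; tar xpf -)" copy from steps 1d/3f using
# throwaway directories in place of the real storage paths.
SRC=$(mktemp -d)    # stands in for /tmp/nfs/2/nh/z/e/zella
DST=$(mktemp -d)    # stands in for /nfs/3/nh/z/e/zella

# Build a minimal Maildir skeleton with one message.
mkdir -p "$SRC/Maildir/cur" "$SRC/Maildir/new" "$SRC/Maildir/tmp"
echo "test message" > "$SRC/Maildir/cur/1252800000.sample"

# The copy itself: tar preserves permissions and symlinks better than cp/mv.
(cd "$SRC" && tar cf - Maildir) | (cd "$DST" && tar xpf -)

# Confirm the message arrived, then clean up.
COPIED=$(cat "$DST/Maildir/cur/1252800000.sample")
echo "$COPIED"
rm -rf "$SRC" "$DST"
```

Once the mechanics look right, substitute the real source and destination paths from the note in step 1d.&lt;br /&gt;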
&lt;br /&gt;
==== Beginning of old instructions ====&lt;br /&gt;
Use the same techniques as site restoration, with the following exceptions:&lt;br /&gt;
&lt;br /&gt;
1) Since everything is in maildir format, you have to reassemble things in /tmp/&amp;lt;username&amp;gt;, tar it up, and then restore it on any of the mailbox machines&lt;br /&gt;
   a) using the following syntax: &lt;br /&gt;
      # tar -zcpf /tmp/username.tgz yyyymmdd-#/platform yyyymmdd-#/platform yyyymmdd-#/platform&lt;br /&gt;
      i.e.&lt;br /&gt;
      # tar -zcpf /tmp/username.tgz 20050915-5/nh 20050914-4/nh 20050910-0/nh&lt;br /&gt;
&lt;br /&gt;
2) Regarding mail spools, you&#039;ll need to use the chflags command to adjust the flags on the files. For whatever reason, they are stored with the system immutable flag set&lt;br /&gt;
   a) Run chflags -R noschg &amp;lt;dir&amp;gt; on the restored directory before copying the files with sawfish; otherwise, the files cannot be deleted:&lt;br /&gt;
      i.e. &lt;br /&gt;
      # chflags -R noschg /backup1/unix/sndg-netapp-1-e2b.nethere.net/20050105-4/&lt;br /&gt;
   b) After tar/gzipping, remove the restored directories&lt;br /&gt;
      i.e&lt;br /&gt;
      # rm -r /backup1/unix/sndg-netapp-1-e2b.nethere.net/20050105-4/nh&lt;br /&gt;
&lt;br /&gt;
3) Can restore quickly with script on mailbox-1 (or mailbox-2) using the NH script &amp;quot;restoremail&amp;quot;&lt;br /&gt;
   a) # /nethere/sbin/restoremail -h for usage&lt;br /&gt;
      i.e. &lt;br /&gt;
      # restoremail -p nh -f nhusername.tgz -n 2 -u nhusername&lt;br /&gt;
*** End of old instructions ***&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Backup locations *&lt;br /&gt;
&lt;br /&gt;
backup-1:&lt;br /&gt;
backup-3.nethere.net&lt;br /&gt;
fpweb-1.nethere.net&lt;br /&gt;
home-1.nethere.net&lt;br /&gt;
koi.nethere.net&lt;br /&gt;
marmaduke.inetworld.net&lt;br /&gt;
ntdb-1.nethere.net&lt;br /&gt;
ntweb-1.nethere.net&lt;br /&gt;
ntweb-3.nethere.net&lt;br /&gt;
ntweb-5.nethere.net&lt;br /&gt;
ntweb-7.nethere.net&lt;br /&gt;
phoenix.nethere.net&lt;br /&gt;
unixweb-1.nethere.net&lt;br /&gt;
unixweb-3.nethere.net&lt;br /&gt;
unixweb-5.nethere.net&lt;br /&gt;
unixweb-7.nethere.net&lt;br /&gt;
shark.nethere.net&lt;br /&gt;
tetra.nethere.net&lt;br /&gt;
wms-1.nethere.net&lt;br /&gt;
&lt;br /&gt;
backup-2:&lt;br /&gt;
andromeda.nethere.net&lt;br /&gt;
backup-4.nethere.net&lt;br /&gt;
eel.nethere.net&lt;br /&gt;
fpweb-2.nethere.net&lt;br /&gt;
ntdb-2.nethere.net&lt;br /&gt;
ntweb-2.nethere.net&lt;br /&gt;
ntweb-4.nethere.net&lt;br /&gt;
ntweb-6.nethere.net&lt;br /&gt;
unixweb-2.nethere.net&lt;br /&gt;
unixweb-4.nethere.net&lt;br /&gt;
unixweb-6.nethere.net&lt;br /&gt;
ds.znet.com&lt;br /&gt;
mx1.znet.com&lt;br /&gt;
mx2.znet.com&lt;br /&gt;
mx3.znet.com&lt;br /&gt;
dmx.znet.com&lt;br /&gt;
la.znet.com&lt;br /&gt;
uf.znet.com&lt;br /&gt;
&lt;br /&gt;
backup-3:&lt;br /&gt;
backup-1.nethere.net&lt;br /&gt;
lists-1.nethere.net&lt;br /&gt;
mailbox-1.mail.nethere.net&lt;br /&gt;
mailbox-3.mail.nethere.net&lt;br /&gt;
mta-1.mail.nethere.net&lt;br /&gt;
mx-1.nethere.net&lt;br /&gt;
nscache-1.nethere.net&lt;br /&gt;
nsrbl-1.nethere.net&lt;br /&gt;
pegasus.nethere.net&lt;br /&gt;
scan-1.mail.nethere.net&lt;br /&gt;
sndg-netapp-2.nethere.net&lt;br /&gt;
relay-1.mail.nethere.net&lt;br /&gt;
webmail-1.mail.nethere.net&lt;br /&gt;
&lt;br /&gt;
backup-4:&lt;br /&gt;
ahi.nethere.net&lt;br /&gt;
backup-2.nethere.net&lt;br /&gt;
cp-1.nethere.net&lt;br /&gt;
cygnus.nethere.net&lt;br /&gt;
dragon.nethere.net&lt;br /&gt;
lisa.nethere.net&lt;br /&gt;
mailbox-2.mail.nethere.net&lt;br /&gt;
mta-2.mail.nethere.net&lt;br /&gt;
news-1.nethere.net&lt;br /&gt;
nscache-2.nethere.net&lt;br /&gt;
nsrbl-2.nethere.net&lt;br /&gt;
sawfish.nethere.net&lt;br /&gt;
scan-2.mail.nethere.net&lt;br /&gt;
scribe.nethere.net&lt;br /&gt;
sndg-netapp-1.nethere.net&lt;br /&gt;
relay-2.mail.nethere.net&lt;br /&gt;
urchin.nethere.net &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Mail servers ===&lt;br /&gt;
==== Organization ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
                                NetHere Mail Servers&lt;br /&gt;
                                ====================&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
         Customers              Inbound Mail            Outbound Mail&lt;br /&gt;
     /\          ||                 ||                    /\&lt;br /&gt;
     ||          ||                 || smtp               ||&lt;br /&gt;
     ||          ||                 \/                    ||&lt;br /&gt;
     ||          ||             ServerIron                ||&lt;br /&gt;
     ||          ||                 ||                    ||&lt;br /&gt;
     ||          ||                 ||                    ||&lt;br /&gt;
     ||          ||   smtp          \/                    ||&lt;br /&gt;
     ||          || =========&amp;gt;  mta-1    mta-2  ====&amp;gt;   relay-1  &amp;lt;====  unixweb-* servers&lt;br /&gt;
     ||          ||                 ||   /\             relay-2         ntweb servers&lt;br /&gt;
http ||     pop3 ||                 ||   ||&lt;br /&gt;
     ||     imap ||                 \/   +--------------+&lt;br /&gt;
     ||          ||             ServerIron              |&lt;br /&gt;
     ||          ||                 ||                  |&lt;br /&gt;
     ||          ||                 ||                  |&lt;br /&gt;
     ||          ||                 \/                  |&lt;br /&gt;
     ||          ||             nsrbl-1   nsrbl-2       |&lt;br /&gt;
     ||          ||         (spamhaus, dcc blacklist)   |&lt;br /&gt;
     ||          ||                 ||                  |&lt;br /&gt;
     ||          \/                 \/                  |&lt;br /&gt;
     \/         mailbox-1  &amp;lt;=== scan-1    scan-2        |&lt;br /&gt;
  webmail &amp;lt;===&amp;gt; mailbox-2    (sendmail/amavisd/sophos)  |&lt;br /&gt;
          imap  mailbox-3                               |&lt;br /&gt;
                 /\     /\                              |&lt;br /&gt;
                 ||     ||                              \/&lt;br /&gt;
             nfs ||     +---------------------------&amp;gt;  mta-db&lt;br /&gt;
                 ||                                    (LDAP)&lt;br /&gt;
                 \/&lt;br /&gt;
             sndg-netapp-1&lt;br /&gt;
             sndg-netapp-2&lt;br /&gt;
             sndg-netapp-3&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Overview ====&lt;br /&gt;
mta-1.mail.nethere.net- Primary servers for all inbound and outbound mail.&lt;br /&gt;
mta-2.mail.nethere.net&lt;br /&gt;
&lt;br /&gt;
nsrbl-1.nethere.net- Realtime blacklist check using dccd and spamhaus.  The mta servers query the ip address of the mail server against the blacklist before transferring the email to the scanning servers.&lt;br /&gt;
&lt;br /&gt;
nsrbl-2.nethere.net&lt;br /&gt;
 &lt;br /&gt;
scan-1.mail.nethere.net- Scans email with Sophos (currently disabled) and SpamAssassin before sending it to the mailbox servers.&lt;br /&gt;
scan-2.mail.nethere.net&lt;br /&gt;
&lt;br /&gt;
mailbox-1.mail.nethere.net- Front end server for the mail directories.  Queries LDAP server for miscellaneous customer information including the directory the email should be stored in, passwords, spam sensitivity levels, etc. before delivering email to appropriate directory.&lt;br /&gt;
mailbox-2.mail.nethere.net&lt;br /&gt;
mailbox-3.mail.nethere.net&lt;br /&gt;
&lt;br /&gt;
Mail is stored in maildir format:&lt;br /&gt;
&lt;br /&gt;
/nfs/&amp;lt;nfs_number&amp;gt;/&amp;lt;platform&amp;gt;/u/s/username&lt;br /&gt;
&lt;br /&gt;
sndg-netapp-1-new.nethere.net- Solaris system using zfs, set up as an nfs mount on mailbox-1, -2, and -3.  Email is stored on these platforms for all customers.&lt;br /&gt;
sndg-netapp-2-new.nethere.net&lt;br /&gt;
sndg-netapp-3-new.nethere.net&lt;br /&gt;
&lt;br /&gt;
mx-1.nethere.net- spooling mail server&lt;br /&gt;
&lt;br /&gt;
mailx-1.nethere.net- All outbound mail originating from a web server is sent out through this email server.  All web servers are set to use mailx.nethere.net, which resolves (through the serveriron) to either mailx-1.nethere.net or mailx-2.nethere.net.  Since we shut down mailx-2, all outbound mail from the web servers goes through mailx-1.  This was done to prevent all customer email from being blacklisted as a result of a single web site being compromised.&lt;br /&gt;
&lt;br /&gt;
mta-db.mail.nethere.net- database server for the mta servers.  All postfix databases are stored here.&lt;br /&gt;
&lt;br /&gt;
webmail.nethere.net- Hosts the webmail program for webmail.nethere.net, webmail.simplyweb.net and webmail.znet.net.&lt;br /&gt;
&lt;br /&gt;
==== Client mail settings ====&lt;br /&gt;
&lt;br /&gt;
Inbound Mail Server:&lt;br /&gt;
&lt;br /&gt;
POP3&lt;br /&gt;
pop3.nethere.net&lt;br /&gt;
pop3.znet.net&lt;br /&gt;
pop3.simplyweb.net&lt;br /&gt;
&lt;br /&gt;
IMAP&lt;br /&gt;
imap.nethere.net&lt;br /&gt;
imap.znet.net&lt;br /&gt;
imap.simplyweb.net&lt;br /&gt;
&lt;br /&gt;
Outbound Mail server:&lt;br /&gt;
&lt;br /&gt;
mail.nethere.net&lt;br /&gt;
mail.znet.net&lt;br /&gt;
mail.simplyweb.net&lt;br /&gt;
&lt;br /&gt;
Webmail:&lt;br /&gt;
&lt;br /&gt;
webmail.nethere.net&lt;br /&gt;
webmail.znet.net&lt;br /&gt;
webmail.simplyweb.net&lt;br /&gt;
&lt;br /&gt;
Control Panel:&lt;br /&gt;
&lt;br /&gt;
cp.nethere.net&lt;br /&gt;
cp.znet.net&lt;br /&gt;
cp.simplyweb.net&lt;br /&gt;
&lt;br /&gt;
==== nsrbl Disk Quota Warning ====&lt;br /&gt;
&lt;br /&gt;
In the event you get a nagios alert regarding space on the two nsrbl servers, &lt;br /&gt;
it is most likely the result of the dccd databases getting too large.  &lt;br /&gt;
To clear out space, log into each nsrbl server and execute the following:&lt;br /&gt;
&lt;br /&gt;
df -h&lt;br /&gt;
&lt;br /&gt;
cd /var/dcc/libexec&lt;br /&gt;
&lt;br /&gt;
./stop-dccd&lt;br /&gt;
&lt;br /&gt;
cd /var/dcc&lt;br /&gt;
&lt;br /&gt;
rm dcc_db*&lt;br /&gt;
&lt;br /&gt;
sync ; sync&lt;br /&gt;
&lt;br /&gt;
cd /var/dcc/libexec&lt;br /&gt;
&lt;br /&gt;
./start-dccd&lt;br /&gt;
&lt;br /&gt;
ps auxw | grep dccd&lt;br /&gt;
&lt;br /&gt;
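The sequence above can be collected into a small script. The paths are taken from the steps above; the DRY_RUN guard is a convenience added in this sketch (not part of the standard procedure) so you can preview the commands before touching a live nsrbl server.&lt;br /&gt;

```shell
# Clear the oversized dccd databases on an nsrbl server, per the steps above.
# Safety default for this sketch: only print the commands until invoked
# with DRY_RUN= (empty).
: "${DRY_RUN:=1}"

run() {
    if [ -n "$DRY_RUN" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run df -h
run /var/dcc/libexec/stop-dccd
run sh -c 'cd /var/dcc && rm -f dcc_db*'
run sync
run sync
run /var/dcc/libexec/start-dccd
# Afterwards, verify the daemon is back: ps auxw | grep dccd
```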
====Update outbound quota for a single ip address====&lt;br /&gt;
&lt;br /&gt;
We restrict the total number of emails that a customer can send to 2000 email recipients per 24-hour period.  This is tracked by the ip address of the computer that connects to the mta server when it sends an outbound email.  The ip address and count are stored in the postfix database server, mta-db.mail.nethere.net.  To increase the quota for a single ip address, you can log into either mta-1 or mta-2, or console directly into the database server.  To use one of the mta servers:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1.  Log onto mta-1.mail&lt;br /&gt;
2.  Switch to mysql&lt;br /&gt;
$mysql -h 10.0.0.69 -u policyd_outbound -p&lt;br /&gt;
username:  policyd_outbound&lt;br /&gt;
password:  p0stf1x!&lt;br /&gt;
3.  Review database information&lt;br /&gt;
mysql&amp;gt;show databases;&lt;br /&gt;
4.  Switch to the policyd table. &lt;br /&gt;
mysql&amp;gt;use policyd;&lt;br /&gt;
5.  Review table header information&lt;br /&gt;
mysql&amp;gt;show tables;&lt;br /&gt;
mysql&amp;gt;describe throttle;&lt;br /&gt;
6.  Verify the ip address is above quota&lt;br /&gt;
mysql&amp;gt;select * from throttle where _from=&#039;xxx.xxx.xxx.xxx&#039;;   #this is the ip address of the account you are increasing.&lt;br /&gt;
7.  Increase the maximum number of recipients the ip address can send to&lt;br /&gt;
mysql&amp;gt;update throttle SET _rcpt_max = _rcpt_max + 20000 where _from=&#039;24.249.205.66&#039;;&lt;br /&gt;
8.  Verify&lt;br /&gt;
mysql&amp;gt;select * from throttle where _from=&#039;xxx.xxx.xxx.xxx&#039;;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
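The UPDATE in step 7 is easy to mistype. A tiny helper (hypothetical, not an existing NetHere script) can print the exact verify/update/verify statements to paste into the mysql prompt for a given ip address and increment:&lt;br /&gt;

```shell
# Print the SQL for steps 6-8 above: verify, raise _rcpt_max, re-verify.
# Arguments: IP address, optional increment (default 20000, per step 7).
quota_sql() {
    ip=$1; bump=${2:-20000}
    echo "select * from throttle where _from='$ip';"
    echo "update throttle SET _rcpt_max = _rcpt_max + $bump where _from='$ip';"
    echo "select * from throttle where _from='$ip';"
}

# Example from step 7:
quota_sql 24.249.205.66 20000
```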
&lt;br /&gt;
====Remove a blocked mail server ====&lt;br /&gt;
&lt;br /&gt;
The message in maillog helps identify where it is blocked.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Relaying denied due to excessive spam                             admin-2:/dist/files/nhmta/nethere/conf/postfix/common/client_reject&lt;br /&gt;
Sender address rejected: Relaying denied due to Spam              admin-2:/dist/files/nhmta/nethere/conf/postfix/common/sender_reject&lt;br /&gt;
Relaying denied due to SPAM                                       policyd???&lt;br /&gt;
Client host rejected: ... listed at zen.spamhaus.org=127.0.0.2    nsrbl-1:/named/rbldns/zones/*.spamhaus.org&lt;br /&gt;
status=sent (250 2.7.1 Ok, discarded, id=95886-33 - SPAM)         user mail protection in provisioning????&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Remove a mail server from policyd blacklist====&lt;br /&gt;
&lt;br /&gt;
The ip address of a mail server can be placed on a blacklist for several reasons: too high a rate of inbound email from a single ip address, an incorrectly formatted helo, etc.  In the event you need to remove a mail server&#039;s ip from the policyd blacklist, you can log into either mta server or connect directly to the policyd database server, mta-db.mail.nethere.net.  To remove the ip address using an mta server:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1.  Log onto mta-1.mail&lt;br /&gt;
2.  Switch to mysql&lt;br /&gt;
$mysql -h 10.0.0.69 -u policyd_inbound -p policyd&lt;br /&gt;
username:  policyd_inbound&lt;br /&gt;
password:  p0stf1x!&lt;br /&gt;
3.  Display database information&lt;br /&gt;
mysql&amp;gt;show databases;&lt;br /&gt;
4.  Use policyd database&lt;br /&gt;
mysql&amp;gt;use policyd;&lt;br /&gt;
5.  Display table information&lt;br /&gt;
mysql&amp;gt;show tables;&lt;br /&gt;
mysql&amp;gt;describe blacklist;&lt;br /&gt;
6.  Verify ip address has been blacklisted&lt;br /&gt;
mysql&amp;gt;select * from blacklist where _blacklist=&#039;xxx.xxx.xxx.xxx&#039;; &lt;br /&gt;
7.  Delete the entry&lt;br /&gt;
mysql&amp;gt;delete from blacklist where _blacklist=&#039;xxx.xxx.xxx.xxx&#039;;&lt;br /&gt;
8.  Verify&lt;br /&gt;
mysql&amp;gt; select * from blacklist where _blacklist=&#039;xxx.xxx.xxx.xxx&#039;;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The suspected mail server may also be in the helo table.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mysql&amp;gt; describe helo ;&lt;br /&gt;
1.  Check the IP address&lt;br /&gt;
mysql&amp;gt; select * from helo where _host=&#039;xxx.xxx.xxx.xxx&#039; ;&lt;br /&gt;
2.  Check the server helo name &lt;br /&gt;
mysql&amp;gt; select * from helo where _helo like &#039;%server%&#039; ;&lt;br /&gt;
3.  Delete the entry&lt;br /&gt;
mysql&amp;gt; delete from helo where _host=&#039;xxx.xxx.xxx.xxx&#039; ;&lt;br /&gt;
mysql&amp;gt; delete from helo where _helo like &#039;%server%&#039; ;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
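Since clearing a blocked server usually means checking both the blacklist and helo tables, a sketch helper (hypothetical, not an existing tool) can print the full lookup/delete sequence for an ip address:&lt;br /&gt;

```shell
# Print the SQL to find and remove an IP from both the blacklist and helo
# tables, following the two procedures above.
unblock_sql() {
    ip=$1
    echo "select * from blacklist where _blacklist='$ip';"
    echo "delete from blacklist where _blacklist='$ip';"
    echo "select * from helo where _host='$ip';"
    echo "delete from helo where _host='$ip';"
}

unblock_sql 203.0.113.25   # example IP (documentation range)
```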
&lt;br /&gt;
====Remove a mail server from spamhaus blacklist====&lt;br /&gt;
&lt;br /&gt;
ssh to nsrbl-1.mail  and nsrbl-2.mail&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
cd /named/rbldns/zones &lt;br /&gt;
&lt;br /&gt;
find which file the IP Address is in.&lt;br /&gt;
&lt;br /&gt;
grep &amp;lt;IP Addr&amp;gt; *.spamhaus.org&lt;br /&gt;
co -l &amp;lt;xxx.spamhaus.org&amp;gt;&lt;br /&gt;
vi xxx.spamhaus.org&lt;br /&gt;
ci -u &amp;lt;xxx.spamhaus.org&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Add ip address to client_reject====&lt;br /&gt;
&lt;br /&gt;
The client_reject list is one of the ways we combat spam.  If we receive spam complaints about a mail server or otherwise identify a mail server as a source of spam, we can add it to the client_reject list.  An email sent from a mail server on the client_reject list is rejected with a notice saying &amp;quot;Relaying denied due to excessive spam&amp;quot;. To add an ip address to the list, we use admin-2.  Once the list is updated on admin-2, it is pushed out to mta-1 and mta-2.  To add an ip address on admin-2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Log into admin-2.nethere.net.&lt;br /&gt;
bash-3.00# cd /dist/files/nhmta/nethere/conf/postfix/common&lt;br /&gt;
bash-3.00# co -l client_reject&lt;br /&gt;
RCS/client_reject,v  --&amp;gt;  client_reject&lt;br /&gt;
bash-3.00# vi client_reject&lt;br /&gt;
&lt;br /&gt;
-Add the ip address to the bottom of the list in the appropriate format.  If you want to reject any email from the mail server, you would use:&lt;br /&gt;
xxx.xxx.xxx      REJECT Relaying denied due to excessive spam&lt;br /&gt;
where xxx.xxx.xxx is the /24 subnet of the mail server the spam originated from.  Do not place a period after the last octet.  For example, to add a /16 subnet you would use:&lt;br /&gt;
xxx.xxx          REJECT Relaying denied due to excessive spam&lt;br /&gt;
The REJECT key word tells postfix to reject the email.  If you want to allow email from a /24 subnet, use the following format:&lt;br /&gt;
xxx.xxx.xxx      OK&lt;br /&gt;
This tells postfix to accept email from that subnet.&lt;br /&gt;
&lt;br /&gt;
bash-3.00# ci -u client_reject&lt;br /&gt;
RCS/client_reject,v  &amp;lt;--  client_reject&lt;br /&gt;
new revision: 1.286; previous revision: 1.285&lt;br /&gt;
enter log message, terminated with single &#039;.&#039; or end of file:&lt;br /&gt;
&amp;gt;&amp;gt; .&lt;br /&gt;
done&lt;br /&gt;
bash-3.00# make&lt;br /&gt;
/nethere/software/nhmta/postfix-2.4.5/sbin/postmap -C . client_reject&lt;br /&gt;
mv client_reject.db maps/hash/client_reject.db&lt;br /&gt;
bash-3.00# cd /dist/rdist&lt;br /&gt;
bash-3.00# gmake nhmta-update&lt;br /&gt;
updating host mta-1.mail.nethere.net&lt;br /&gt;
updating: /dist/files/nhmta//nethere/conf/postfix/common/maps/hash/client_reject.db&lt;br /&gt;
updating: /dist/files/nhmta//nethere/conf/postfix/common/client_reject&lt;br /&gt;
updating host mta-2.mail.nethere.net&lt;br /&gt;
updating: /dist/files/nhmta//nethere/conf/postfix/common/maps/hash/client_reject.db&lt;br /&gt;
updating: /dist/files/nhmta//nethere/conf/postfix/common/client_reject&lt;br /&gt;
bash-3.00#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
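Because a trailing period after the last octet is the mistake the note above warns against, a small formatting helper (a sketch, not part of the admin-2 workflow) can emit the entry and refuse malformed prefixes:&lt;br /&gt;

```shell
# Emit a client_reject REJECT line for a subnet prefix (e.g. "192.0.2" for
# a /24, "198.51" for a /16), refusing prefixes with a trailing period.
reject_entry() {
    prefix=$1
    case "$prefix" in
        *.) echo "error: do not place a period after the last octet" >&2
            return 1 ;;
    esac
    printf '%-16s REJECT Relaying denied due to excessive spam\n' "$prefix"
}

reject_entry 192.0.2     # /24 entry (documentation range)
reject_entry 198.51      # /16 entry
```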
&lt;br /&gt;
====Free mysql disk space on mta-db.mail.nethere.net====&lt;br /&gt;
&lt;br /&gt;
In response to nagios alert &amp;quot;[Nagios] PROBLEM alert - mta-db.mail.nethere.net/Disk mysql is WARNING&amp;quot;, to clear space within the mysql database:&lt;br /&gt;
&lt;br /&gt;
1.  ssh to mta-db.mail.nethere.net.&lt;br /&gt;
&lt;br /&gt;
2.  Log into mysql and execute the following:&lt;br /&gt;
&lt;br /&gt;
[root@mta-db user]# mysql -uroot -p&lt;br /&gt;
&lt;br /&gt;
Enter password:  dB@dm1N!&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; show databases;&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; use policyd;&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; show tables;&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; describe helo;&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; describe throttle;&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; describe throttle_from_instance;&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; select * from throttle_from_instance limit 10;&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; select now();&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; select unix_timestamp(now());&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; select count(*) from throttle_from_instance where _expire &amp;gt; 1426534697;&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; select max(_expire) from throttle_from_instance;&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; truncate table throttle_from_instance;&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; select count(*) from throttle_from_instance;&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; select * from throttle_from_instance;&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; show tables;&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; optimize table throttle_from_instance;&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; show databases;&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; use policyd;&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; quit&lt;br /&gt;
&lt;br /&gt;
[root@mta-db user]# df -h&lt;br /&gt;
&lt;br /&gt;
== Errors ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Common webmail errors ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Error:&lt;br /&gt;
--cut--&lt;br /&gt;
Fatal error: Call to undefined function: applicatio€”¬p() in&lt;br /&gt;
/www/webmail.nethere.net/htdocs/x/m/templates/message/navbar.inc on line 7&lt;br /&gt;
--cut--&lt;br /&gt;
&lt;br /&gt;
Solution: Restart apache on webmail-1 (apachectl restart)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Error:&lt;br /&gt;
--cut--&lt;br /&gt;
ERROR There was an error sending your message: unable to add recipient&lt;br /&gt;
[webhosting@nethere.com]: Invalid response code received from server&lt;br /&gt;
--cut--&lt;br /&gt;
&lt;br /&gt;
Solution: More than likely DNS/domain issue, but check mta-1.nethere.net to be sure:&lt;br /&gt;
--cut--&lt;br /&gt;
Aug 30 15:08:54 mta-1 nh/smtpd[50501]: NOQUEUE: reject: RCPT from webmail-1.mail.nethere.net[66.63.128.181]: 450 &amp;lt;orders@mulligrins.com&amp;gt;: Sender address rejected: Domain not found; from=&amp;lt;orders@mulligrins.com&amp;gt; to=&amp;lt;webhosting@nethere.com&amp;gt; proto=ESMTP helo=&amp;lt;webmail.nethere.net&amp;gt;&lt;br /&gt;
--cut--&lt;br /&gt;
Problem: Webmail shows &amp;quot;1-5&amp;quot; messages but the inbox appears empty. POP reports that messages #1,2,3,4,5 could not be retrieved (server response error: cannot open the message file; it&#039;s gone).&lt;br /&gt;
&lt;br /&gt;
Solution: Ownership issue&lt;br /&gt;
&lt;br /&gt;
1) Use the provisioning tool to determine mailbox location:&lt;br /&gt;
  a) Click on email account, Engineering&lt;br /&gt;
  b) Location is the &amp;quot;Storage Path:&amp;quot; &lt;br /&gt;
&lt;br /&gt;
2) Log in to either mail server then change ownership of maildir to mailuser:mailuser&lt;br /&gt;
  # chown -R mailuser:mailuser /nfs/#/platform/u/s/username&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== DEPRECATED INFORMATION ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Webservers ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Allowing SSI (Server Side Includes) ====&lt;br /&gt;
&lt;br /&gt;
***** NOTE: This has been DEPRECATED, all servers now allow SSI by default *****&lt;br /&gt;
&lt;br /&gt;
We do not allow EXEC permissions for SSI.&lt;br /&gt;
&lt;br /&gt;
1) Check out Apache config for www.domain.tld in /nethere/conf/apache/conf/&amp;lt;vhosts,nvhosts&amp;gt;&lt;br /&gt;
   # co -l www.domain.tld(.common)  &lt;br /&gt;
&lt;br /&gt;
2) Add the following lines under the &amp;lt;Directory&amp;gt; directive&lt;br /&gt;
--cut--&lt;br /&gt;
AddType text/html .shtml&lt;br /&gt;
AddHandler server-parsed .shtml&lt;br /&gt;
--cut--&lt;br /&gt;
&lt;br /&gt;
3) Edit the &amp;quot;Options&amp;quot; line to be the following:&lt;br /&gt;
--cut--&lt;br /&gt;
Options Indexes FollowSymLinks IncludesNOEXEC&lt;br /&gt;
--cut--&lt;br /&gt;
&lt;br /&gt;
4) Add a DirectoryIndex line above the &amp;lt;Directory&amp;gt; directive with the following:&lt;br /&gt;
--cut--&lt;br /&gt;
DirectoryIndex index.shtml index.html index.htm home.html home.shtml index.php&lt;br /&gt;
--cut--&lt;br /&gt;
&lt;br /&gt;
** In summary, the config should look something similar to this **&lt;br /&gt;
--cut--&lt;br /&gt;
DocumentRoot &amp;quot;/www/www.domain.tld/htdocs&amp;quot;&lt;br /&gt;
  DirectoryIndex index.shtml index.html index.htm home.html home.shtml &lt;br /&gt;
  &amp;lt;Directory &amp;quot;/www/www.domain.tld/htdocs&amp;quot;&amp;gt;&lt;br /&gt;
    AddType text/html .shtml&lt;br /&gt;
    AddHandler server-parsed .shtml&lt;br /&gt;
    Options Indexes FollowSymLinks IncludesNOEXEC&lt;br /&gt;
    Order allow,deny&lt;br /&gt;
    Allow from all&lt;br /&gt;
  &amp;lt;/Directory&amp;gt;&lt;br /&gt;
--cut--&lt;br /&gt;
&lt;br /&gt;
5) Check in Apache config&lt;br /&gt;
   # ci -u www.domain.tld(.common) &amp;lt; /dev/null  &lt;br /&gt;
&lt;br /&gt;
6) Restart Apache&lt;br /&gt;
   # apachectl configtest&lt;br /&gt;
   # apachectl restart&lt;br /&gt;
&lt;br /&gt;
More SSI notes can be found here:&lt;br /&gt;
http://httpd.apache.org/docs/1.3/howto/ssi.html&lt;br /&gt;
&lt;br /&gt;
= SimplyInternet =&lt;br /&gt;
&lt;br /&gt;
= Z-net =&lt;br /&gt;
&lt;br /&gt;
= Migrating to LAMP Host Servers =&lt;br /&gt;
&lt;br /&gt;
Below are outlines of the steps to migrate website &amp;amp; email hosting from nethere to lamphost-configured servers. This is almost certainly incomplete at this stage. The exact steps to follow can vary depending on specifics of the client, so use this as a guide only. Be ready to adjust commands or add steps as needed.&lt;br /&gt;
&lt;br /&gt;
== Edit Nethere DNS ==&lt;br /&gt;
&lt;br /&gt;
ssh user@admin-1.nethere.net&lt;br /&gt;
&lt;br /&gt;
sudo -i&lt;br /&gt;
&lt;br /&gt;
ssh ns1.nethere.net&lt;br /&gt;
&lt;br /&gt;
cd /named/master&lt;br /&gt;
&lt;br /&gt;
co -l example.com&lt;br /&gt;
&lt;br /&gt;
vi example.com&lt;br /&gt;
&lt;br /&gt;
ci -u example.com&lt;br /&gt;
&lt;br /&gt;
rndc reload example.com&lt;br /&gt;
&lt;br /&gt;
Make sure to increment serial number by 1 each time a change is made.&lt;br /&gt;
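The serial bump can also be scripted. A hedged sketch, assuming the common zone layout where the serial sits on its own line tagged "; serial" (the file here is a throwaway temp copy, not a live zone):

```shell
# Bump the first number on the line tagged "; serial" in a zone file.
# Note awk rebuilds the matched line, dropping its leading whitespace.
zone=$(mktemp)
cat > "$zone" <<'EOF'
@ IN SOA ns1.nethere.net. hostmaster.nethere.net. (
        2020120901 ; serial
        3600       ; refresh
        900        ; retry
        604800     ; expire
        300 )      ; minimum
EOF
awk '/; serial/ { $1 = $1 + 1 } { print }' "$zone" > "$zone.new" && mv "$zone.new" "$zone"
grep serial "$zone"   # 2020120902 ; serial
```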
&lt;br /&gt;
== Migrate Website to nh3.jcihosting.net ==&lt;br /&gt;
&lt;br /&gt;
Set TTL to 300 on ns1.nethere.net&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Open up these web admins to get current account/site info:&lt;br /&gt;
&lt;br /&gt;
http://toolbox.nethere.net/ -&amp;gt; Provisioning Tool (search for domain name)&lt;br /&gt;
&lt;br /&gt;
https://secure.johncompanies.com/mgmt/index.html (search for col#####)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1.  Create the lamphost user.&lt;br /&gt;
&lt;br /&gt;
https://www.lamphost.com/admin/user/user/create&lt;br /&gt;
&lt;br /&gt;
username: col#####&lt;br /&gt;
&lt;br /&gt;
email: same as listed for col##### account&lt;br /&gt;
&lt;br /&gt;
password: random, secure, won&#039;t be shared with client&lt;br /&gt;
&lt;br /&gt;
defaults on rest of page&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2.  Create the customer account&lt;br /&gt;
&lt;br /&gt;
https://nh1.jcihosting.net:8443/&lt;br /&gt;
&lt;br /&gt;
https://nh2.jcihosting.net:8443/&lt;br /&gt;
&lt;br /&gt;
https://nh3.jcihosting.net:8443/&lt;br /&gt;
&lt;br /&gt;
https://nh4.jcihosting.net:8443/&lt;br /&gt;
&lt;br /&gt;
LAMP Host Users -&amp;gt; Add User -&amp;gt; col#####&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Virtual Hosts -&amp;gt; Add Virtual Host&lt;br /&gt;
&lt;br /&gt;
Username: Select same username as above from pulldown&lt;br /&gt;
&lt;br /&gt;
Domain: example.com&lt;br /&gt;
&lt;br /&gt;
Hosting Plan: select plan that matches what&#039;s listed in https://secure.johncompanies.com/mgmt/view.html?cid=col#####&lt;br /&gt;
&lt;br /&gt;
Add default DNS: No&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3.  Virtual Hosts -&amp;gt; Site Manager (next to domain added)&lt;br /&gt;
&lt;br /&gt;
Site Manager Users -&amp;gt; Add User&lt;br /&gt;
&lt;br /&gt;
Username: can be anything - I typically use Optigold Login from nethere admin&lt;br /&gt;
&lt;br /&gt;
Password: make up something secure and record - this one will go to the user&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4.  Email &amp;amp; FTP -&amp;gt; Add Account&lt;br /&gt;
&lt;br /&gt;
Email Account: ftp&lt;br /&gt;
&lt;br /&gt;
Password: make up something secure and record - this one will go to the user&lt;br /&gt;
&lt;br /&gt;
Enable FTP Access: Yes&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5.  Optional: Add MySQL database if site needs it (WordPress etc)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
ssh to nh3.jcihosting.net and run &#039;add_mysql.php &amp;lt;example.com&amp;gt;&#039;&lt;br /&gt;
&lt;br /&gt;
MySQL database info will be emailed to webmaster@lamphost.com (and output on command line - ignore instructions to run additional commands)&lt;br /&gt;
&lt;br /&gt;
https://mail.lamphost.com&lt;br /&gt;
Sm77DdnQ&lt;br /&gt;
&lt;br /&gt;
6.  Copy over website&lt;br /&gt;
&lt;br /&gt;
ssh nh3&lt;br /&gt;
&lt;br /&gt;
cd /var/www/example.com/&lt;br /&gt;
&lt;br /&gt;
rsync -v --archive --one-file-system --delete --delete-during --rsh=/usr/bin/ssh dsmith@unixweb-10.nethere.net:/www/www.example.com/htdocs ./&lt;br /&gt;
&lt;br /&gt;
Change &#039;dsmith&#039; in the rsync command above to your username. Note that I had to add my username to groups httpd &amp;amp; webuser to allow me to copy website files without being root.&lt;br /&gt;
&lt;br /&gt;
chown -R col#####:col##### htdocs&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7.  If site has MySQL data find the connection info. For example, WP sites will have the info in htdocs/wp-config.php&lt;br /&gt;
&lt;br /&gt;
mysqldump -h mysqldb-1.webhost.nethere.net -u username --password=password databasename &amp;gt; database.sql&lt;br /&gt;
&lt;br /&gt;
mysql exampledotcom &amp;lt; database.sql&lt;br /&gt;
&lt;br /&gt;
rm database.sql&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8.  If site is ready to be made live on nh3.lamphost.net, then update DNS on ns1.nethere.net&lt;br /&gt;
&lt;br /&gt;
Send email to client with details. Search support@jcihosting.com Sent folder for emails with Subject &amp;quot;updated hosting for&amp;quot; for example emails. You will have to adjust as needed (username/passwords, different info based on specific client).&lt;br /&gt;
&lt;br /&gt;
9.  After you finish, update the johncompanies.com backend for the col0# CID and change the nethere hosting server to nh3.jcihosting.net.&lt;br /&gt;
&lt;br /&gt;
10. To add an SSL certificate&lt;br /&gt;
        letsencrypt-create.php &amp;lt;domain.com&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    To remove an SSL certificate&lt;br /&gt;
        del-letsencrypt-certificate.php &amp;lt;domain.com&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Migrate Email to nh3.jcihosting.net ==&lt;br /&gt;
&lt;br /&gt;
If migrating website &amp;amp; emails both follow the above steps to transfer the website first.&lt;br /&gt;
&lt;br /&gt;
If migrating emails only, follow steps 1-3 of the above to set up the virtualhost on nh3.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1.  Set TTL for MX record to 300.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2.  Generate email report:&lt;br /&gt;
&lt;br /&gt;
Log in to http://toolbox.nethere.net/&lt;br /&gt;
&lt;br /&gt;
Select Provisioning Tool and search for the domain&lt;br /&gt;
&lt;br /&gt;
Navigate to the Email section and then Generate Report.&lt;br /&gt;
&lt;br /&gt;
Select these fields for the report:&lt;br /&gt;
&lt;br /&gt;
Mail address&lt;br /&gt;
&lt;br /&gt;
Alias&lt;br /&gt;
&lt;br /&gt;
Forwarding address&lt;br /&gt;
&lt;br /&gt;
Mail storage path&lt;br /&gt;
&lt;br /&gt;
Copy the result of the report and save it as a text file in /tmp/ on nh3.jcihosting.net named example.com.txt (replace domain name)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3.  Create the e-mail accounts&lt;br /&gt;
&lt;br /&gt;
Make sure the virtualhost has enough email accounts allocated before proceeding. &lt;br /&gt;
Edit the virtualhosts.virtualhost_settings table directly if need be to increase allocation.&lt;br /&gt;
&lt;br /&gt;
    Log in here: https://nh3.jcihosting.net:8443/&lt;br /&gt;
    Select &amp;quot;MySQL.LH&amp;quot;&lt;br /&gt;
    Go to the &amp;quot;virtualhosts&amp;quot; database&lt;br /&gt;
    Search the &amp;quot;virtualhost_settings&amp;quot; table for &#039;example.com&#039; &lt;br /&gt;
       and adjust the &#039;email_accounts&#039; setting.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run this script once and only once:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/usr/local/bin/migrate_mail_accounts_from_nethere.php example.com&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That will create all email accounts and aliases.&lt;br /&gt;
&lt;br /&gt;
A file containing the list of email accounts and their new passwords will &lt;br /&gt;
be created as /tmp/example.com-passwords.txt. Grab a copy of that for &lt;br /&gt;
sending to the client.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. Sync the emails&lt;br /&gt;
&lt;br /&gt;
Another file will be created that contains the bash commands to sync emails from the old server. This will be named /tmp/example.com-sync.sh&lt;br /&gt;
&lt;br /&gt;
Copy the file somewhere like the /root/ home dir and add &amp;quot;#!/bin/bash&amp;quot; as the first line and change perms to 750. Now you can run this script to sync all mails from nethere to nh3. Run it immediately to get an initial sync of emails.&lt;br /&gt;
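The wrapper step above can be scripted. A minimal sketch following the /tmp/example.com-sync.sh naming convention, with the generated file simulated here by two echo commands:

```shell
# Prepend the shebang and make the generated sync commands executable.
cd "$(mktemp -d)"
printf 'echo syncing mailbox 1\necho syncing mailbox 2\n' > example.com-sync.sh
{ echo '#!/bin/bash'; cat example.com-sync.sh; } > sync.sh
chmod 750 sync.sh
./sync.sh   # runs the generated sync commands
```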
&lt;br /&gt;
When ready to switch email hosting over, run the sync script one more time, then update DNS on ns1.nethere.net.&lt;br /&gt;
&lt;br /&gt;
If you&#039;d like to sync again after the move (if there was a delay or you think there may have been incoming emails during the transition) remove the &#039;--del&#039; option from the sync script and then re-run.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5.  Send email to customer&lt;br /&gt;
&lt;br /&gt;
Search support@jcihosting.com Sent folder for messages with Subject &amp;quot;new email hosting for&amp;quot; for examples of emails to use for communicating with the client.&lt;br /&gt;
&lt;br /&gt;
6.  Update DNS and put NetHere mail on hold&lt;br /&gt;
Once DNS has been pointed to the new server, go into the NetHere Toolbox http://toolbox.nethere.net/ and go to Domain -&amp;gt; General and then set Status to &#039;On Hold&#039;.&lt;br /&gt;
&lt;br /&gt;
7.  Update host in JohnCompanies database.&lt;br /&gt;
&lt;br /&gt;
== Moving between Host Servers on Lamphost ==&lt;br /&gt;
&lt;br /&gt;
You should delete the old virtual host before creating the new&lt;br /&gt;
virtual host.  It may take up to an hour to delete the old virtual&lt;br /&gt;
host.  To speed up the delete:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /opt/lamphost/cron-bin/del_virtualhosts.php&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you create the new virtual host before deleting the old one, you will need to update&lt;br /&gt;
the mylampsite.com subdomain:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /opt/lamphost/bin/update_mylampsite_subdomain.php databaseproviders.com&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Upgrade a Lamphost server to increase database limit ==&lt;br /&gt;
&lt;br /&gt;
I&#039;ve increased the database limit for this customer to 2. Now they can add a second db via the Site Manager admin.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Here is how I made the adjustment (we don&#039;t have a friendly admin for this function):&lt;br /&gt;
Log in to Server Manager: https://nh3.jcihosting.net:8443/&lt;br /&gt;
Select the &amp;quot;MySQL.LH&amp;quot; phpMyAdmin&lt;br /&gt;
Select &#039;virtualhosts&#039; db.&lt;br /&gt;
Adjust the &#039;databases&#039; setting for the argee.com domain in the &#039;virtualhost_settings&#039; table.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Looking in the mail log files ==&lt;br /&gt;
&lt;br /&gt;
Use exigrep to search the Exim mail logs.&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2238</id>
		<title>VPS Management</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2238"/>
		<updated>2020-07-02T19:52:11Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Ganeti Install a FreeBSD VPS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Common Problems =&lt;br /&gt;
== Login to any machine without a password ==&lt;br /&gt;
&lt;br /&gt;
This is possible via the use of ssh keys. The process is thus:&lt;br /&gt;
&lt;br /&gt;
1. place the public key for your user (root@mail) in the /root/.ssh/authorized_keys file on the server you wish to login to&lt;br /&gt;
 cat /root/.ssh/id_dsa.pub&lt;br /&gt;
(paste that into authorized_keys on the target server). If the file doesn&#039;t exist, create it.&lt;br /&gt;
&lt;br /&gt;
2. enable root login (usually only applies to FreeBSD). Edit the /etc/ssh/sshd_config on the target server and change:&lt;br /&gt;
&amp;lt;tt&amp;gt;#PermitRootLogin no&amp;lt;/tt&amp;gt;&lt;br /&gt;
to&lt;br /&gt;
&amp;lt;tt&amp;gt;PermitRootLogin yes&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Restart the sshd on the target machine. First, find the sshd process: &lt;br /&gt;
 jailps &amp;lt;hostname&amp;gt; | grep sshd &lt;br /&gt;
or &lt;br /&gt;
 vp &amp;lt;VEID&amp;gt; | grep sshd&lt;br /&gt;
&lt;br /&gt;
Look for the process resembling:&lt;br /&gt;
 root     17296  0.0  0.0  5280 1036 ?        Ss    2011   4:27 /usr/sbin/sshd &lt;br /&gt;
(this is the sshd)&lt;br /&gt;
&lt;br /&gt;
Not:&lt;br /&gt;
 root      6270  0.5  0.0  6808 2536 ?        Ss   14:33   0:00 sshd: root [priv]&lt;br /&gt;
(this is an sshd child- someone already ssh&#039;d in as root)&lt;br /&gt;
&lt;br /&gt;
Restart the sshd: &lt;br /&gt;
 kill -1 &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex:&lt;br /&gt;
 kill -1 17296&lt;br /&gt;
&lt;br /&gt;
You may now ssh in.&lt;br /&gt;
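A toy illustration of why kill -1 works: SIGHUP (signal 1) is delivered to the daemon, which catches it and reloads rather than dying. This sketch uses a stand-in background loop, not a real sshd:

```shell
# Toy stand-in for sshd: a background loop that traps SIGHUP. `kill -1`
# delivers the signal; a real sshd re-reads its config at this point.
flag=$(mktemp)
( trap 'echo reloaded > '"$flag"'; exit 0' HUP; while :; do sleep 1; done ) &
pid=$!
sleep 1          # let the trap get installed
kill -1 "$pid"   # same as kill -HUP <PID>
wait "$pid" 2>/dev/null
cat "$flag"      # reloaded
```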
&lt;br /&gt;
Once you&#039;re done, IF you enabled root login, you should repeat steps 2 and 3 to disable root logins.&lt;br /&gt;
&lt;br /&gt;
== Letting someone in who has locked themselves out (killed sshd, lost pwd) ==&lt;br /&gt;
&lt;br /&gt;
There are two ways people frequently lock themselves out - either they forget a password, or they kill off sshd somehow.&lt;br /&gt;
&lt;br /&gt;
These are actually both fairly easy to solve.  First, let&#039;s say someone kills off their sshd, or somehow mangles /etc/ssh/sshd_config such that it no longer lets them in.&lt;br /&gt;
&lt;br /&gt;
Their email may be very short, or it may have all sorts of details about how you should fix sshd_config to let them in ... just ignore all of this. They can fix their own mangled sshd.  Fixing this is very simple.  First, edit the /etc/inetd.conf on their system and uncomment the telnet line:&lt;br /&gt;
&lt;br /&gt;
 telnet stream  tcp     nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
 #telnet stream  tcp6    nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
&lt;br /&gt;
(just leave the tcp6 version of telnet commented)&lt;br /&gt;
&lt;br /&gt;
Then, use jailps to list the processes on their system, and find their inetd process.  Then simply:&lt;br /&gt;
&lt;br /&gt;
 kill -HUP (pid)&lt;br /&gt;
&lt;br /&gt;
where (pid) is the PID of their inetd process.  Now they have telnet running on their system and they can log in and do whatever they need to do.&lt;br /&gt;
&lt;br /&gt;
The only complications that could occur are:&lt;br /&gt;
&lt;br /&gt;
a) their firewall config on our firewall has port 23 blocked, in which case you will need to open that - will be covered in a different lesson.&lt;br /&gt;
&lt;br /&gt;
b) they are not running inetd, so you can&#039;t HUP it.  If this happens, edit their /etc/rc.conf, add the inetd_enable=&amp;quot;YES&amp;quot; line, and then kill&lt;br /&gt;
their jail with /tmp/jailkill.pl - then restart their jail with the jail line from their quad/safe file.  Easy.&lt;br /&gt;
&lt;br /&gt;
If they have forgotten a password,&lt;br /&gt;
&lt;br /&gt;
On 6.x+ you can reset their password with:&lt;br /&gt;
 jexec &amp;lt;jailID from jls&amp;gt; passwd root&lt;br /&gt;
&lt;br /&gt;
Note: the default password for 6.x jails is 8ico2987, for 4.x it is p455agfa&lt;br /&gt;
&lt;br /&gt;
On 4.x, you need to cd to their etc directory&lt;br /&gt;
... for instance:&lt;br /&gt;
&lt;br /&gt;
 cd /mnt/data2/198.78.65.136-col00261-DIR/etc&lt;br /&gt;
&lt;br /&gt;
and run:&lt;br /&gt;
&lt;br /&gt;
 vipw -d .&lt;br /&gt;
&lt;br /&gt;
Then paste in these two lines (there&#039;s a paste with these):&lt;br /&gt;
&lt;br /&gt;
 root:$1$krszPxhk$xkCepSnz3mIikT3vCtJCt0:0:0::0:0:Charlie &amp;amp;:/root:/bin/csh&lt;br /&gt;
 user:$1$Mx9p5Npk$QdMU6c8YQqp2FW2M3irEh/:1001:1001::0:0:User &amp;amp;:/home/user:/bin/sh&lt;br /&gt;
&lt;br /&gt;
overwriting the lines they already have for &amp;quot;user&amp;quot; and &amp;quot;root&amp;quot; - then just tell them that both user and root have been reset to the default password of p455agfa.&lt;br /&gt;
&lt;br /&gt;
For linux, just passwd inside shell or &lt;br /&gt;
 vzctl set &amp;lt;veid&amp;gt; --userpasswd root:p455agfa --save&lt;br /&gt;
&lt;br /&gt;
Starting in 2009 we began giving out randomized passwords for FreeBSD and Linux as the default password. That is stored with each system in Mgmt. You should look for and reset the password to that password in the event of a reset and refer the customer to use their original password from their welcome email- this way we don’t have to send the password again via email (in clear text).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== sendmail can’t be contacted from ext ip (only locally) ==&lt;br /&gt;
&lt;br /&gt;
By default redhat puts this line in sendmail.mc:&lt;br /&gt;
&lt;br /&gt;
 DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
which makes it only answer on localhost.  Comment it out like:&lt;br /&gt;
&lt;br /&gt;
 dnl DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
and then rebuild sendmail.cf with:&lt;br /&gt;
&lt;br /&gt;
 m4 /etc/mail/sendmail.mc &amp;gt; /etc/sendmail.cf&lt;br /&gt;
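The edit itself can be done with sed instead of by hand; dnl is m4's discard-to-newline macro, which is why prefixing a sendmail.mc line with it comments the line out. Demonstrated on a temp copy:

```shell
# Prefix the loopback-only DAEMON_OPTIONS line with "dnl" on a temp copy
# of sendmail.mc (on a real system, edit /etc/mail/sendmail.mc).
mc=$(mktemp)
cat > "$mc" <<'EOF'
DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')
EOF
sed -i 's/^DAEMON_OPTIONS/dnl &/' "$mc"
cat "$mc"   # dnl DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')
```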
&lt;br /&gt;
== virt doesn’t properly let go of ve’s ip(s) when moved to another system ==&lt;br /&gt;
&lt;br /&gt;
On virtuozzo 2.6 systems, it&#039;s been observed that when moving IPs from one virt to another, sometimes the routing table will not get updated to reflect the removal of the ip addresses.&lt;br /&gt;
&lt;br /&gt;
A recent example was a customer that was moving to a new ve on a new virt and the ip addresses were traded between the two ve&#039;s.  After the trade the two systems were not able to talk to each other.  When looking at the routing table for the old system all the ip addresses were still in the routing table as being local, like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;netstat -rn | grep 69.55.225.149&lt;br /&gt;
69.55.225.149   0.0.0.0         255.255.255.255 UH       40 0          0 venet0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was preventing traffic to the other system from being routed properly.&lt;br /&gt;
The solution is to manually delete the route:&lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.225.149 gw 0.0.0.0&lt;br /&gt;
&lt;br /&gt;
Supposedly, this was fixed in 2.6.1&lt;br /&gt;
&lt;br /&gt;
== sshd on FreeBSD 6.2 segfaults ==&lt;br /&gt;
&lt;br /&gt;
First try to reinstall ssh&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/secure&lt;br /&gt;
cd lib/libssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.sbin/sshd&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.bin/ssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Failing that, find the library that’s messed up:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;ldd /usr/sbin/sshd&lt;br /&gt;
         libssh.so.3 =&amp;gt; /usr/lib/libssh.so.3 (0x280a3000) &lt;br /&gt;
         libutil.so.5 =&amp;gt; /lib/libutil.so.5 (0x280d8000) &lt;br /&gt;
         libz.so.3 =&amp;gt; /lib/libz.so.3 (0x280e4000) &lt;br /&gt;
         libwrap.so.4 =&amp;gt; /usr/lib/libwrap.so.4 (0x280f5000) &lt;br /&gt;
         libpam.so.3 =&amp;gt; /usr/lib/libpam.so.3 (0x280fc000) &lt;br /&gt;
         libbsm.so.1 =&amp;gt; /usr/lib/libbsm.so.1 (0x28103000) &lt;br /&gt;
         libgssapi.so.8 =&amp;gt; /usr/lib/libgssapi.so.8 (0x28112000) &lt;br /&gt;
         libkrb5.so.8 =&amp;gt; /usr/lib/libkrb5.so.8 (0x28120000) &lt;br /&gt;
         libasn1.so.8 =&amp;gt; /usr/lib/libasn1.so.8 (0x28154000) &lt;br /&gt;
         libcom_err.so.3 =&amp;gt; /usr/lib/libcom_err.so.3 (0x28175000) &lt;br /&gt;
         libroken.so.8 =&amp;gt; /usr/lib/libroken.so.8 (0x28177000) &lt;br /&gt;
         libcrypto.so.4 =&amp;gt; /lib/libcrypto.so.4 (0x28183000) &lt;br /&gt;
         libcrypt.so.3 =&amp;gt; /lib/libcrypt.so.3 (0x28276000) &lt;br /&gt;
         libc.so.6 =&amp;gt; /lib/libc.so.6 (0x2828e000) &lt;br /&gt;
         libmd.so.3 =&amp;gt; /lib/libmd.so.3 (0x28373000)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
md5 them and compare to other jail hosts or jails running on the host&lt;br /&gt;
&lt;br /&gt;
for libcrypto reinstall:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/crypto&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Moving a server to a different account in the management system ==&lt;br /&gt;
&lt;br /&gt;
This is a tough one. You gotta touch a bunch of tables in the mysql database.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  systems&lt;br /&gt;
  subs&lt;br /&gt;
  ipmap&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
basically, get the sysid and go off that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
update systems set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update subs set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update ipmap set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You then need to rename the server disk space to change the old ID to the new one.&lt;br /&gt;
&lt;br /&gt;
For a jail -&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stopjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;     69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;-DIR 69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;-DIR&lt;br /&gt;
vi /usr/local/jail/bin/quad1       [Update the ID in the quad file]&lt;br /&gt;
startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a virt -&lt;br /&gt;
You will have to &lt;br /&gt;
* stop the virt &lt;br /&gt;
* rename the directories &lt;br /&gt;
* modify the config file /vzconf/&amp;lt;veid&amp;gt;.conf &lt;br /&gt;
* rename the virt conf file &lt;br /&gt;
* restart the new virt.&lt;br /&gt;
&lt;br /&gt;
that ought to do it..&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Re-enable a customer account in the management system ==&lt;br /&gt;
&lt;br /&gt;
For example to re-instate col03438, In mysql:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
show tables ;&lt;br /&gt;
select * from customers where cid=&#039;col03438&#039; ;&lt;br /&gt;
update customers set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from  systems where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set cancel_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from subs where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set status=1 where cid=&#039;col03438&#039; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ganeti VPS =&lt;br /&gt;
&lt;br /&gt;
This is how to create a VPS on the Lamphost ganeti cluster.&lt;br /&gt;
I have created one script for creating a FreeBSD VPS and another for&lt;br /&gt;
creating a Linux VPS.&lt;br /&gt;
&lt;br /&gt;
== Ganeti Install a Linux VPS ==&lt;br /&gt;
&lt;br /&gt;
To create a linux VPS use&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/new &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
( for example /root/new vps.example.com 30 2 69.55.238.10 centos-8 )&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to customize the install by adding a &amp;quot;jci&amp;quot; account and a customer&lt;br /&gt;
account (col0XXXX) as well.  Make sure the customer acct (col0XXXX) is&lt;br /&gt;
in a group to allow sudo.&lt;br /&gt;
&lt;br /&gt;
== Ganeti Install a FreeBSD VPS ==&lt;br /&gt;
&lt;br /&gt;
To create a FreeBSD VPS, we have to do a full install.  This will &lt;br /&gt;
create a new VPS. On gn1, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/newfreebsd12 &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will then have to do the install via VNC.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance info &amp;lt;name&amp;gt; | grep -i connection&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Use your favorite VNC client (I use TigerVNC) to connect.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You need to customize the install by adding a &amp;quot;jci&amp;quot; account and a customer &lt;br /&gt;
account (col0XXXX) as well. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Make sure the customer acct (col0XXXX) is in a &lt;br /&gt;
group to allow sudo (wheel). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
You will need to install sudo.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
pkg install sudo    # install sudo package&lt;br /&gt;
visudo              # to add wheel account to sudoers&lt;br /&gt;
vi /etc/group       # add jci and user acct to wheel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
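The /etc/group edit can also be done non-interactively. A sketch on a temp copy of the file (col01234 is a made-up customer account for illustration):

```shell
# Append jci and the customer account to the wheel group line of a copy
# of /etc/group (col01234 is a hypothetical CID).
g=$(mktemp)
echo 'wheel:*:0:root' > "$g"
awk -F: -v OFS=: -v users='jci,col01234' \
    '$1 == "wheel" { $4 = ($4 == "" ? users : $4 "," users) } { print }' \
    "$g" > "$g.new" && mv "$g.new" "$g"
cat "$g"   # wheel:*:0:root,jci,col01234
```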
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You will also have to set up a console in /etc/ttys.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vi /etc/ttys&lt;br /&gt;
  (change onifconsole to on for ttyu0).&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
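The same change as a sed one-liner, shown on a temp copy of /etc/ttys (on the VPS, edit /etc/ttys itself and reboot, or signal init with kill -HUP 1 to reread it):

```shell
# Flip ttyu0 from onifconsole to on so the serial console is always up.
t=$(mktemp)
echo 'ttyu0 "/usr/libexec/getty 3wire" vt100 onifconsole secure' > "$t"
sed -i 's/^\(ttyu0[[:space:]].*\) onifconsole /\1 on /' "$t"
cat "$t"   # ttyu0 "/usr/libexec/getty 3wire" vt100 on secure
```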
&lt;br /&gt;
== Ganeti restart a VPS ==&lt;br /&gt;
&lt;br /&gt;
To reboot a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance reboot &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Ganeti shutdown a VPS ==&lt;br /&gt;
&lt;br /&gt;
To shut down a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance shutdown &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To permanently delete a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance remove &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Ganeti Re-Install a VPS ==&lt;br /&gt;
&lt;br /&gt;
Re-installing the OS will destroy all data on the VPS.&lt;br /&gt;
Login to the Ganeti master node (currently gn1).&lt;br /&gt;
&lt;br /&gt;
To reinstall the OS on a VPS, the VPS must be shut down.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance shutdown &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Do the reinstall.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance reinstall &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance startup &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to do the customization for the VPS.  This is&lt;br /&gt;
basically the same as the Dedicated server customization.&lt;br /&gt;
&lt;br /&gt;
 https://69.55.229.8/index.php/New_Signups#Dedicated_Server_Install_Checklist&lt;br /&gt;
&lt;br /&gt;
== Ganeti login to a VPS ==&lt;br /&gt;
&lt;br /&gt;
To login to a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance console &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;carriage return&amp;gt;&lt;br /&gt;
&amp;lt;carriage return&amp;gt;&lt;br /&gt;
&amp;lt;carriage return&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Use jci and appropriate Ganeti jci password.&lt;br /&gt;
&lt;br /&gt;
or you can ssh to the VPS name or IP address.&lt;br /&gt;
&lt;br /&gt;
= FreeBSD 10.1 VPS (jail5 and jail6) =&lt;br /&gt;
&lt;br /&gt;
This is information on how to manage the FreeBSD 10.1 VPS servers on jail5 and jail6.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 virtmake IP[,IP] &amp;lt;CID&amp;gt; hostname ipfw# email {FSP|FB|FS|FM|FP} [size in GB]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
For now, the last 2 digits of the CID must be unique.&lt;br /&gt;
&lt;br /&gt;
Must do a &amp;quot;vm start &amp;lt;CID&amp;gt;&amp;quot; after creating a VPS.&lt;br /&gt;
&lt;br /&gt;
== Starting VPS ==&lt;br /&gt;
To start all vms on the system&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To start a single VPS &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== List running VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm status&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Attach to a user VPS console ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm attach &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To detach from a console, type tilde control-d (~ ctl-d).&lt;br /&gt;
&lt;br /&gt;
== Stop a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Restart a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm restart &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Modify a VPS ==&lt;br /&gt;
&lt;br /&gt;
To modify a VPS config, stop the VPS&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Mount the VPS data file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm mount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
cd /data/vm/private/&amp;lt;CID&amp;gt;/mount&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Modify the user files as necessary.&lt;br /&gt;
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm umount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Notes on a new FBSD 10.x VPS (jail6) ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#! /bin/sh&lt;br /&gt;
#&lt;br /&gt;
#  new  &amp;lt;colo#&amp;gt;  &amp;lt;disk&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
memory=1024&lt;br /&gt;
&lt;br /&gt;
x=&amp;quot;$1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
disk=&amp;quot;$2&amp;quot;&lt;br /&gt;
if [ &amp;quot;y&amp;quot; = &amp;quot;y$disk&amp;quot; ] ; then&lt;br /&gt;
    disk=60&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
disk2=`expr $disk - 4`&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vm image provision 04dcdf96-d843-11e6-b131-001e67fcc64f col0$x&lt;br /&gt;
# vm configure col0$x&lt;br /&gt;
sed -e &amp;quot;s/memory=256M/memory=${memory}M/&amp;quot; /vms/col01114/col01114.conf &amp;gt;/vms/col0$x/col0$x.conf&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
# vm start col0$x&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
exit&lt;br /&gt;
 &lt;br /&gt;
vm console col0$x&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
# to increase the customer disk&lt;br /&gt;
#&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
&lt;br /&gt;
##  on the customer VPS&lt;br /&gt;
&lt;br /&gt;
gpart recover vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapoff /dev/vtbd0p3&lt;br /&gt;
gpart delete -i 3 vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
sysctl kern.geom.debugflags=16&lt;br /&gt;
gpart resize -i 2 -a 4k -s $disk2\G vtbd0&lt;br /&gt;
&lt;br /&gt;
growfs /dev/vtbd0p2&lt;br /&gt;
&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
gpart add -t freebsd-swap -a 4k vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapon /dev/vtbd0p3&lt;br /&gt;
&lt;br /&gt;
df -h&lt;br /&gt;
reboot&lt;br /&gt;
&lt;br /&gt;
vi /etc/rc.conf&lt;br /&gt;
  hostname&lt;br /&gt;
  IP Addr&lt;br /&gt;
passwd root&lt;br /&gt;
passwd user&lt;br /&gt;
clear history&lt;br /&gt;
  cp /dev/null /var/log/utx.log&lt;br /&gt;
  cp /dev/null ~root/.history&lt;br /&gt;
  history -c&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
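One detail worth calling out from the notes above: the data partition is resized to 4 GB less than the new disk size, which presumably leaves room for the swap partition that is re-created afterwards. The arithmetic in POSIX shell:

```shell
# Partition 2 grows to disk-4 GB; the remaining ~4 GB becomes swap again.
disk=60
disk2=$(( disk - 4 ))
echo "gpart resize -i 2 -a 4k -s ${disk2}G vtbd0"   # resize to 56G
```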
&lt;br /&gt;
= FreeBSD Jails =&lt;br /&gt;
&lt;br /&gt;
== Starting jails: Quad/Safe Files ==&lt;br /&gt;
&lt;br /&gt;
FreeBSD customer systems do not start up automatically at boot time.  When one of our freebsd machines boots up, it boots the base system and nothing else. To start jails, we put the commands to start each jail into a shell script(s) and run the script(s). Jail startup is something that needs to be actively monitored, which is why we don&#039;t just run the script automatically. More on monitoring later.&lt;br /&gt;
&lt;br /&gt;
NOTE: on &amp;gt;=7.x we have moved to a single quad file: &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. Startups are not done by running each quad; instead, run [[#startalljails|startalljails]], which relies on the contents of &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. The specifics are lower in this article. What follows here applies to pre-7.x systems.&lt;br /&gt;
&lt;br /&gt;
There are eight files in &amp;lt;tt&amp;gt;/usr/local/jail/rc.d&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail3# ls /usr/local/jail/rc.d/&lt;br /&gt;
quad1   quad2   quad3   quad4   safe1   safe2   safe3   safe4&lt;br /&gt;
jail3#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Four quad files and four safe files.&lt;br /&gt;
&lt;br /&gt;
Each file contains a roughly equal number of system startup blocks (the total number of jails divided by 4).&lt;br /&gt;
 &lt;br /&gt;
The reason for this split is that one large script to start all the systems at boot would take too long: the first system in the script would start right after boot, which is great, but the last system might not start for another 20 minutes.&lt;br /&gt;
&lt;br /&gt;
Since there is no way to parallelize this during the startup procedure, we simply open four terminals (in screen window 9) and run one script in each. This way they all run simultaneously, and the very last system in each startup script gets started in a quarter of the time it would take with one large file.&lt;br /&gt;
&lt;br /&gt;
The files are generally organized so that quad/safe 1&amp;amp;2 have only jails from disk 1, and quad/safe 3&amp;amp;4 have jails from disk 2. This helps ensure that no more than two fscks run on any disk at once. Further, they are balanced so that all the quad/safe files finish executing around the same time. We do this by making sure each quad/safe has a similar number of jails and represents a similar number of inodes (see js).&lt;br /&gt;
&lt;br /&gt;
The other, very important reason we do it this way (and the reason there are both quad files and safe files) is that in the event of a system crash, every single vn-backed filesystem that was mounted at the time of the crash needs to be fsck&#039;d.  However, fsck&#039;ing takes time, so if we shut the system down gracefully, we don&#039;t want to fsck.&lt;br /&gt;
&lt;br /&gt;
Therefore, we have two sets of scripts - the four quad scripts are identical to the four safe scripts except for the fact that the quad scripts contain fsck commands for each filesystem.&lt;br /&gt;
&lt;br /&gt;
So, if you shut a system down gracefully, start four terminals and run safe1 in window 1, safe2 in window 2, and so on.&lt;br /&gt;
 &lt;br /&gt;
If you crashed, start four terminals (or go to screen window 9) and run quad1 in window 1, quad2 in window 2, and so on.&lt;br /&gt;
&lt;br /&gt;
Here is a snip of (a 4.x version) quad2 from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
fsck -y /dev/vn16&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#fsck -y /dev/vn28&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
fsck -y /dev/vn22&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#fsck -y /dev/vn15&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, two of the systems specified are commented out - presumably those customers cancelled, or were moved to new servers.&lt;br /&gt;
&lt;br /&gt;
Note that the vnconfig line here is the simpler form of the command, not the longer one used when the filesystem was first configured.  All that happens is: vnconfig the filesystem, then fsck it, then mount it. The fourth command is the `jail` command used to start the system – that will be covered later.&lt;br /&gt;
&lt;br /&gt;
Here is the safe2 file from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, it is exactly the same, but it does not have the fsck lines.&lt;br /&gt;
&lt;br /&gt;
Take a look at the last entry - note that the file is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
&lt;br /&gt;
and the mount point is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
&lt;br /&gt;
This is the general format on all the FreeBSD systems.  The file is always named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber&lt;br /&gt;
&lt;br /&gt;
and the directory is named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber-DIR&lt;br /&gt;
&lt;br /&gt;
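To illustrate the convention (a sketch with hypothetical example values; this is not one of our shipped tools), the mount point can always be derived mechanically from the backing file name:&lt;br /&gt;

```shell
# Illustration only: the vn-backed file is named IP-custnumber, and
# its mount point is the same path with -DIR appended.
ip="69.55.238.5"; cust="col00106"
vnfile="/mnt/data2/${ip}-${cust}"
mountdir="${vnfile}-DIR"
echo "$vnfile"     # /mnt/data2/69.55.238.5-col00106
echo "$mountdir"   # /mnt/data2/69.55.238.5-col00106-DIR
```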
If you run a safe file when you actually need an fsck, the mount will fail and so will the jail command:&lt;br /&gt;
&lt;br /&gt;
 # mount /dev/vn1c /mnt/data2/jails/65.248.2.131-ns1.kozubik.com-DIR&lt;br /&gt;
 mount: /dev/vn1c: Operation not permitted&lt;br /&gt;
&lt;br /&gt;
No reboot is needed - just run the quad script.&lt;br /&gt;
&lt;br /&gt;
Starting with 6.x jails, we added block delimiters to the quad/safe files, the block looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: nuie.solaris.mu&#039;&lt;br /&gt;
fsck -y /dev/concat/v30v31a&lt;br /&gt;
mount /dev/concat/v30v31a /mnt/data1/69.55.228.218-col01441-DIR&lt;br /&gt;
mount_devfs devfs /mnt/data1/69.55.228.218-col01441-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.228.218-col01441-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.228.218-col01441-DIR nuie.solaris.mu 69.55.228.218 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: nuie.solaris.mu&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These are more than just informative when running quad/safe files: the echo lines MUST be present for certain tools to work properly. So it’s important that any update to the hostname also be made on the 2 echo lines. For example, if you try to startjail a jail whose hostname is on the jail line but not on the echo lines, the command will return with host not found.&lt;br /&gt;
&lt;br /&gt;
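Because the begin/end echo lines mirror the hostname, a jail’s entire startup block can be pulled out of a quad file mechanically. A sketch (not one of our shipped tools; the path and hostname are examples), assuming the block format shown above:&lt;br /&gt;

```shell
# Print the startup block for one jail, using the '## begin ##' and
# '## end ##' echo lines as delimiters. Tools that operate on single
# jails depend on exactly this matching, which is why the echo lines
# must be kept in sync with the hostname on the jail line.
quadfile=/usr/local/jail/rc.d/quad1   # example path
host=nuie.solaris.mu                  # example hostname
sed -n "/## begin ##: ${host}/,/## end ##: ${host}/p" "$quadfile"
```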
=== FreeBSD 7.x+ notes ===&lt;br /&gt;
&lt;br /&gt;
Starting with the release of FreeBSD 7.x, we are doing jail startups in a slightly different way. First, there is only 1 file: &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/quad1&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are no other quads or corresponding safe files. The reason for this is twofold: 1. we can pass -C to fsck, which tells it to skip the fsck if the fs is clean (no more need for safe files); 2. we have a new startup script which can be launched multiple times, running in parallel to start jails, where quad1 is the master jail file. &lt;br /&gt;
Quad1 could still be run as a shell script, but it would take a very long time to run completely, so that’s not advisable; instead, break it down into smaller chunks (like quad1, quad2, quad3, etc.)&lt;br /&gt;
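If you do break quad1 into chunks, the split has to fall on block boundaries. A rough awk sketch (hypothetical, assuming the begin/end echo delimiters; the chunk file names are made up) that deals the jail blocks round-robin into four files:&lt;br /&gt;

```shell
# Deal the jail blocks of quad1 round-robin into 4 chunk files so each
# can be run in its own window. The counter only advances at a
# '## begin ##' line, so no block is ever cut in half.
awk '/## begin ##:/ { n++ }
     { print > ("quad1." (n % 4)) }' /usr/local/jail/rc.d/quad1
```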
&lt;br /&gt;
Here is a snip of (a 7.x version) quad1 from jail2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: projects.tw.com&#039;&lt;br /&gt;
mdconfig -a -t vnode -f /mnt/data1/69.55.230.46-col01213 -u 50&lt;br /&gt;
fsck -Cy /dev/md50c&lt;br /&gt;
mount /dev/md50c /mnt/data1/69.55.230.46-col01213-DIR&lt;br /&gt;
mount -t devfs devfs /mnt/data1/69.55.230.46-col01213-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.230.46-col01213-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.230.46-col01213-DIR projects.tw.com 69.55.230.46 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: projects.tw.com&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cancelled jails are no longer commented out and stored in quad1, rather they’re moved to &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/deprecated&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
To start these jails, open the 4 ssh sessions as you would for a normal crash, then instead of running quad1-4, run startalljails in each window. IMPORTANT: before running startalljails, make sure you have run preboot once, as it clears out all the lockfiles and enables startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== Problems with the quad/safe files ==&lt;br /&gt;
&lt;br /&gt;
When you run the quad/safe files, there are two problems that can occur - either a particular system will hang during initialization, OR a system will spit out output to the screen, impeding your ability to do anything.  Or both.&lt;br /&gt;
&lt;br /&gt;
First off, when you start a jail, you see output like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;Skipping disk checks ...&lt;br /&gt;
adjkerntz[25285]: sysctl(put_wallclock): Operation not permitted&lt;br /&gt;
Doing initial network setup:.&lt;br /&gt;
ifconfig: ioctl (SIOCDIFADDR): permission denied&lt;br /&gt;
lo0: flags=8049&amp;lt;UP,LOOPBACK,RUNNING,MULTICAST&amp;gt; mtu 16384&lt;br /&gt;
Additional routing options: TCP keepalive=YESsysctl:&lt;br /&gt;
net.inet.tcp.always_keepalive: Operation not permitted.&lt;br /&gt;
Routing daemons:.&lt;br /&gt;
Additional daemons: syslogd.&lt;br /&gt;
Doing additional network setup:.&lt;br /&gt;
Starting final network daemons:.&lt;br /&gt;
ELF ldconfig path: /usr/lib /usr/lib/compat /usr/X11R6/lib /usr/local/lib&lt;br /&gt;
a.out ldconfig path: /usr/lib/aout /usr/lib/compat/aout /usr/X11R6/lib/aout&lt;br /&gt;
Starting standard daemons: inetd cron sshd sendmail sendmail-clientmqueue.&lt;br /&gt;
Initial rc.i386 initialization:.&lt;br /&gt;
Configuring syscons: blanktime.&lt;br /&gt;
Additional ABI support:.&lt;br /&gt;
Local package initialization:.&lt;br /&gt;
Additional TCP options:.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, let&#039;s look at this line, near the end:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization:.&lt;br /&gt;
&lt;br /&gt;
This is where a list of daemons that are set to start at boot time will show up.  You might see something like:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache sendmail sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
Or something like this:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: postgres postfix apache&lt;br /&gt;
&lt;br /&gt;
The problem is that many systems (about 4-5 per machine) will hang on that line.  Basically it will get partway through the daemons to be started:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache&lt;br /&gt;
&lt;br /&gt;
and will just sit there.  Forever.&lt;br /&gt;
&lt;br /&gt;
Fortunately, pressing ctrl-c will break out of it.  Not only will it break out of it, but it will also continue on that same line and start the other daemons:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache ^c sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
and then continue on to finish the startup, and then move to the next system to be started.&lt;br /&gt;
&lt;br /&gt;
So what does this mean?  It means that if a machine crashes, and you start four screen windows to run four quads or four safes, you need to periodically cycle between them and see if any system is stuck at that point, causing its quad/safe file to hang.  A good rule of thumb is: if you see a system at that point in the startup, give it another 100 seconds - if it is still at the exact same spot, hit ctrl-c. It’s also a good idea to go back into the quad file (just before the first command in the jail startup block) and note that this jail tends to need a ctrl-c or more time, as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;### NOTE ### slow sendmail&#039;&lt;br /&gt;
echo &#039;### NOTE ###: ^C @ Starting sendmail.&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NEVER&#039;&#039;&#039; hit ctrl-c repeatedly if you don&#039;t get an immediate response - that will cause the following jail’s startup commands to be aborted.&lt;br /&gt;
&lt;br /&gt;
A second problem that can occur is that a jail - maybe the first one in that particular quad/safe, maybe the last one, or maybe one in the middle, will start spitting out status or error messages from one of its init scripts.  This is not a problem - basically, hit enter a few times and see if you get a prompt - if you do get a prompt, that means that the quad/safe script has already completed.  Therefore it is safe to log out (and log out of the user that you su&#039;d from) and then log back in (if necessary).&lt;br /&gt;
&lt;br /&gt;
The tricky case is when a system in the middle starts flooding with messages, and you hit enter a few times and don&#039;t get a prompt.  Are you not getting a prompt because some subsequent system is hanging at initialization, as discussed above?  Or because that quad file is currently running an fsck?  Usually you can tell by scrolling back in screen’s history to see what it was doing before you started getting the messages.&lt;br /&gt;
&lt;br /&gt;
If you don’t get clues from history, you have to use your judgement - instead of giving it 100 seconds to respond, perhaps give it 2-3 mins ... if you still get no response (no prompt) when you hit enter, hit ctrl-c.  However, be aware that you might still be hitting ctrl-c in the middle of an fsck.  This means you will get an error like &amp;quot;filesystem still marked dirty&amp;quot; and then the vnconfig for it will fail and so will the jail command, and the next system in the quad file will then start starting up.&lt;br /&gt;
&lt;br /&gt;
If this happens, just wait until the end of all the quad files have finished, and start that system manually.&lt;br /&gt;
&lt;br /&gt;
If things really get weird - a screen flooded with errors, no prompt, and ctrl-c does nothing - then eventually (give it ten mins or so) kill that window with ctrl-p, then k, and then log in again, manually check which systems are now running and which aren&#039;t, and manually start up any that are not.&lt;br /&gt;
&lt;br /&gt;
Don&#039;t EVER risk running a particular quad/safe file a second time.&lt;br /&gt;
If the quad/safe script gets executed twice, reboot the machine immediately.&lt;br /&gt;
&lt;br /&gt;
So, for all the above reasons, anytime a machine crashes and you run all the quads or all the safes, &#039;&#039;&#039;always&#039;&#039;&#039; check every jail afterwards to make sure it is running - even if you have no hangs or complications at all.&lt;br /&gt;
Run this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailpsall|jailpsall]]&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Note: [[#postboot|postboot]] also populates ipfw counts, so it &#039;&#039;&#039;should not be run multiple times&#039;&#039;&#039;,  use &amp;lt;tt&amp;gt;jailpsall&amp;lt;/tt&amp;gt; for subsequent extensive ps’ing&lt;br /&gt;
&lt;br /&gt;
And make sure they all show as running.  If one does not show as running, check its /etc/rc.conf file to see if maybe it is using a different hostname before starting it manually.&lt;br /&gt;
&lt;br /&gt;
One thing we have implemented to alleviate these startup hangs and noisy jails is to put jail start blocks that are slow or prone to hanging at the bottom of the safe/quad file. Further, for each such jail we note in each quad/safe, just before the start block, something like:&lt;br /&gt;
&lt;br /&gt;
 echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
&lt;br /&gt;
That way we’ll be prepared to ^C when we see that message appear during the quad/safe startup process. If you observe a new, undocumented hang, &#039;&#039;&#039;after&#039;&#039;&#039; the quad/safe has finished, place a line similar to the above in the quad file, move the jail start block to the end of the file, then run [[#buildsafe|buildsafe]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 4.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which partition, IP to put it on, find an unused vn, and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vnX hostname shorthost quad/safe# ipfw# email [gb disk, default 4]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail14# /tmp/jailmake 69.55.226.152 /mnt/data2/69.55.226.152-col00182 vn23 negev.cerastes.org negev 4 182 cerastes@cerastes.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 6.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which gvinum vols are free (and which mnt (disk) each gvinum goes with/to), IP to put it on and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vN[,vN] hostname shorthost quad/safe# ipfw# email&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail19# jailmake 69.55.236.17 /mnt/data1/69.55.236.17-col01574 v10 uncle-leo.commadev.com uncle-leo 1 1574 lee@commadev.com,lee@gmail.com&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Changing an ip for freebsd VPS ==&lt;br /&gt;
&lt;br /&gt;
*stopjail (hostname)&lt;br /&gt;
*on systems using vnfile or mdfile: rename vnfile/mdfile, using new ip&lt;br /&gt;
*rename dir using new ip&lt;br /&gt;
*Edit quad (make sure to update all lines: directory, vn/mdfile)&lt;br /&gt;
*buildsafe (not necessary on systems having only quad1)&lt;br /&gt;
*if you&#039;re adding an ip not already on the host machine: &amp;lt;tt&amp;gt;ipadd x.x.x.x&amp;lt;/tt&amp;gt;&lt;br /&gt;
*startjail (hostname)&lt;br /&gt;
*if backups: &amp;lt;tt&amp;gt;mvbackups&amp;lt;/tt&amp;gt; ...&lt;br /&gt;
*Edit dir/ip in mgmt&lt;br /&gt;
*if any rules, update any firewall rules to use the new ip (be careful: make sure there aren&#039;t multiple rules per ipfw# - search by colo#)&lt;br /&gt;
*(if customer asks and has domains) update their domains on ns1c&lt;br /&gt;
*optional: update ipfw counters on the host server&lt;br /&gt;
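The rename steps above amount to something like the following sketch (illustration only: the IPs, customer number, and paths are made up, and on a real host you would verify each step by hand):&lt;br /&gt;

```shell
# Hypothetical IP change for one jail: rename the backing file and its
# -DIR mount point, then rewrite every occurrence of the old IP in the
# quad file (directory, vn/mdfile, jail line, and begin/end echo lines).
old=69.55.238.5; new=69.55.230.9; cust=col00106
mv "/mnt/data2/${old}-${cust}"     "/mnt/data2/${new}-${cust}"
mv "/mnt/data2/${old}-${cust}-DIR" "/mnt/data2/${new}-${cust}-DIR"
# NB: the dots in the IP are regex metacharacters; harmless here, but
# escape them if you want to be strict.
sed -i.bak "s/${old}/${new}/g" /usr/local/jail/rc.d/quad1
```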
&lt;br /&gt;
&lt;br /&gt;
== Rename a gconcat vol ==&lt;br /&gt;
&lt;br /&gt;
*stop jail, unmount everything&lt;br /&gt;
*gconcat stop vol&lt;br /&gt;
*gconcat clear /dev/gvinum/vN&lt;br /&gt;
*gconcat clear /dev/gvinum/vN+1&lt;br /&gt;
*gconcat label -v newname /dev/gvinum/vN /dev/gvinum/vN+1&lt;br /&gt;
*bsdlabel /dev/concat/newname&lt;br /&gt;
(make sure the partition&#039;s fstype is not &amp;quot;unused&amp;quot;; if it is, change it to 4.2BSD)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Remaking a VPS (on same jail) ==&lt;br /&gt;
&lt;br /&gt;
# take him down (stopjail)&lt;br /&gt;
# make note of host, vn/gvin/md, ip&lt;br /&gt;
# remove from quad/safe&lt;br /&gt;
# move vnfile (rename)&lt;br /&gt;
# remove dir (jailmake will make a new one)&lt;br /&gt;
# remove ipfw counts&lt;br /&gt;
# jailmake&lt;br /&gt;
# update backup script&lt;br /&gt;
# update db with new dir/ip (use the “move” link, mark old as stopped today)&lt;br /&gt;
# update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (FreeBSD) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all jails back up as soon as possible. Note the time; you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages. Try to copy any messages (or just a sample of repeating messages) into the notes section of the crash log. If there are no messages, the machine may just be really busy - wait a bit (5-10 min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. Note: if you see messages about swap space exhausted, the server is obviously out of memory; however, it may recover briefly enough for you to get a jtop in to see who&#039;s launched a ton of procs (most likely) and then issue a quick jailkill to get it back under control.&lt;br /&gt;
&lt;br /&gt;
If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. if you can’t ssh into the DRAC card as root, using the standard root pass, and issue &lt;br /&gt;
 racadm serveraction hardreset&lt;br /&gt;
), then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console:&lt;br /&gt;
 tip jailX&lt;br /&gt;
immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts, look out for the RAID card output: does everything look healthy? The output may be scrambled; look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple of times during the boot up. The reason you want to re-attach quickly is two-fold: 1. if you don’t reattach quickly, you won’t get any console output; 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts an (extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started and the server will appear frozen during startup - no output, no response. &lt;br /&gt;
&lt;br /&gt;
IMPORTANT NOTE: on some older FreeBSD systems, there will be no output to the video (KVM) console as it boots up. The console output is redirected to the serial port ... so if a jail crashes, and you attach a kvm, the output during the bootup procedure will not be shown on the screen. However, when the bootup is done, you will get a login prompt on the screen and will be able to log in as normal.  &amp;lt;tt&amp;gt;/boot/loader.conf&amp;lt;/tt&amp;gt; is where serial console redirect output lives, so comment that if you want to catch output on kvm.&lt;br /&gt;
On newer systems it sends most output to both locations. &lt;br /&gt;
&lt;br /&gt;
=== Assess the health of the server ===&lt;br /&gt;
Once the server boots up fully, you should be able to ssh in. Look around- make sure all the mounts are there and reporting the correct size/usage (i.e. /mnt/data1 /mnt/data2 /mnt/data3 - look in /etc/fstab to determine which mount points should be there), check to see if RAID mirrors are healthy. See [[RAID_Cards#Common_CLI_commands_.28megacli.29|megacli]], [[#aaccheck|aaccheck]]&lt;br /&gt;
&lt;br /&gt;
Before you start the jails, you need to run [[#preboot|preboot]]. This will do some assurance checks to make sure things are prepped to start the jails. Any issues that come out of preboot need to be addressed before starting jails.&lt;br /&gt;
&lt;br /&gt;
=== Start jails ===&lt;br /&gt;
[[#Starting_jails:_Quad.2FSafe_Files|More on starting jails]]&lt;br /&gt;
Customer jails (the VPSs) do not start up automatically at boot time. When a FreeBSD machine boots up, it brings up the base system and does nothing else. To start jails, we put the commands to start each jail into one or more shell scripts and run those scripts. Jail startup needs to be actively monitored, which is why we don’t just run the scripts automatically. &lt;br /&gt;
&lt;br /&gt;
In order to start jails, we run the quad files: quad1 quad2 quad3 and quad4 (on new systems there is only quad1). If the machine was cleanly rebooted (which wouldn&#039;t be the case after a crash), you may run the safe files (safe1 safe2 safe3 safe4) in lieu of the quads. &lt;br /&gt;
&lt;br /&gt;
Open up 4 logins to the server (use the windows in [[Screen#Screen_Organization|a9]]).&lt;br /&gt;
Then, in each of the 4 windows:&lt;br /&gt;
&lt;br /&gt;
If there is a [[#startalljails|startalljails]] script (and only quad1), run that command in each of the 4 windows. It will parse through the quad1 file and start each jail. Follow the instructions [[#Problems_with_the_quad.2Fsafe_files|here]] for monitoring startup. Note that you can be a little more lenient with jails that take a while to start - startalljails will work around the slow jails and start the rest. As long as there aren&#039;t 4 jails which are &amp;quot;hung&amp;quot; during startup, the rest will get started eventually.&lt;br /&gt;
	-or-&lt;br /&gt;
If there is no startalljails script, there will be multiple quad files. In each of the 4 windows, start each of the quads. i.e. start quad1 in window1, quad2 in window2 and so on. DO NOT start any quad twice. It will crash the server. If you accidentally do this, just jailkill all the jails which are in the quad and run the quad again. Follow the instructions here for monitoring quad startup.&lt;br /&gt;
&lt;br /&gt;
Note the time the last jail boots- this is what you will enter in the crash log.&lt;br /&gt;
&lt;br /&gt;
Save the crash log.&lt;br /&gt;
&lt;br /&gt;
=== Check to make sure all jails have started ===&lt;br /&gt;
There&#039;s a simple script which will make sure all jails have started, and enter the ipfw counter rules: [[#postboot|postboot]] &lt;br /&gt;
Run postboot, which will do a jailps on each jail it finds (excluding commented out jails) in the quad file(s). We&#039;re looking for 2 things:&lt;br /&gt;
# systems spawning out of control or running too many procs&lt;br /&gt;
# jails which haven&#039;t started&lt;br /&gt;
On 7.x and newer systems it will print out the problems (which jails haven&#039;t started) at the conclusion of postboot. &lt;br /&gt;
On older systems you will need to watch closely to see if/when there&#039;s a problem, namely:&lt;br /&gt;
 &lt;br /&gt;
 [hostname] doesnt exist on this server&lt;br /&gt;
&lt;br /&gt;
When you get this message, it means one of 2 things:&lt;br /&gt;
1. the jail really didn&#039;t start:&lt;br /&gt;
When a jail doesn&#039;t start it usually boils down to a problem in the quad file. Perhaps the path name is wrong (data1 vs data2) or the name of the vn/mdfile is wrong. Once this is corrected, you will need to run the commands from the quad file manually, or you may use &amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. the customer has changed their hostname (and not told us) so their jail &#039;&#039;is&#039;&#039; running, just under a different hostname:&lt;br /&gt;
On systems with jls, this is easy to rectify. First, get the customer info: &amp;lt;tt&amp;gt;g &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
Then look for the customer in jls: &amp;lt;tt&amp;gt;jls | grep &amp;lt;col0XXXX&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
From there you will see their new hostname- you should update that hostname in the quad file: don&#039;t forget to edit it on the &amp;lt;tt&amp;gt;## begin ##&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;## end ##&amp;lt;/tt&amp;gt; lines, and in mgmt. &lt;br /&gt;
On older systems without jls this will be harder; you will need to look further to find their hostname - perhaps it’s in their /etc/rc.conf&lt;br /&gt;
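Once you know the new hostname, the quad file can be fixed in one pass; a sketch (the hostnames here are placeholders) that updates the jail line and both echo delimiter lines together:&lt;br /&gt;

```shell
# One-pass hostname update: fixes the jail line AND the matching
# '## begin ##'/'## end ##' echo lines so startjail and friends keep
# working. oldhost/newhost are example values.
oldhost=old.example.com
newhost=new.example.com
# NB: dots in the hostnames are regex metacharacters; escape them if
# you want to be strict.
sed -i.bak "s/${oldhost}/${newhost}/g" /usr/local/jail/rc.d/quad1
```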
&lt;br /&gt;
&lt;br /&gt;
Once all jails are started, do some spot checks- try to ssh or browse to some customers, just to make sure things are really ok.&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 7.x/8.x jail ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (md) ==&lt;br /&gt;
&lt;br /&gt;
NOTE: this doesn’t apply to mx2, which uses gvinum - use the same procedure as 6.x&lt;br /&gt;
NOTE: if you unmount before mdconfig, re-mdconfig (attach) then unmount then mdconfig -u again &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
(parts to change/customize are &amp;lt;tt&amp;gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from &amp;lt;tt&amp;gt;js&amp;lt;/tt&amp;gt;. Ideally, you want to put the customers new space on a different partition (and create the new md on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. Unmount it (including their devfs) but leave the md config’d (so if you use stopjail, you will have to re-mdconfig it)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mdfile and mount name, and to see the current md device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. When there&#039;s enough room to place new system on an alternate, or the same drive:&lt;br /&gt;
USE CAUTION not to overwrite (touch, mdconfig) existing md!!&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s 10g -f /mnt/data3/69.55.234.66-col01334 -u 97&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md97&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new space is on a different drive, move the mount point directory AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
md-unconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
archive old mdfile. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/old-col00241-mdfile-noarchive-20091211&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (vq1) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
(run &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, &amp;lt;tt&amp;gt;du&amp;lt;/tt&amp;gt; the dump file to confirm it matches size, roughly, with usage)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure and remove old mdfile&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
rm /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(there should now be enough space to recreate your bigger system. If not, run sync a couple times)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new system (ok to reuse old mdfile and md#):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;10&amp;lt;/span&amp;gt;g -f /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt; -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
umount nfs:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbu&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If md# changed (or mount point), edit quad (&amp;lt;tt&amp;gt;vq1&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. Optional: archive old mdfile&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;br&amp;gt;&lt;br /&gt;
gzip -c old-col01588-mdfile-noarchive-20120329 &amp;gt; /deprecated/old-col01588-mdfile-noarchive-20120329.gz&amp;lt;br&amp;gt;&lt;br /&gt;
mbu&amp;lt;br&amp;gt;&lt;br /&gt;
rm  old-col01588-mdfile-noarchive-20120329&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 6.x jail (gvinum/gconcat) ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (gvinum/gconcat) ==&lt;br /&gt;
&lt;br /&gt;
(parts to change are &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;highlighted&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new md on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;[[#stopjail|stopjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;[[#g|g]] &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mount name and existing volume/device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5a. When there&#039;s enough room to place new system on an alternate, or the same drive (using only UNUSED - including if it&#039;s in use by the system in question - gvinum volumes):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure the new device:&amp;lt;br&amp;gt;&lt;br /&gt;
A. for a 2G system (single gvinum volume):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;bsdlabel -r -w /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
-or- &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
B. for a &amp;gt;2G system (create a gconcat volume):&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Other valid gconcat examples:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label v82-v84v109v112 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v109 /dev/gvinum/v112&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat label v82v83 /dev/gvinum/v82 /dev/gvinum/v83&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Note, long names will truncate: v144v145v148-v115 will truncate to v144v145v148-v1 (so you will refer to it as v144v145v148-v1 thereafter)&lt;br /&gt;
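The truncation in the note above drops a 17-character name to 15 characters, which suggests (an inference from this example, not something stated in the source) a 15-character limit on these device names. A quick check of the lengths involved:&lt;br /&gt;

```shell
# v144v145v148-v115 is 17 chars; the example shows it truncating to the
# first 15 (v144v145v148-v1), so plan names with that limit in mind.
name='v144v145v148-v115'
echo ${#name}      # 17
echo "${name%??}"  # the 15-char prefix: v144v145v148-v1
```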
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; OR &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
A. &amp;lt;tt&amp;gt;mount /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
-or-&amp;lt;br&amp;gt;&lt;br /&gt;
B. &amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;) , run &amp;lt;tt&amp;gt;buildsafe&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space (i.e. if you want/need to reuse the existing gvinum volumes and add on more):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
(run df to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;concat/v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, du the dump file to confirm it matches size, roughly, with usage)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure the old gconcat volume&amp;lt;br&amp;gt;&lt;br /&gt;
list member gvinum volumes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat list &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Output will resemble:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;Geom name: v106v107&lt;br /&gt;
State: UP&lt;br /&gt;
Status: Total=2, Online=2&lt;br /&gt;
Type: AUTOMATIC&lt;br /&gt;
ID: 3530663882&lt;br /&gt;
Providers:&lt;br /&gt;
1. Name: concat/v106v107&lt;br /&gt;
   Mediasize: 4294966272 (4.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e2&lt;br /&gt;
Consumers:&lt;br /&gt;
1. Name: gvinum/sd/v106.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 0&lt;br /&gt;
   End: 2147483136&lt;br /&gt;
2. Name: gvinum/sd/v107.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 2147483136&lt;br /&gt;
   End: 4294966272&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
stop volume and clear members&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat stop &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat clear &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;gvinum/sd/v106.p0.s0 gvinum/sd/v107.p0.s0&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create new device - it&#039;s ok to reuse old/former members&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v106 /dev/gvinum/v107&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;), run buildsafe&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
TODO: clean up/clear old gvin/gconcat vol&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - steps to tack a new gvinum volume onto an existing gconcat - this leads to a corrupted fs&lt;br /&gt;
bsdlabel -e /dev/concat/v82-v84&lt;br /&gt;
&lt;br /&gt;
To figure out new size of the c partition, multiply 4194304 by the # of 2G gvinum volumes and subtract the # of 2G volumes:&lt;br /&gt;
10G: 4194304 * 5 - 5 = 20971515&lt;br /&gt;
8G: 4194304 * 4 - 4 = 16777212&lt;br /&gt;
6G: 4194304 * 3 - 3 = 12582909&lt;br /&gt;
4G: 4194304 * 2 - 2 = 8388606&lt;br /&gt;
&lt;br /&gt;
To figure out the new size of the a partition, subtract 16 from the c partition:&lt;br /&gt;
10G: 20971515 - 16 = 20971499&lt;br /&gt;
8G: 16777212 - 16 = 16777196&lt;br /&gt;
6G: 12582909 - 16 = 12582893&lt;br /&gt;
4G: 8388606 - 16 = 8388590&lt;br /&gt;
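The tables above are plain arithmetic, so they can be sanity-checked in the shell (6G case shown; each 2G gvinum volume contributes 4194304 sectors):&lt;br /&gt;

```shell
# 6G example from the tables above: c partition, then a partition (c minus 16)
echo $((4194304 * 3 - 3))       # 12582909
echo $((4194304 * 3 - 3 - 16))  # 12582893
```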
&lt;br /&gt;
Orig:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a:  8388590       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c:  8388606        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
New:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a: 12582893       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c: 12582909        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
sync; sync&lt;br /&gt;
&lt;br /&gt;
growfs /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
sync&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
(keep running fsck’s till NO errors)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 4.x jail ==&lt;br /&gt;
&lt;br /&gt;
NOTE: if you unmount before vnconfig, re-vnconfig then unmount then vnunconfig&lt;br /&gt;
 &lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them.&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to dump to a different partition and create the new vn on the current partition. If there’s no space to make the dump, then move old vn’s to backup or, worse, dump to backup.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. make vnfile, newfs and mount. USE CAUTION not to overwrite existing vn. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;vnconfig -T -S 9g -s labels -c /dev/vn30 /mnt/data2/65.214.160.117-col00496&amp;lt;br&amp;gt;&lt;br /&gt;
disklabel -r -w vn30 auto&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/vn30c&amp;lt;br&amp;gt;&lt;br /&gt;
mkdir /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/vn30c /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;dump -0a -f - /dev/vn1 | restore -r -f -&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. &amp;lt;tt&amp;gt;rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. unmount and unvnconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;umount /dev/vnNNc &amp;lt;br&amp;gt;&lt;br /&gt;
vnconfig -u /dev/vnNN&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. move vnfile. Ex: &amp;lt;tt&amp;gt;mv /mnt/data1/69.55.237.26-col00241 /mnt/data1/old-col00241-vnfile-20110312-noarchive&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove old dir: &amp;lt;tt&amp;gt;rmdir /mnt/data1/69.55.237.26-col00241-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. edit quad/safe to point to new location, run &amp;lt;tt&amp;gt;[[#buildsafe|buildsafe]]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start jail: &amp;lt;tt&amp;gt;[[#startjail|startjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
13. mv backups if necessary&lt;br /&gt;
 &lt;br /&gt;
WARNING: if you restore to unmounted vn then you’re actually restoring to /mnt/data1&lt;br /&gt;
you can tar and do above to recover&lt;br /&gt;
 &lt;br /&gt;
NOTE: you can’t move a customer to a system that has a base sys that is diff from the one they came from (BSD 4.8 to 4.5 = won’t work)&lt;br /&gt;
 &lt;br /&gt;
NOTE: we don’t mount procfs&#039;s to anyone’s system by default.&lt;br /&gt;
&lt;br /&gt;
NOTE: mount_nulls: &lt;br /&gt;
also goofy mount_nulls seen in df -k on jail2&amp;lt;br&amp;gt;&lt;br /&gt;
also needs to be done in safe and quad&amp;lt;br&amp;gt;&lt;br /&gt;
if the customer wants another, use safe/quad as an example and add it to the file. also exec the command at the prompt to add the mount (no reboot, remount or rejail necessary)&lt;br /&gt;
&lt;br /&gt;
== Moving customer to another jail machine ==&lt;br /&gt;
&lt;br /&gt;
Systems should only be moved between similarly-versioned jails. If you cannot get a match, you should go to freebsd.org and see what the differences were between the version you&#039;re moving from and the new version. Many of the updates will be to drivers and kernels - these don&#039;t affect the customer. Perhaps they updated traceroute or a library...you should copy the new/changed files into the VPS/jail on the new host. An alternate method for moving to a different version would be to buildworld to bring their full distribution up to date. Optional steps for this kind of upgrade are included below.&lt;br /&gt;
&lt;br /&gt;
1. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;[[#jailkill|jailkill]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. create new device on target system&amp;lt;br&amp;gt;&lt;br /&gt;
4.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig -T -S 4g -s labels -c /dev/vn1 /mnt/data1/69.55.22x.x-col00XXX&lt;br /&gt;
disklabel -r -w vn1 auto&lt;br /&gt;
newfs /dev/vn1c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;bsdlabel -r -w /dev/gvinum/v1&lt;br /&gt;
newfs /dev/gvinum/v1a&lt;br /&gt;
 or &lt;br /&gt;
gconcat label v1-v3 /dev/gvinum/v1 /dev/gvinum/v2 /dev/gvinum/v3&lt;br /&gt;
bsdlabel -r -w /dev/concat/v1-v3&lt;br /&gt;
newfs /dev/concat/v1-v3a&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7.x+:&amp;lt;br&amp;gt;&lt;br /&gt;
Run jailmakeempty then skip steps 4 &amp;amp; 7 below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. make and mount the dir on new system. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;pre&amp;gt;mkdir /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
chmod 755 /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
mount [device] /mnt/data2/69.55.230.3-col00123-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. [[#stopjail|stopjail]] &amp;lt;hostname&amp;gt; 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. dump fs to new system. ex: &amp;lt;tt&amp;gt;[[#dumpremoterestore|dumpremoterestore]] /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&amp;lt;/tt&amp;gt; (make sure you can ssh as root on remote machine) &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. OPTIONAL BUILDWORLD:&amp;lt;br&amp;gt;&lt;br /&gt;
 cd /usr/src&lt;br /&gt;
 make world DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
 cd etc&lt;br /&gt;
 make distribution DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
(you may have to rm an openssh file and re-make dist)&lt;br /&gt;
&lt;br /&gt;
 rm -rf /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/periodic/daily/400.status-disks&lt;br /&gt;
 vi /etc/periodic/security/100.chksetuid&lt;br /&gt;
replace: &amp;lt;tt&amp;gt;MP=`mount -t ufs | grep -v &amp;quot; nosuid&amp;quot; | awk &#039;{ print $3 }&#039; | sort`&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
with: &amp;lt;tt&amp;gt;MP=&#039;/&#039; (use single quotes)&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ask user if they want ports overwritten with current, if yes:&lt;br /&gt;
 cp -r /usr/ports /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/usr&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d .&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d . &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cat /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target:&lt;br /&gt;
 cat &amp;gt; /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. edit quad on the source system and copy over the entries for the jail to the target system. &lt;br /&gt;
Take care that the vn/gvinum/gconcat/md devices on the target system aren’t in use and that the /mnt/dataN path doesn’t need changing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. run [[#buildsafe|buildsafe]] (if this is &amp;lt;=6.x) on target system (copies changes made to quad into safe file)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove ip from source system. Ex: &amp;lt;tt&amp;gt;ipdel 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. add ip to target system. Ex: &amp;lt;tt&amp;gt;ipadd 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start new system, with [[#startjail|startjail]] or manually, pasting entries found from running &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; on the new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. run &amp;lt;tt&amp;gt;[[#canceljail|canceljail]] col0xxxx&amp;lt;/tt&amp;gt; on source system. This should prompt you to remove backups if any existed. Do not let it do this, rather:&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
13. if backups existed move them to the new host via the &amp;lt;tt&amp;gt;[[#mvbackups_.28freebsd.29|mvbackups]] script&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
14. edit quad on source system, edit comment to reflect a move rather than a cancel, ex: &amp;lt;tt&amp;gt;# moved to jail2 col00241&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
15. edit mgmt to reflect new host and dir for new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
16. optional add ipfw rules on new system. Ex:&lt;br /&gt;
&amp;lt;tt&amp;gt;ipfw add 01231 count ip from 69.55.230.2 to any&lt;br /&gt;
ipfw add 01232 count ip from any to 69.55.230.2&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Increasing inodes for a VPS ==&lt;br /&gt;
&lt;br /&gt;
when doing the newfs: &lt;br /&gt;
 newfs -i 4096 ...&lt;br /&gt;
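With newfs, -i is bytes of data space per inode, so lowering it to 4096 raises the inode count. As a rough sketch (a back-of-the-envelope figure that ignores filesystem overhead, not an exact newfs calculation), a 10G filesystem at that density gets about:&lt;br /&gt;

```shell
# bytes-per-inode of 4096 on a 10G filesystem (rough; ignores fs overhead)
echo $((10 * 1024 * 1024 * 1024 / 4096))  # 2621440 inodes
```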
&lt;br /&gt;
&lt;br /&gt;
== Jail services unable to contact services within same jail ==&lt;br /&gt;
&lt;br /&gt;
This is due to a messed up routing table (only seen it on jail2 before). &lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.228.65/32&lt;br /&gt;
 route add -net 69.55.228.65/32 -iface fxp0 -nostatic -cloning&lt;br /&gt;
 route add 69.55.228.65 -iface lo0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== /dev/null permission resets == &lt;br /&gt;
&lt;br /&gt;
Applies to 4.x jails only&lt;br /&gt;
&lt;br /&gt;
For some reason that I do not understand at all, the /dev/null node in customer jails, after the system is restarted, often reverts permissions to 0600 ... which is bad, because non-root processes cannot redirect to /dev/null anymore.  In fact, a fair number of server daemons will complain and/or fail if this is the case.&lt;br /&gt;
&lt;br /&gt;
So, after a restart, after all systems have restarted, run postboot, which does something like:&lt;br /&gt;
 &lt;br /&gt;
 for f in `df -k | grep /dev/vn | awk &#039;{print $6}&#039;` ; do chmod 0666 $f/dev/null ; done&lt;br /&gt;
&lt;br /&gt;
Later, after the system is up, if anyone ever complains about their /dev/null permissions for any reason ... really if anyone mentions&lt;br /&gt;
/dev/null in any way, tell them to run:&lt;br /&gt;
&lt;br /&gt;
 chmod 0666 /dev/null&lt;br /&gt;
&lt;br /&gt;
We have ultimately fixed this by adding chmods to the quad/safe&#039;s&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== mknod in a jail == &lt;br /&gt;
&lt;br /&gt;
For a customer who wants to run a chroot name server (named) in a jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /mnt/data1/&amp;lt;user directory&amp;gt;/var/named&lt;br /&gt;
mknod -c 1 1 null&lt;br /&gt;
mknod -c 2 3 random&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Postfix Problems ==&lt;br /&gt;
&lt;br /&gt;
This hasn&#039;t happened in a long time, and is probably no longer valid with modern OSes.&lt;br /&gt;
&lt;br /&gt;
Postfix is an alternate MTA - people replace sendmail with it in much the same way that many people also replace sendmail with qmail.  The problem is that if you install postfix inside of a jail, by default it will not work properly - the user will see error messages like this:&lt;br /&gt;
&lt;br /&gt;
incoming mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 mail_queue_enter: create file incoming/298266.55902: File too large&lt;br /&gt;
&lt;br /&gt;
and further, sending mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 postdrop: warning: mail_queue_enter: create file maildrop/627930.56676: File too large&lt;br /&gt;
&lt;br /&gt;
This is very easy to solve.  In fact, the FreeBSD welcome email that is sent out by `jailmake` now contains this block of text:&lt;br /&gt;
&lt;br /&gt;
- if you plan on installing postfix, email us first and ask for a necessary patch.&lt;br /&gt;
&lt;br /&gt;
So, anytime you see a support email that says anything about postfix not working, paste that line from the welcome email into your response, and immediately forward them the postfix patch in an email.&lt;br /&gt;
&lt;br /&gt;
Here are the instructions:&lt;br /&gt;
&lt;br /&gt;
Simply copy the patch into /usr/ports/mail/postfix/files, rename the file to ‘patch-file_limit.c’ and then cd to /usr/ports/mail/postfix and run `make install`.&lt;br /&gt;
&lt;br /&gt;
The postfix patch is the very first email in the support email box, and we never delete or save it because we always want it there to forward to people.  The instructions in the email are very clear, and it solves their problem 100% of the time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems with `find` ==&lt;br /&gt;
&lt;br /&gt;
There was a bug in the FreeBSD vn-filesystem code.  The bug causes the system to crash or hang when someone runs the `find` command inside their vn-backed filesystem (and even though the bug may have been fixed, due to the i/o overhead, we still don&#039;t like find).&lt;br /&gt;
&lt;br /&gt;
Now, this does not always happen - however, any jail machine with more than 20 systems on it will crash every night if every system on it runs the daily periodic script out of their crontab, and updates their locate database with the find command.&lt;br /&gt;
&lt;br /&gt;
(the locate database is a small, FreeBSD specific database that is populated nightly from a cron job, and is then used to provide fast answers to the `locate` command)&lt;br /&gt;
&lt;br /&gt;
In normal systems (32-40 jails on them), all that has to be done is make sure nobody runs the daily periodic.  This means that the file /etc/crontab, instead of looking like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# do daily/weekly/monthly maintenance&lt;br /&gt;
1      3       *       *       *       root    periodic daily&lt;br /&gt;
15     4       *       *       6       root    periodic weekly&lt;br /&gt;
30     5       1       *       *       root    periodic monthly&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Needs to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# DO NOT UNCOMMENT THESE - contact support@johncompanies.com for details&lt;br /&gt;
#1      3       *       *       *       root    periodic daily&lt;br /&gt;
#15     4       *       *       6       root    periodic weekly&lt;br /&gt;
#30     5       1       *       *       root    periodic monthly&lt;br /&gt;
## DO NOT UNCOMMENT THESE ^^^^^^^^&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the problem will generally not occur at all.  New filesystem images are always altered in this way, so all systems are set like this.&lt;br /&gt;
&lt;br /&gt;
However, if you have enough systems running, even casual uses of find, outside of the periodic scripts, can cause the system to crash.  We are seeing this on jail13, which crashes about once per week.&lt;br /&gt;
&lt;br /&gt;
There is not yet any resolution to this problem.&lt;br /&gt;
&lt;br /&gt;
So, if you have a freebsd system crash, see if anyone has uncommented their daily periodic line - you can see them for everyone by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;for f in `df -k | grep vn | awk &#039;{print $6}&#039;` ; do echo `cat $f/etc/crontab | grep &amp;quot;periodic daily&amp;quot;` $f ; done | more&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(all one line, one command)&lt;br /&gt;
&lt;br /&gt;
And then edit those /etc/crontabs and comment them out again.&lt;br /&gt;
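A sketch of doing that edit with sed instead of by hand (shown on a sample file here; on a real jail the target would be the jail&#039;s /etc/crontab):&lt;br /&gt;

```shell
# Sample crontab line standing in for a jail's /etc/crontab
printf '1\t3\t*\t*\t*\troot\tperiodic daily\n' > crontab.sample
# Comment out any uncommented periodic line; -i.bak works with both BSD and GNU sed
sed -i.bak 's/^\([0-9].*periodic\)/#\1/' crontab.sample
cat crontab.sample
```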
&lt;br /&gt;
&lt;br /&gt;
== Problems un-mounting - and with mount_null’s ==&lt;br /&gt;
&lt;br /&gt;
If you cannot unmount a filesystem because it says the filesystem is busy, it is usually for one of the following reasons:&lt;br /&gt;
&lt;br /&gt;
a) the jail is still running&lt;br /&gt;
&lt;br /&gt;
b) you are actually in that directory, even though the jail is stopped&lt;br /&gt;
&lt;br /&gt;
c) there are still dev, null_mount or linprocfs mount points mounted inside that directory.&lt;br /&gt;
&lt;br /&gt;
d) when trying to umount null_mounts whose paths are very long, you may get an error like “No such file or directory”; it’s an OS bug where the dir name is truncated. There is no known fix.&lt;br /&gt;
&lt;br /&gt;
e) there are still files open somewhere inside the dir. Use &amp;lt;tt&amp;gt;fstat | grep &amp;lt;cid&amp;gt;&amp;lt;/tt&amp;gt; to find the process that has files open&lt;br /&gt;
&lt;br /&gt;
f) Starting with 6.x, the jail mechanism does a poor job of keeping track of processes running in a jail, and if it thinks there are still procs running it will refuse to umount the disk. If this is happening you should see a low number in the #REF column when you run jls. In this case you &#039;&#039;can&#039;&#039; safely &amp;lt;tt&amp;gt;umount -f&amp;lt;/tt&amp;gt; the mount. &lt;br /&gt;
&lt;br /&gt;
Please note: if you forcibly unmount a (4.x) filesystem that has null_mounts&lt;br /&gt;
still mounted in it, the system &#039;&#039;&#039;will crash&#039;&#039;&#039; within 10-15 mins.&lt;br /&gt;
&lt;br /&gt;
== Misc jail Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space on jail2, jail8, jail9, a couple jails on jail17, jail4, jail12 and jail18.&lt;br /&gt;
Even though the vn file shows 4G size, it doesn’t actually occupy that amount of space on the disk. So be careful not to fill up drives where we’re overselling – use oversellcheck to confirm you’re not oversold by more than 10G.&lt;br /&gt;
There are other truncated jails; they are generally noted in a file on the root system: /root/truncated&lt;br /&gt;
&lt;br /&gt;
The act of moving a truncated vn to another system undoes the truncation: the truncated vn is filled with 0’s and then occupies the full physical disk space for which it’s configured. So, you should use dumpremote to preserve the truncation.&lt;br /&gt;
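The truncation is ordinary filesystem sparseness; a minimal portable sketch of the effect (using truncate(1) on a sample file, not our vn tooling):&lt;br /&gt;

```shell
# A truncated vn file is sparse: large apparent size, few allocated blocks
truncate -s 1M vnfile.sample
wc -c < vnfile.sample      # apparent size: 1048576 bytes
du -k vnfile.sample        # allocated: near zero until data is written
```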
&lt;br /&gt;
* if you are getting disk full messages for a BSD customer, it&#039;s fairly safe to clear out their /usr/ports/distfiles dir&lt;br /&gt;
* 4.x: ps and top can only be run by root in these jails.  Done on purpose:&lt;br /&gt;
As for `ps` and `top` - non-root users can run them, just not successfully because we have locked the permissions on /dev/mem and /dev/kmem to be root-readable only.  That is why non-root users cannot successfully run `ps` and `top`.&lt;br /&gt;
* user quotas do not work on freebsd jails - you cannot set up quotas at all, and that&#039;s that.&lt;br /&gt;
* You cannot inject a process into a 4.x jail, only HUP running processes. 6.x and onward you can with jexec&lt;br /&gt;
* jails see base machine’s uptime/load when running top/w&lt;br /&gt;
* if someone is unable to get in- can’t ping, etc.- see if they were blocked by castle (DoS), and see if their IP is on the system (post-reboot it may have been lost because it wasn’t in rc.conf). preboot should catch that.&lt;br /&gt;
* in FreeBSD you can’t su to root unless you belong to the wheel group – so if you remove your acct and set up a new one, we have to add it to the wheel group (add to /etc/group)&lt;br /&gt;
* dmesg from the underlying system is seen in a customer’s dmesg inside the jail&lt;br /&gt;
* A popper process going crazy @ 40% for 10min = someone who leaves mail on the server&lt;br /&gt;
* Don’t force umounts on 4.x jails – it crashes the machine; generally ok on newer machines (running md)&lt;br /&gt;
* Good book for admin http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?userid=t824VyRAYz&amp;amp;isbn=0596005164&amp;amp;itm=2&lt;br /&gt;
* Self-sign ssl cert http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#selfcert&lt;br /&gt;
* conversation with Glenn about semaphores and pgsql&lt;br /&gt;
&amp;lt;pre&amp;gt;SDBoody: hey, these are valid amounts/figures/increments right:&lt;br /&gt;
 kern.ipc.semmni=1280&lt;br /&gt;
 kern.ipc.semmns=1280&lt;br /&gt;
 &lt;br /&gt;
gr8feen: probably... I always forget exactly what those are, so I usually have to look them up&lt;br /&gt;
SDBoody: semaphores&lt;br /&gt;
SDBoody: i took the current 1024 and added 256 to them&lt;br /&gt;
SDBoody: need more for pgsql&lt;br /&gt;
gr8feen: I meant the mni and mns parts...&lt;br /&gt;
gr8feen: some of those are not ones you just want to add to...   hang on a sec and I&#039;ll look them up..&lt;br /&gt;
gr8feen: what&#039;s semmsl set to?&lt;br /&gt;
SDBoody: kern.ipc.semmsl: 1024&lt;br /&gt;
 &lt;br /&gt;
SDBoody: kern.ipc.msgseg: 2048 &lt;br /&gt;
 kern.ipc.msgssz: 8 &lt;br /&gt;
 kern.ipc.msgtql: 40 &lt;br /&gt;
 kern.ipc.msgmnb: 2048 &lt;br /&gt;
 kern.ipc.msgmni: 40 &lt;br /&gt;
 kern.ipc.msgmax: 16384 &lt;br /&gt;
 kern.ipc.semaem: 16384 &lt;br /&gt;
 kern.ipc.semvmx: 65534 &lt;br /&gt;
 kern.ipc.semusz: 152 &lt;br /&gt;
 kern.ipc.semume: 10 &lt;br /&gt;
 kern.ipc.semopm: 100 &lt;br /&gt;
 kern.ipc.semmsl: 1024 &lt;br /&gt;
 kern.ipc.semmnu: 512 &lt;br /&gt;
 kern.ipc.semmns: 1024 &lt;br /&gt;
 kern.ipc.semmni: 1024 &lt;br /&gt;
 kern.ipc.semmap: 768 &lt;br /&gt;
 kern.ipc.shm_allow_removed: 0 &lt;br /&gt;
 kern.ipc.shm_use_phys: 1 &lt;br /&gt;
 kern.ipc.shmall: 262144 &lt;br /&gt;
 kern.ipc.shmseg: 256 &lt;br /&gt;
 kern.ipc.shmmni: 784 &lt;br /&gt;
 kern.ipc.shmmin: 1 &lt;br /&gt;
 kern.ipc.shmmax: 536870912 &lt;br /&gt;
 kern.ipc.maxsockets: 25600 &lt;br /&gt;
 &lt;br /&gt;
gr8feen: ok...msl is max per id, mni is max ids, mns is max number of semaphores... so you probably want something like mns = mni * msl&lt;br /&gt;
gr8feen: which one did you run out of?&lt;br /&gt;
SDBoody: not sure how to tell- ipcs shows the sems in use add up to 1024&lt;br /&gt;
SDBoody: there are 59 entries&lt;br /&gt;
gr8feen: I&#039;m assuming you tried to start postgres and it failed?&lt;br /&gt;
SDBoody: yes&lt;br /&gt;
gr8feen: it should have logged why, somewhere..&lt;br /&gt;
gr8feen: if I recall, it&#039;ll tell you which one it ran out of&lt;br /&gt;
SDBoody: &amp;gt; DETAIL:  Failed system call was semget(1, 17, 03600).&lt;br /&gt;
 &lt;br /&gt;
gr8feen: so it wanted an id with 17 semaphores...I&#039;d start by making mns = 17*mni and leave mni and mnl set to what they are now and see what it does&lt;br /&gt;
SDBoody: i think mni is plenty high&lt;br /&gt;
SDBoody: ok, more reasonable 17408&lt;br /&gt;
gr8feen: yeah...just change that one and see how it goes..&lt;br /&gt;
SDBoody: and leave mni alone at 1024?&lt;br /&gt;
gr8feen: yeah...mni id the max number of ids...but if it&#039;s trying to get something like 17 per id, your going to hit mns before you hit anything else&lt;br /&gt;
SDBoody: right, but doesn&#039;t hurt to have it that high (assuming)&lt;br /&gt;
&lt;br /&gt;
gr8feen: not really...  I think those get allocated out of ram that you cant page out, but it&#039;s still such a small amount that it really doesn&#039;t matter&lt;br /&gt;
SDBoody: looks like that worked, thx!&lt;br /&gt;
gr8feen: cool&lt;br /&gt;
&lt;br /&gt;
see semaphores: ipcs -a -s&amp;lt;/pre&amp;gt;&lt;br /&gt;
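The rule of thumb from the conversation, as arithmetic (values taken from the failed semget above):&lt;br /&gt;

```shell
# mns should be roughly mni * semaphores-per-id;
# postgres asked for 17 per id (semget(1, 17, 03600)) and semmni was 1024
semmni=1024
per_id=17
echo $((semmni * per_id))   # 17408 -- the kern.ipc.semmns value that worked
```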
&lt;br /&gt;
= FreeBSD VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
These files are located in /usr/local/jail/rc.d and /usr/local/jail/bin&lt;br /&gt;
&lt;br /&gt;
== jailmake ==&lt;br /&gt;
&lt;br /&gt;
Applies to 7.x+ &lt;br /&gt;
On older systems syntax differs, run jailmake once to see.&lt;br /&gt;
&lt;br /&gt;
Note: this procedure differs on mx2 which is 7.x but still uses gvinum&lt;br /&gt;
&lt;br /&gt;
#	run js to figure out which md’s are in use, which disk has enough space, IP to put it on&lt;br /&gt;
#	use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
#	copy over dir, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;Usage: jailmake IP[,IP] CID disk[1|2|3] md# hostname shorthost ipfw# email [size in GB]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex: &lt;br /&gt;
&lt;br /&gt;
 Jail2# jailmake 69.55.234.66 col01334 3 97 vps.bsd.it vps 1334 fb@bsd.it&lt;br /&gt;
&lt;br /&gt;
== jailps ==&lt;br /&gt;
 jailps [hostname]&lt;br /&gt;
DEPRECATED FOR jps: displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname of the jail you wish to query. If you don’t &lt;br /&gt;
supply an argument, all processes on the machine are listed and grouped by jail. &lt;br /&gt;
&lt;br /&gt;
== jps ==&lt;br /&gt;
 jps [hostname]&lt;br /&gt;
displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname or ID of the jail you wish to query. &lt;br /&gt;
&lt;br /&gt;
== jailkill ==&lt;br /&gt;
 jailkill &amp;lt;hostname&amp;gt;&lt;br /&gt;
stops all process running in a jail.&lt;br /&gt;
&lt;br /&gt;
You can also run:&lt;br /&gt;
 jailkill &amp;lt;JID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== problems ===&lt;br /&gt;
Occasionally you will hit an issue where jail will not kill off:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailkill www.domain.com&lt;br /&gt;
www.domain.com .. killed: none&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
This is because no processes are running under that hostname.  You cannot use jailps either:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailps www.domain.com&lt;br /&gt;
www.domain.com doesn’t exist on this server&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reasons for this are usually:&lt;br /&gt;
* the jail is no longer running&lt;br /&gt;
&lt;br /&gt;
* the jail&#039;s hostname has changed&lt;br /&gt;
In this case, &lt;br /&gt;
&lt;br /&gt;
&amp;gt;=6.x: run a &amp;lt;tt&amp;gt;jls|grep &amp;lt;jail&#039;s IP&amp;gt;&amp;lt;/tt&amp;gt; to find the correct hostname, then update the quad file, then kill the jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;6.x: the first step is to cat their /etc/rc.conf file to see if you can tell what they set the new hostname to.  This very often works.  For example:&lt;br /&gt;
&lt;br /&gt;
 cat /mnt/data2/198.78.65.136-col00261-DIR/etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
But maybe they set the hostname with the hostname command, and the original hostname is still in /etc/rc.conf.&lt;br /&gt;
&lt;br /&gt;
The welcome email clearly states that they should tell us if they change their hostname, so there is no problem in just emailing them and asking them what they set the new hostname to.&lt;br /&gt;
&lt;br /&gt;
Once you know the new hostname OR if a customer simply emails to inform you that they have set the hostname to something different, you need to edit the quad and safe files that their system is in to input the new hostname.&lt;br /&gt;
&lt;br /&gt;
However, if push comes to shove and you cannot find out the hostname from them or from their system, then you need to start doing some detective work.&lt;br /&gt;
&lt;br /&gt;
The easiest thing to do is run jailps looking for a hostname similar to their original hostname. Or you could get into the /bin/sh shell by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then looking at every hostname of every process:&lt;br /&gt;
&lt;br /&gt;
 for f in `ls /proc` ; do cat /proc/$f/status ; done&lt;br /&gt;
&lt;br /&gt;
and scanning for a hostname that is either similar to their original hostname, or that you don&#039;t see in any of the quad safe files.&lt;br /&gt;
&lt;br /&gt;
This is very brute force though, and it is possible that catting every file in /proc is dangerous - I don&#039;t recommend it.  A better thing would be to identify any processes that you know belong to this system – perhaps the reason you are trying to find this system is because they are running something bad - and just catting the status from only that PID.&lt;br /&gt;
&lt;br /&gt;
Somewhere there’s a jail where there may be 2 systems named www.  Look at each /etc/rc.conf and make sure they’re both really www. If they are, jailkill www, then jailps www to make sure it isn’t running.  Then immediately restart the other one under its fqdn (as found from a reverse nslookup).&lt;br /&gt;
&lt;br /&gt;
* on &amp;gt;=6.x the hostname may not yet be hashed:&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9 /# jls&lt;br /&gt;
 JID Hostname                    Path                                  IP Address(es)&lt;br /&gt;
   1 bitnet.dgate.org            /mnt/data1/69.55.232.50-col02094-DIR  69.55.232.50&lt;br /&gt;
   2 ns3.hctc.net                /mnt/data1/69.55.234.52-col01925-DIR  69.55.234.52&lt;br /&gt;
   3 bsd1                        /mnt/data1/69.55.232.44-col00155-DIR  69.55.232.44&lt;br /&gt;
   4 let2.bbag.org               /mnt/data1/69.55.230.92-col00202-DIR  69.55.230.92&lt;br /&gt;
   5 post.org                    /mnt/data2/69.55.232.51-col02095-DIR  69.55.232.51 ...&lt;br /&gt;
   6 ns2                         /mnt/data1/69.55.232.47-col01506-DIR  69.55.232.47 ...&lt;br /&gt;
   7 arlen.server.net            /mnt/data1/69.55.232.52-col01171-DIR  69.55.232.52&lt;br /&gt;
   8 deskfood.com                /mnt/data1/69.55.232.71-col00419-DIR  69.55.232.71&lt;br /&gt;
   9 mirage.confluentforms.com   /mnt/data1/69.55.232.54-col02105-DIR  69.55.232.54 ...&lt;br /&gt;
  10 beachmember.com             /mnt/data1/69.55.232.59-col02107-DIR  69.55.232.59&lt;br /&gt;
  11 www.agottem.com             /mnt/data1/69.55.232.60-col02109-DIR  69.55.232.60&lt;br /&gt;
  12 sdhobbit.myglance.org       /mnt/data1/69.55.236.82-col01708-DIR  69.55.236.82&lt;br /&gt;
  13 ns1.jnielsen.net            /mnt/data1/69.55.234.48-col00204-DIR  69.55.234.48 ...&lt;br /&gt;
  14 ymt.rollingegg.net          /mnt/data2/69.55.236.71-col01678-DIR  69.55.236.71&lt;br /&gt;
  15 verse.unixlore.net          /mnt/data1/69.55.232.58-col02131-DIR  69.55.232.58&lt;br /&gt;
  16 smcc-mail.org               /mnt/data2/69.55.232.68-col02144-DIR  69.55.232.68&lt;br /&gt;
  17 kasoutsuki.w4jdh.net        /mnt/data2/69.55.232.46-col02147-DIR  69.55.232.46&lt;br /&gt;
  18 dili.thium.net              /mnt/data2/69.55.232.80-col01901-DIR  69.55.232.80&lt;br /&gt;
  20 www.tekmarsis.com           /mnt/data2/69.55.232.66-col02155-DIR  69.55.232.66&lt;br /&gt;
  21 vps.yoxel.net               /mnt/data2/69.55.236.67-col01673-DIR  69.55.236.67&lt;br /&gt;
  22 smitty.twitalertz.com       /mnt/data2/69.55.232.84-col02153-DIR  69.55.232.84&lt;br /&gt;
  23 deliver4.klatha.com         /mnt/data2/69.55.232.67-col02160-DIR  69.55.232.67&lt;br /&gt;
  24 nideffer.com                /mnt/data2/69.55.232.65-col00412-DIR  69.55.232.65&lt;br /&gt;
  25 usa.hanyuan.com             /mnt/data2/69.55.232.57-col02163-DIR  69.55.232.57&lt;br /&gt;
  26 daifuku.ppbh.com            /mnt/data2/69.55.236.91-col01720-DIR  69.55.236.91&lt;br /&gt;
  27 collins.greencape.net       /mnt/data2/69.55.232.83-col01294-DIR  69.55.232.83&lt;br /&gt;
  28 ragebox.com                 /mnt/data2/69.55.230.104-col01278-DIR 69.55.230.104&lt;br /&gt;
  29 outside.mt.net              /mnt/data2/69.55.232.72-col02166-DIR  69.55.232.72&lt;br /&gt;
  30 vps.payneful.ca             /mnt/data2/69.55.234.98-col01999-DIR  69.55.234.98&lt;br /&gt;
  31 higgins                     /mnt/data2/69.55.232.87-col02165-DIR  69.55.232.87 ...&lt;br /&gt;
  32 ozymandius                  /mnt/data2/69.55.228.96-col01233-DIR  69.55.228.96&lt;br /&gt;
  33 trusted.realtors.org        /mnt/data2/69.55.238.72-col02170-DIR  69.55.238.72&lt;br /&gt;
  34 jc1.flanderous.com          /mnt/data2/69.55.239.22-col01504-DIR  69.55.239.22&lt;br /&gt;
  36 guppylog.com                /mnt/data2/69.55.238.73-col00036-DIR  69.55.238.73&lt;br /&gt;
  40 haliohost.com               /mnt/data2/69.55.234.41-col01916-DIR  69.55.234.41 ...&lt;br /&gt;
  41 satyr.jorge.cc              /mnt/data1/69.55.232.70-col01963-DIR  69.55.232.70&lt;br /&gt;
jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
ERROR: jail_: jail &amp;quot;satyr,jorge,cc&amp;quot; not found&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note how it&#039;s saying &amp;lt;tt&amp;gt;satyr,jorge,cc&amp;lt;/tt&amp;gt; is not found, and not &amp;lt;tt&amp;gt;satyr.jorge.cc&amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The jail subsystem tracks things using comma-delimited hostnames. That mapping is regenerated every few hours by cron:&lt;br /&gt;
&lt;br /&gt;
 jail9 /# crontab -l&lt;br /&gt;
 0 0,6,12,18 * * * /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
So if we run this manually:&lt;br /&gt;
 jail9 /# /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
Then kill the jail:&lt;br /&gt;
 jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
 successfully killed: satyr,jorge,cc&lt;br /&gt;
&lt;br /&gt;
It worked.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you ever see this when trying to kill a jail:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# jailkill e-scribe.com&lt;br /&gt;
killing JID: 6 hostname: e-scribe.com&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailkill|jailkill]]&amp;lt;/tt&amp;gt; probably got lost trying to kill off the jail. Just ctrl-c the jailkill process, then run a jailps on the hostname, and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; any process which is still running. Keep running jailps and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; till all processes are gone.&lt;br /&gt;
&lt;br /&gt;
== jailpsall ==&lt;br /&gt;
 jailpsall&lt;br /&gt;
will run a jailps on all jails configured in the quad files (this is different from&lt;br /&gt;
jailps with no arguments as it won’t help you find a “hidden” system)&lt;br /&gt;
&lt;br /&gt;
== jailpsw ==&lt;br /&gt;
 jailpsw&lt;br /&gt;
will run a jailps with an extra -w to provide wider output&lt;br /&gt;
&lt;br /&gt;
== jt (&amp;gt;=7.x) ==&lt;br /&gt;
 jt&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;gt;=7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
a wrapper for top displaying processes on the server and which jail owns them. Constantly updates, like top. &lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;lt;7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== stopjail ==&lt;br /&gt;
 stopjail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will jailkill, umount and vnconfig –u a jail. If passed an optional 2nd&lt;br /&gt;
argument, it will not exit before umounting and un-vnconfig’ing in the event&lt;br /&gt;
jailkill returns no processes killed. This is useful if you just want to umount&lt;br /&gt;
and vnconfig –u a jail you’ve already killed. It is intelligent in that it won’t &lt;br /&gt;
try to umount or vnconfig –u if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== startjail ==&lt;br /&gt;
 startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
this will start vnconfig, mount (including linprocfs and null-mounts), and start a jail.&lt;br /&gt;
Essentially, it reads the jail’s relevant block from the right quad file and executes it.&lt;br /&gt;
It is intelligent in that it won’t try to mount or vnconfig if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== jpid ==&lt;br /&gt;
 jpid &amp;lt;pid&amp;gt;&lt;br /&gt;
displays information about a process – including which jail owns it.&lt;br /&gt;
It’s the equivalent of running cat /proc/&amp;lt;pid&amp;gt;/status&lt;br /&gt;
&lt;br /&gt;
== canceljail ==&lt;br /&gt;
 canceljail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will stop a jail (the equivalent of stopjail), check for backups (offer to remove them &lt;br /&gt;
from the backup server and the backup.config), rename the vnfile, remove the dir, and &lt;br /&gt;
edit quad/safe. If passed an optional 2nd argument, it will not exit upon failing to kill&lt;br /&gt;
and processes owned by the jail. This is useful if you just want to cancel a jail which &lt;br /&gt;
is already stopped.&lt;br /&gt;
&lt;br /&gt;
== jls ==&lt;br /&gt;
 jls [-v]&lt;br /&gt;
Lists all jails running:&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF IP Address      Hostname                     Path&lt;br /&gt;
 101  135 69.55.224.148   mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#REF is the number of references or procs(?) running&lt;br /&gt;
&lt;br /&gt;
Running with -v will give you all IPs assigned to each jail (7.2 and up)&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF Hostname                     Path                                  IP Address(es)&lt;br /&gt;
 101  139 mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR 69.55.224.148 69.55.234.85&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== startalljails ==&lt;br /&gt;
 startalljails&lt;br /&gt;
7.2+ only. This will parse through quad1 and start all jails. It utilizes lockfiles so it won’t try to start a jail more than once- therefore multiple instances can be running in parallel without fear of starting a jail twice. If a jail startup gets stuck, you can ^C without fear of killing the script. IMPORTANT- before running startalljails you should make sure you ran preboot once as it will clear out all the lockfiles and enable startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== aaccheck.sh ==&lt;br /&gt;
 aaccheck.sh&lt;br /&gt;
displays the output of container list and task list from aaccli&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update jail scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== buildsafe ==&lt;br /&gt;
 buildsafe&lt;br /&gt;
creates safe files based on quads (automatically removing the fsck’s). This will destructively overwrite safe files&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be setup as a cronjob to watch processes on a jail when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== checkprio.pl ==&lt;br /&gt;
 checkprio.pl&lt;br /&gt;
will look for any process (other than the current shell’s csh, sh, sshd procs) with a non-normal priority and normalize it&lt;br /&gt;
&lt;br /&gt;
== diskusagemon == &lt;br /&gt;
 diskusagemon &amp;lt;mount point&amp;gt; &amp;lt;1k blocks&amp;gt;&lt;br /&gt;
watches a mount point’s disk use, when it reaches the level specified in the 2nd argument,&lt;br /&gt;
it exits. This is useful when doing a restore and you want to be paged as it’s nearing completion.&lt;br /&gt;
Best used as: &amp;lt;tt&amp;gt;diskusagemon /asd/asd 1234; pagexxx&amp;lt;/tt&amp;gt;&lt;br /&gt;
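A rough sketch of what diskusagemon presumably does internally (an assumption based on the description above, not the actual script):&lt;br /&gt;

```shell
# Poll a mount point's used 1k blocks until they reach the threshold, then return
watch_usage() {
  mnt=$1; limit_kb=$2
  while :; do
    used=$(df -kP "$mnt" | awk 'NR==2 {print $3}')
    [ "$used" -ge "$limit_kb" ] && return 0
    sleep 60
  done
}
# With a tiny threshold this returns at once, so a chained pager command runs immediately
watch_usage / 1 && echo "threshold reached"
```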
&lt;br /&gt;
== dumprestore ==&lt;br /&gt;
 dumprestore &amp;lt;dumpfile&amp;gt;&lt;br /&gt;
this is a perl expect script which automatically enters ‘1’ and ‘y’. It seems to cause restore to fail&lt;br /&gt;
to set owner permissions on large restores.&lt;br /&gt;
&lt;br /&gt;
== g ==&lt;br /&gt;
 g &amp;lt;search&amp;gt;&lt;br /&gt;
greps the quad/safe files for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gather.pl ==&lt;br /&gt;
 gather.pl&lt;br /&gt;
gathers up data about jails configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== ipfwbackup ==&lt;br /&gt;
 ipfwbackup&lt;br /&gt;
writes ipfw traffic count data to a logfile&lt;br /&gt;
&lt;br /&gt;
== ipfwreset ==&lt;br /&gt;
 ipfwreset&lt;br /&gt;
writes ipfw traffic count data to a logfile and resets counters to 0&lt;br /&gt;
&lt;br /&gt;
== js ==&lt;br /&gt;
 js&lt;br /&gt;
output varies by OS version, but generally provides information about the base jail:&lt;br /&gt;
- which vn’s are in use&lt;br /&gt;
- disk usage&lt;br /&gt;
- info about the contents of quads&lt;br /&gt;
- the # of inodes represented by the jails contained in the group (133.2 in the example below), and how many jails per data mount, as well as subtotals&lt;br /&gt;
- ips bound to the base machine but not in use by a jail&lt;br /&gt;
- free gvinum volumes, or unused vn’s or used md’s&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/local/jail/rc.d/quad1:&lt;br /&gt;
        /mnt/data1 133.2 (1)&lt;br /&gt;
        /mnt/data2 1040.5 (7)&lt;br /&gt;
        total 1173.7 (8)&lt;br /&gt;
/usr/local/jail/rc.d/quad2:&lt;br /&gt;
        /mnt/data1 983.4 (6)&lt;br /&gt;
        total 983.4 (6)&lt;br /&gt;
/usr/local/jail/rc.d/quad3:&lt;br /&gt;
        /mnt/data1 693.4 (4)&lt;br /&gt;
        /mnt/data2 371.6 (3)&lt;br /&gt;
        total 1065 (7)&lt;br /&gt;
/usr/local/jail/rc.d/quad4:&lt;br /&gt;
        /mnt/data1 466.6 (3)&lt;br /&gt;
        /mnt/data2 882.2 (5)&lt;br /&gt;
        total 1348.8 (8)&lt;br /&gt;
/mnt/data1: 2276.6 (14)&lt;br /&gt;
/mnt/data2: 2294.3 (15)&lt;br /&gt;
&lt;br /&gt;
Available IPs:&lt;br /&gt;
69.55.230.11 69.55.230.13 69.55.228.200&lt;br /&gt;
&lt;br /&gt;
Available volumes:&lt;br /&gt;
v78 /mnt/data2 2G&lt;br /&gt;
v79 /mnt/data2 2G&lt;br /&gt;
v80 /mnt/data2 2G&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg  - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== makevirginjail ==&lt;br /&gt;
 makevirginjail&lt;br /&gt;
Only on some systems, makes an empty jail (doesn&#039;t do restore step)&lt;br /&gt;
&lt;br /&gt;
== mb == &lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== notify.sh ==&lt;br /&gt;
 notify.sh&lt;br /&gt;
emails reboot@johncompanies.com – intended to be called at boot time to alert us to a machine which panics and reboots and isn’t caught by bb or castle.&lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatch ==&lt;br /&gt;
 orphanedbackupwatch&lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to delete them&lt;br /&gt;
&lt;br /&gt;
== postboot ==&lt;br /&gt;
 postboot&lt;br /&gt;
to be run after a machine reboot and quad/safe’s are done executing. It will:&lt;br /&gt;
* do chmod 666 on each jail’s /dev/null&lt;br /&gt;
* add ipfw counts&lt;br /&gt;
* run jailpsall (so you can see if a configured jail isn’t running)&lt;br /&gt;
&lt;br /&gt;
== preboot ==&lt;br /&gt;
 preboot&lt;br /&gt;
to be run before running quad/safe – checks for misconfigurations: &lt;br /&gt;
* a jail configured in a quad but not a safe&lt;br /&gt;
* a jail is listed more than once in a quad&lt;br /&gt;
* the ip assigned to a jail isn’t configured on the machine&lt;br /&gt;
* alias numbering skips in the rc.conf (resulting in the above)&lt;br /&gt;
* orphaned vnfile&#039;s that aren&#039;t mentioned in a quad/safe&lt;br /&gt;
* ip mismatches between dir/vnfile name and the jail’s ip&lt;br /&gt;
* dir/vnfiles&#039;s in quad/safe that don’t exist &lt;br /&gt;
&lt;br /&gt;
== quadanalyze.pl ==&lt;br /&gt;
 quadanalyze.pl&lt;br /&gt;
called by js, produces the info (seen above with js explanation) about the contents of quad (inode count, # of jails, etc.)&lt;br /&gt;
&lt;br /&gt;
== rsync.backup ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== taskdone ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was executed as the subject&lt;br /&gt;
&lt;br /&gt;
== topten ==&lt;br /&gt;
 topten&lt;br /&gt;
summarizes the top 10 traffic users (called by ipfwreset)&lt;br /&gt;
&lt;br /&gt;
== trafficgather.pl ==&lt;br /&gt;
 trafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by jail to support@johncompanies.com and payments@johncompanies.com. Optional arguments are year and month (must be in the past). If not passed, it assumes last month. Relies on traffic logs created by ipfwreset and ipfwbackup.&lt;br /&gt;
&lt;br /&gt;
== trafficwatch.pl ==&lt;br /&gt;
 trafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a jail reaches the warning level (35G) and the limit (40G). We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== trafstats ==&lt;br /&gt;
 trafstats&lt;br /&gt;
writes ipfw traffic usage info by jail to a file called jc_traffic_dump in each jail’s / dir&lt;br /&gt;
&lt;br /&gt;
== truncate_jailmake ==&lt;br /&gt;
 truncate_jailmake&lt;br /&gt;
a version of jailmake which creates truncated vnfiles.&lt;br /&gt;
&lt;br /&gt;
== vb ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/bin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs (freebsd) ==&lt;br /&gt;
 vs&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/safe&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
vq&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: vi /usr/local/jail/rc.d/quad&amp;lt;n&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dumpremote ==&lt;br /&gt;
 dumpremote &amp;lt;user@machine&amp;gt; &amp;lt;/remote/location/file-dump&amp;gt; &amp;lt;vnX&amp;gt;&lt;br /&gt;
ex: dumpremote user@10.1.4.117 /mnt/data3/remote.echoditto.com-dump 7&lt;br /&gt;
this will dump a vn filesystem to a remote machine and location&lt;br /&gt;
&lt;br /&gt;
== oversellcheck ==&lt;br /&gt;
 oversellcheck&lt;br /&gt;
displays how much a disk is oversold or undersold taking into account truncated vn files. Only for use on 4.x systems&lt;br /&gt;
&lt;br /&gt;
== mvbackups (freebsd) ==&lt;br /&gt;
 mvbackups &amp;lt;dir&amp;gt; (1.1.1.1-col00001-DIR) &amp;lt;target_machine&amp;gt; (jail1) &amp;lt;target_dir&amp;gt; (data1)&lt;br /&gt;
moves backups from one location to another on the backup server, and provides you with option to remove entries from current backup.config, and simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== jailnice ==&lt;br /&gt;
 jailnice &amp;lt;hostname&amp;gt;&lt;br /&gt;
applies &amp;lt;tt&amp;gt;renice 19 [PID]&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;rtprio 31 -[PID]&amp;lt;/tt&amp;gt; to each process in the given jail&lt;br /&gt;
&lt;br /&gt;
== dumpremoterestore ==&lt;br /&gt;
 dumpremoterestore &amp;lt;device&amp;gt; &amp;lt;ip of target machine&amp;gt; &amp;lt;dir on target machine&amp;gt;&lt;br /&gt;
ex: dumpremoterestore /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&lt;br /&gt;
dumps a device and restores it to a directory on a remote machine. Requires that you enable root ssh on the &lt;br /&gt;
remote machine.&lt;br /&gt;
&lt;br /&gt;
== psj ==&lt;br /&gt;
 psj&lt;br /&gt;
shows just the procs running on the base system – a ps auxw but without jail’d procs present&lt;br /&gt;
&lt;br /&gt;
== perc5iraidchk ==&lt;br /&gt;
 perc5iraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2950 systems with Perc5/6 controllers&lt;br /&gt;
&lt;br /&gt;
== perc4eraidchk ==&lt;br /&gt;
 perc4eraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2850 systems with Perc4e/Di controllers&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS =&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vm) ==&lt;br /&gt;
&lt;br /&gt;
This applies only to new virts &amp;gt;= 4.x&lt;br /&gt;
&lt;br /&gt;
grab ip from ipmap (if opened from the pending cust screen it should take you to the right block). You can also run vzlist -a to see what block is in use, generally. Try to find an IP that&#039;s in the same block of class C IP&#039;s already on the box.&lt;br /&gt;
&lt;br /&gt;
1. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
  also, ping the IP address.&lt;br /&gt;
&lt;br /&gt;
2. put CT on whichever partition has more space&lt;br /&gt;
  df -h&lt;br /&gt;
&lt;br /&gt;
3.  vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; ) &lt;br /&gt;
 vm col00009 69.55.230.238 centos.testdave.com /vz1 dsmith@johncompanies.com centos-6-x86_64 LM&lt;br /&gt;
&lt;br /&gt;
4. copy veid, dir, ip and password to pending customer screen. activate customer&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vemakexxx) ==&lt;br /&gt;
&lt;br /&gt;
This applies to older virts with old templates. This should probably not be used at all anymore.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. look thru hist for ip&lt;br /&gt;
&lt;br /&gt;
2. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
&lt;br /&gt;
3. put ve on whichever partition has more space&lt;br /&gt;
 vemakerh9 &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; &amp;lt;hostname&amp;gt; &amp;lt;mount&amp;gt; &amp;lt;email&amp;gt; [gb disk]; &amp;lt;256|384|512&amp;gt; &amp;lt;veid&amp;gt;&lt;br /&gt;
 vemakerh9 866 69.55.226.109 ngentu.com /vz1 ayo@ngantu.com,asd@asd.com 5; 256 866&lt;br /&gt;
&lt;br /&gt;
4. copy (veid), dir, and ip to pending customer screen (pass set to p455agfa)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: We use VEID (Virtual Environment ID) and CTID (Container ID) interchangeably. Similarly, VE and CT. They mean the same thing.&lt;br /&gt;
VZPP = VirtuoZzo Power Panel (the control panel for each CT)&lt;br /&gt;
&lt;br /&gt;
All linux systems exist in /vz, /vz1 or /vz2 - since each linux machine holds roughly 60-90 customers, there will be roughly 30-45 in each partition.&lt;br /&gt;
&lt;br /&gt;
The actual filesystem of the system in question is in:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/private/(VEID)&lt;br /&gt;
&lt;br /&gt;
Where VEID is the identifier for that system - an all-numeric string larger than 100.&lt;br /&gt;
&lt;br /&gt;
The actual mounted and running systems are in the corresponding:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/root/(VEID)&lt;br /&gt;
&lt;br /&gt;
But we rarely interact with any system from this mount point.&lt;br /&gt;
&lt;br /&gt;
You should never need to touch the root portion of their system – however you can traverse their filesystem by going to &amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/root&amp;lt;/tt&amp;gt; (&amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/fs/root&amp;lt;/tt&amp;gt; on 4.x systems). The root of their filesystem is in that directory, and their entire system is underneath that.&lt;br /&gt;
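&lt;br /&gt;
The two layouts can be captured in a tiny helper (a sketch – &amp;lt;tt&amp;gt;ct_root&amp;lt;/tt&amp;gt; and its version argument are made up for illustration, not an existing tool):&lt;br /&gt;

```shell
# Print the path to a CT's filesystem root. The layout differs between
# pre-4.x and 4.x virts, per the note above. ct_root and the optional
# version argument are illustrative only.
ct_root() {  # usage: ct_root PARTITION VEID [VZ_MAJOR_VERSION]
  if [ "${3:-3}" -ge 4 ]; then
    echo "$1/private/$2/fs/root"
  else
    echo "$1/private/$2/root"
  fi
}

ct_root /vz1 636      # pre-4.x layout: /vz1/private/636/root
ct_root /vz1 636 4    # 4.x layout: /vz1/private/636/fs/root
```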
&lt;br /&gt;
Every VE has a startup script in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts&amp;lt;/tt&amp;gt;  (which is symlinked as &amp;lt;tt&amp;gt;/vzconf&amp;lt;/tt&amp;gt; on all systems) - the VE startup script is simply named &amp;lt;tt&amp;gt;(VEID).conf&amp;lt;/tt&amp;gt; - it contains all the system parameters for that VE:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Configuration file generated by vzsplit for 60 VE&lt;br /&gt;
# on HN with total amount of physical mem 2011 Mb&lt;br /&gt;
&lt;br /&gt;
VERSION=&amp;quot;2&amp;quot;&lt;br /&gt;
CLASSID=&amp;quot;2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ONBOOT=&amp;quot;yes&amp;quot;&lt;br /&gt;
&lt;br /&gt;
KMEMSIZE=&amp;quot;8100000:8200000&amp;quot;&lt;br /&gt;
LOCKEDPAGES=&amp;quot;322:322&amp;quot;&lt;br /&gt;
PRIVVMPAGES=&amp;quot;610000:615000&amp;quot;&lt;br /&gt;
SHMPAGES=&amp;quot;33000:34500&amp;quot;&lt;br /&gt;
NUMPROC=&amp;quot;410:415&amp;quot;&lt;br /&gt;
PHYSPAGES=&amp;quot;0:2147483647&amp;quot;&lt;br /&gt;
VMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
OOMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
NUMTCPSOCK=&amp;quot;1210:1215&amp;quot;&lt;br /&gt;
NUMFLOCK=&amp;quot;107:117&amp;quot;&lt;br /&gt;
NUMPTY=&amp;quot;19:19&amp;quot;&lt;br /&gt;
NUMSIGINFO=&amp;quot;274:274&amp;quot;&lt;br /&gt;
TCPSNDBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
TCPRCVBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
OTHERSOCKBUF=&amp;quot;900000:950000&amp;quot;&lt;br /&gt;
DGRAMRCVBUF=&amp;quot;200000:200000&amp;quot;&lt;br /&gt;
NUMOTHERSOCK=&amp;quot;650:660&amp;quot;&lt;br /&gt;
DCACHE=&amp;quot;786432:818029&amp;quot;&lt;br /&gt;
NUMFILE=&amp;quot;7500:7600&amp;quot;&lt;br /&gt;
AVNUMPROC=&amp;quot;51:51&amp;quot;&lt;br /&gt;
IPTENTRIES=&amp;quot;155:155&amp;quot;&lt;br /&gt;
DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
DISKINODES=&amp;quot;400000:420000&amp;quot;&lt;br /&gt;
CPUUNITS=&amp;quot;1412&amp;quot;&lt;br /&gt;
QUOTAUGIDLIMIT=&amp;quot;2000&amp;quot;&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/636&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/636&amp;quot;&lt;br /&gt;
NAMESERVER=&amp;quot;69.55.225.225 69.55.230.3&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;vzredhat-7.3/20030305&amp;quot;&lt;br /&gt;
VE_TYPE=&amp;quot;regular&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.225.229&amp;quot;&lt;br /&gt;
HOSTNAME=&amp;quot;textengine.net&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the hostname is set here, the disk space is set here, the number of inodes, the number of files that can be open, the number of tcp sockets, etc. - all are set here.&lt;br /&gt;
&lt;br /&gt;
In fact, everything that can be set on this customer system is set in this conf file.&lt;br /&gt;
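&lt;br /&gt;
Since the conf file is plain KEY=&amp;quot;value&amp;quot; lines, a one-line sed can pull any parameter out of it. This is a sketch – &amp;lt;tt&amp;gt;conf_get&amp;lt;/tt&amp;gt; is made up, and the sample lines stand in for a real &amp;lt;tt&amp;gt;/vzconf/(VEID).conf&amp;lt;/tt&amp;gt;:&lt;br /&gt;

```shell
# conf_get: pull one KEY="value" parameter out of conf-file text on stdin.
# (Illustrative helper; on a virt you would feed it the real file, e.g.
#  cat /vzconf/999.conf | conf_get DISKSPACE)
conf_get() { sed -n "s/^$1=\"\(.*\)\"/\1/p"; }

# sample lines standing in for a real conf file
sample='DISKSPACE="4194304:4613734"
HOSTNAME="textengine.net"'

printf '%s\n' "$sample" | conf_get DISKSPACE    # 4194304:4613734
printf '%s\n' "$sample" | conf_get HOSTNAME     # textengine.net
```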
&lt;br /&gt;
&lt;br /&gt;
All interaction with the customer system is done with the VEID.  You start the system by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl start 999&lt;br /&gt;
&lt;br /&gt;
You stop it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl stop 999&lt;br /&gt;
&lt;br /&gt;
You execute commands in it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl exec 999 df -k&lt;br /&gt;
&lt;br /&gt;
You enter into it, via a root-shell backdoor with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and you set parameters for the system, while it is still running, with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; is the most commonly used command - we have aliased &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; since we use it so often. We’ll continue to use &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; in our examples, but feel free to use just &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say the user wants more diskspace.  You can cat their conf file and see:&lt;br /&gt;
&lt;br /&gt;
 DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So right now they have 4gigs of space.  You can then change it to 6 with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
IMPORTANT: all issuances of the vzctl set command need to end with &amp;lt;tt&amp;gt;--save&amp;lt;/tt&amp;gt; - if they don&#039;t, the setting will be applied, but it will not be saved to the conf file, and they will not have those settings the next time they boot.&lt;br /&gt;
&lt;br /&gt;
All of the tunables in the conf file can be set with the vzctl set command.  Note that in the conf file, and on the vzctl set command line, we always issue two numbers separated by a colon - that is because we are setting the soft and hard limits.  Always set the hard limit slightly above the soft limit, as you see it is in the conf file for all those settings.&lt;br /&gt;
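&lt;br /&gt;
If you don’t want to compute the pair by hand, you can generate it. In this sketch the hard limit lands about 5% above the soft limit – the helper name and the 5% margin are arbitrary illustrations, not policy:&lt;br /&gt;

```shell
# mklimit: print a vzctl set invocation with the hard limit about 5%
# above the soft limit. (Illustrative only - pick margins per counter.)
mklimit() {  # usage: mklimit VEID PARAM SOFT
  soft=$3
  hard=$(( soft + soft / 20 ))
  echo "vzctl set $1 --$2 ${soft}:${hard} --save"
}

mklimit 999 diskspace 6100000
# vzctl set 999 --diskspace 6100000:6405000 --save
```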
&lt;br /&gt;
There are also things you can set with `&amp;lt;tt&amp;gt;vzctl set&amp;lt;/tt&amp;gt;` that are not in the conf file as settings, per se.  For instance, you can add IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --save&lt;br /&gt;
&lt;br /&gt;
or multiple IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --ipadd 10.10.20.30 --save&lt;br /&gt;
&lt;br /&gt;
or change the hostname:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --hostname www.example.com --save&lt;br /&gt;
&lt;br /&gt;
You can even set the nameservers:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --nameserver 198.78.66.4 --nameserver 198.78.70.180 --save&lt;br /&gt;
&lt;br /&gt;
Although you probably will never do that.&lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --disabled yes --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (&amp;lt;=3.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --onboot=no --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from using his control panel:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --offline_management=no --save &lt;br /&gt;
&lt;br /&gt;
You can suspend a VPS, so it can be resumed in the same state it was in when it was stopped (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl suspend 999&lt;br /&gt;
&lt;br /&gt;
and to resume it:&lt;br /&gt;
&lt;br /&gt;
 vzctl resume 999&lt;br /&gt;
&lt;br /&gt;
To see which CT owns a process:&lt;br /&gt;
 vzpid &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to mount up an unmounted ve:&lt;br /&gt;
 vzctl mount 827&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see network stats for CT&#039;s:&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One thing that sometimes comes up on older systems that we created with smaller defaults is that the system would run out of inodes.  The user will email and say they cannot create any more files or grow any files larger, but they will also say that they are not out of diskspace ... they are running:&lt;br /&gt;
&lt;br /&gt;
 df -k&lt;br /&gt;
&lt;br /&gt;
and seeing how much space is free - and they are not out of space.  They are most likely out of inodes - which they would see by running:&lt;br /&gt;
&lt;br /&gt;
 df -i&lt;br /&gt;
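&lt;br /&gt;
If you want just the inode-usage percentage without eyeballing the whole table, the &amp;lt;tt&amp;gt;df -i&amp;lt;/tt&amp;gt; output is easy to trim with awk. A sketch – the sample line here stands in for real &amp;lt;tt&amp;gt;df -i&amp;lt;/tt&amp;gt; output from inside the CT:&lt;br /&gt;

```shell
# Print mount point and inode-usage percentage from df -i output.
# (Sketch: the printf below stands in for running df -i inside the CT.)
inode_pct() { awk 'NR != 1 { print $6, $5 }'; }

printf 'Filesystem Inodes IUsed IFree IUse%% Mounted on\n/dev/vzfs 200000 200000 0 100%% /\n' | inode_pct
# / 100%
```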
&lt;br /&gt;
So, the first thing you should do is enter their system with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and run:  &amp;lt;tt&amp;gt;df -i&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to confirm your theory.  Then exit their system.  Then simply cat their conf file and see what their inodes are set to (probably 200000:200000, since that was the old default on the older systems) and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskinodes 400000:400000 --save&lt;br /&gt;
&lt;br /&gt;
If they are not out of inodes, then a good possibility is that they have maxed out their numfile configuration variable, which controls how many files they can have in their system.  The current default is 7500 (which nobody has ever hit), but the old default was as low as 2000, so you would run something like:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --numfile 7500:7500 --save&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You cannot start or stop a VE if your pwd is its private (/vz/private/999) or root (/vz/root/999) directories, or anywhere below them.&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (linux) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all VEs back up as soon as possible. Note the time; you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log – these will also likely need to be sent to Virtuozzo for interpretation. If the messages are spewing too fast, hit ^O + H to start a screen log dump which you can observe after the machine is rebooted. Additionally, if the machine is responsive, you can get a trace to send to Virtuozzo by hooking up a KVM and entering these 3 sequences:&lt;br /&gt;
&amp;lt;pre&amp;gt;alt+print screen+m&lt;br /&gt;
alt+print screen+p&lt;br /&gt;
alt+print screen+t&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If there are no messages, the machine may just be really busy - wait a bit (5-10 min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. if you can’t ssh into the DRAC card and issue &amp;lt;tt&amp;gt;racadm serveraction hardreset&amp;lt;/tt&amp;gt;), then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console (&amp;lt;tt&amp;gt;tip virtxx&amp;lt;/tt&amp;gt;) immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts- look out for the RAID card output- does everything look healthy? The output may be scrambled, look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple times during the boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly then you won’t get any console output, 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
=== Start containers/VE&#039;s/VPSs ===&lt;br /&gt;
When the machine begins to start VEs, it’s safe to leave the console and log in via ssh. All virts should be set to auto-start all the VEs after a crash. Further, most (newer) virts are set to “fastboot” their VEs (to find out, do:&lt;br /&gt;
 grep -i fast /etc/sysconfig/vz &lt;br /&gt;
and look for &amp;lt;tt&amp;gt;VZFASTBOOT=yes&amp;lt;/tt&amp;gt;). If this was set prior to the machine’s crash (setting it after the machine boots will not have any effect until the vz service is restarted) it will start each ve as fast as possible, in serial, then go thru each VE (serially), shutting it down running a vzquota (disk usage) check, then bringing it back up. The benefit is that all VE’s are brought up quickly (within 15min or so depending on the #), the downside is a customer watching closely will notice 2 outages – 1st the machine crash, 2nd their quota check (which will be a much shorter downtime- on the order of a few minutes). &lt;br /&gt;
&lt;br /&gt;
Where “fastboot” is not set to yes (i.e. on quar1), vz will start them consecutively, checking the quotas one at a time, and the 60th VE may not start until an hour or two later - this is not acceptable.&lt;br /&gt;
&lt;br /&gt;
The good news is, if you run vzctl start for a VE that is already started, you will simply get an error: &amp;lt;tt&amp;gt;VE is already started&amp;lt;/tt&amp;gt;.  Further, if you attempt to vzctl start a VE that is in the process of being started, you will simply get an error: &amp;lt;tt&amp;gt;unable to lock VE&amp;lt;/tt&amp;gt;.  So, there is no danger in simply running scripts to start smaller sets of VEs.  If the system is not autostarting, then there is no issue, and even if it does, when it conflicts, one process (yours or the autostart) will lose, and just move on to the next one.&lt;br /&gt;
&lt;br /&gt;
A script has been written to assist with ve starts: [[#startvirt.pl|startvirt.pl]] which will start 6 ve’s at once until there are no more left.  If startvirt.pl  is used on a system where “fastboot” was on,  it will circumvent the fastboot for ve’s started by startvirt.pl – they will go through the complete quota check before starting- therefore this is not advisable when a system has crashed. When a system is booted cleanly, and there&#039;s no need for vzquota checks, then startvirt.pl is safe and advisable to run.&lt;br /&gt;
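&lt;br /&gt;
The idea behind startvirt.pl can be sketched in a few lines of shell. This is a dry run that only echoes the commands; the &amp;lt;tt&amp;gt;vzlist -H -S -o veid&amp;lt;/tt&amp;gt; invocation for listing stopped CTs is an assumption to verify on the virt before trusting it:&lt;br /&gt;

```shell
# Dry-run sketch of starting stopped CTs 6 at a time, like startvirt.pl.
# On a real virt the list would come from something like:
#   vzlist -H -S -o veid     (flags are an assumption - verify locally)
stopped_cts() { printf '%s\n' 101 102 103; }   # stand-in list for illustration

# echo instead of executing, so this stays a dry run
stopped_cts | xargs -P 6 -I{} echo vzctl start {}
```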
&lt;br /&gt;
=== Make sure all containers are running ===&lt;br /&gt;
You can quickly get a feel for how many ve’s are started by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt4 log]# vs&lt;br /&gt;
VEID 16066 exist mounted running&lt;br /&gt;
VEID 16067 exist mounted running&lt;br /&gt;
VEID 4102 exist mounted running&lt;br /&gt;
VEID 4112 exist mounted running&lt;br /&gt;
VEID 4116 exist mounted running&lt;br /&gt;
VEID 4122 exist mounted running&lt;br /&gt;
VEID 4123 exist mounted running&lt;br /&gt;
VEID 4124 exist mounted running&lt;br /&gt;
VEID 4132 exist mounted running&lt;br /&gt;
VEID 4148 exist mounted running&lt;br /&gt;
VEID 4151 exist mounted running&lt;br /&gt;
VEID 4155 exist mounted running&lt;br /&gt;
VEID 42 exist mounted running&lt;br /&gt;
VEID 432 exist mounted running&lt;br /&gt;
VEID 434 exist mounted running&lt;br /&gt;
VEID 442 exist mounted running&lt;br /&gt;
VEID 450 exist mounted running&lt;br /&gt;
VEID 452 exist mounted running&lt;br /&gt;
VEID 453 exist mounted running&lt;br /&gt;
VEID 454 exist mounted running&lt;br /&gt;
VEID 462 exist mounted running&lt;br /&gt;
VEID 463 exist mounted running&lt;br /&gt;
VEID 464 exist mounted running&lt;br /&gt;
VEID 465 exist mounted running&lt;br /&gt;
VEID 477 exist mounted running&lt;br /&gt;
VEID 484 exist mounted running&lt;br /&gt;
VEID 486 exist mounted running&lt;br /&gt;
VEID 490 exist mounted running&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So to see how many ve’s have started:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep running | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And to see how many haven’t:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep down | wc -l&lt;br /&gt;
     0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And how many we should have running:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
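&lt;br /&gt;
Rather than comparing the two counts by eye, you can have awk flag any VE that is not running. A sketch – &amp;lt;tt&amp;gt;check_all_running&amp;lt;/tt&amp;gt; is made up, and on a virt you would feed it the output of &amp;lt;tt&amp;gt;vs&amp;lt;/tt&amp;gt; rather than the sample lines used here:&lt;br /&gt;

```shell
# Flag any VE whose vs line does not say "running".
# (Sketch - on a virt: vs | check_all_running)
check_all_running() { awk '$0 !~ /running/ { print "not running:", $2 }'; }

# sample lines standing in for vs output
printf 'VEID 42 exist mounted running\nVEID 57 exist mounted down\n' | check_all_running
# not running: 57
```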
&lt;br /&gt;
Another tool you can use to see which ve’s have started, among other things is [[#vzstat|vzstat]]. It will give you CPU, memory, and other  stats on each ve and the overall system. It’s a good thing to watch as ve’s are starting (note the VENum parameter, it will tell you how many have started):&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;4:37pm, up 3 days,  5:31,  1 user, load average: 1.57, 1.68, 1.79&lt;br /&gt;
VENum 40, procs 1705: running 2, sleeping 1694, unint 0, zombie 9, stopped 0&lt;br /&gt;
CPU [ OK ]: VEs  57%, VE0   0%, user   8%, sys   7%, idle  85%, lat(ms) 412/2&lt;br /&gt;
Mem [ OK ]: total 6057MB, free 9MB/54MB (low/high), lat(ms) 0/0&lt;br /&gt;
Swap [ OK ]: tot 6142MB, free 4953MB, in 0.000MB/s, out 0.000MB/s&lt;br /&gt;
Net [ OK ]: tot: in  0.043MB/s  402pkt/s, out  0.382MB/s 4116pkt/s&lt;br /&gt;
Disks [ OK ]: in 0.002MB/s, out 0.000MB/s&lt;br /&gt;
&lt;br /&gt;
  VEID ST    %VM     %KM         PROC    CPU     SOCK FCNT MLAT IP&lt;br /&gt;
     1 OK 1.0/17  0.0/0.4    0/32/256 0.0/0.5 39/1256    0    9 69.55.227.152&lt;br /&gt;
    21 OK 1.3/39  0.1/0.2    0/46/410 0.2/2.8 23/1860    0    6 69.55.239.60&lt;br /&gt;
   133 OK 3.1/39  0.1/0.3    1/34/410 6.3/2.8 98/1860    0    0 69.55.227.147&lt;br /&gt;
   263 OK 2.3/39  0.1/0.2    0/56/410 0.3/2.8 34/1860    0    1 69.55.237.74&lt;br /&gt;
   456 OK  17/39  0.1/0.2   0/100/410 0.1/2.8 48/1860    0   11 69.55.236.65&lt;br /&gt;
   476 OK 0.6/39  0.0/0.2    0/33/410 0.1/2.8 96/1860    0   10 69.55.227.151&lt;br /&gt;
   524 OK 1.8/39  0.1/0.2    0/33/410 0.0/2.8 28/1860    0    0 69.55.227.153&lt;br /&gt;
   594 OK 3.1/39  0.1/0.2    0/45/410 0.0/2.8 87/1860    0    1 69.55.239.40&lt;br /&gt;
   670 OK 7.7/39  0.2/0.3    0/98/410 0.0/2.8 64/1860    0  216 69.55.225.136&lt;br /&gt;
   691 OK 2.0/39  0.1/0.2    0/31/410 0.0/0.7 25/1860    0    1 69.55.234.96&lt;br /&gt;
   744 OK 0.1/17  0.0/0.5    0/10/410 0.0/0.7  7/1860    0    6 69.55.224.253&lt;br /&gt;
   755 OK 1.1/39  0.0/0.2    0/27/410 0.0/2.8 33/1860    0    0 192.168.1.4&lt;br /&gt;
   835 OK 1.1/39  0.0/0.2    0/19/410 0.0/2.8  5/1860    0    0 69.55.227.134&lt;br /&gt;
   856 OK 0.3/39  0.0/0.2    0/13/410 0.0/2.8 16/1860    0    0 69.55.227.137&lt;br /&gt;
   936 OK 3.2/52  0.2/0.4    0/75/410 0.2/0.7 69/1910    0    8 69.55.224.181&lt;br /&gt;
  1020 OK 3.9/39  0.1/0.2    0/60/410 0.1/0.7 55/1860    0    8 69.55.227.52&lt;br /&gt;
  1027 OK 0.3/39  0.0/0.2    0/14/410 0.0/2.8 17/1860    0    0 69.55.227.83&lt;br /&gt;
  1029 OK 1.9/39  0.1/0.2    0/48/410 0.2/2.8 25/1860    0    5 69.55.227.85&lt;br /&gt;
  1032 OK  12/39  0.1/0.4    0/80/410 0.0/2.8 41/1860    0    8 69.55.227.90&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you are all done, you will want to make sure that all the VEs really did get started, run vs one more time.&lt;br /&gt;
&lt;br /&gt;
Note the time all ve’s are back up and enter that into and save the crash log entry.&lt;br /&gt;
&lt;br /&gt;
Occasionally, a ve will not start automatically. The most common reason for a ve not to come up normally is the ve was at its disk limit before the crash, and will not start since they’re over the limit. To overcome this, set the disk space to the current usage level (the system will give this to you when it fails to start), start the ve, then re-set the disk space back to the prior level. Lastly, contact the customer to let them know they’re out of disk (or allocate more disk if they&#039;re entitled to more).&lt;br /&gt;
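&lt;br /&gt;
That recovery sequence can be sketched as a dry run (it only echoes the three commands; the function name and the ~2% temporary headroom are illustrative, not policy):&lt;br /&gt;

```shell
# Dry-run sketch of the over-quota recovery above: bump diskspace to the
# current usage (plus ~2% so the start succeeds - arbitrary margin),
# start the VE, then restore the prior soft:hard pair.
recover_over_quota() {  # usage: recover_over_quota VEID CUR_USAGE OLD_SOFT:OLD_HARD
  ve=$1; cur=$2; old=$3
  bump=$(( cur + cur / 50 ))
  echo "vzctl set $ve --diskspace ${cur}:${bump} --save"
  echo "vzctl start $ve"
  echo "vzctl set $ve --diskspace $old --save"
}

recover_over_quota 999 4300000 4194304:4613734
```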
&lt;br /&gt;
== Hitting performance barriers and fixing them ==&lt;br /&gt;
&lt;br /&gt;
There are multiple modes virtuozzo offers to allocate resources to a ve. We utilize 2: SLM and UBC parameters&lt;br /&gt;
On our 4.x systems, we use all SLM – it’s simpler to manage and understand. There are a few systems on virt19/18 that may also use SLM. Everything else uses UBC. &lt;br /&gt;
You can tell a SLM ve by:&lt;br /&gt;
&lt;br /&gt;
 SLMMODE=&amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in their conf file. &lt;br /&gt;
&lt;br /&gt;
TODO: detail SLM modes and parameters.&lt;br /&gt;
&lt;br /&gt;
If someone is in SLM mode and they hit memory resource limits, they simply need to upgrade to more memory.&lt;br /&gt;
&lt;br /&gt;
The following applies to everyone else (UBC).&lt;br /&gt;
&lt;br /&gt;
Customers will often email and say that they are getting out of memory errors - a common one is &amp;quot;cannot fork&amp;quot; ... basically, anytime you see something odd like this, it means they are hitting one of their limits that is in place in their conf file.&lt;br /&gt;
&lt;br /&gt;
The conf file, however, simply shows their limits - how do we know what they are currently at ?&lt;br /&gt;
&lt;br /&gt;
The answer is a file called &amp;lt;tt&amp;gt;/proc/user_beancounters&amp;lt;/tt&amp;gt; - this file contains the current status (and peaks) of their performance settings, and also counts how many times they have hit the barrier.  The output of the file looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;764: kmemsize         384113     898185    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          0        322        322          0&lt;br /&gt;
     privvmpages        1292       7108     610000     615000          0&lt;br /&gt;
     shmpages            270        528      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc               8         23        410        415          0&lt;br /&gt;
     physpages            48       5624          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages        641       6389      13019 2147483647          0&lt;br /&gt;
     numtcpsock            3         21       1210       1215          0&lt;br /&gt;
     numflock              1          3        107        117          0&lt;br /&gt;
     numpty                0          2         19         19          0&lt;br /&gt;
     numsiginfo            0          4        274        274          0&lt;br /&gt;
     tcpsndbuf             0      80928    1800000    1900000          0 &lt;br /&gt;
     tcprcvbuf             0     108976    1800000    1900000          0&lt;br /&gt;
     othersockbuf       2224      37568     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4272     200000     200000          0&lt;br /&gt;
     numothersock          3          9        650        660          0&lt;br /&gt;
     dcachesize        53922     100320     786432     818029          0&lt;br /&gt;
     numfile             161        382       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first column is the name of the counter in question - the same names we saw in the system&#039;s conf file.  The second column is the &#039;&#039;current&#039;&#039; value of that counter, the third column is the max that counter has ever risen to, the fourth column is the soft limit, and the fifth column is the hard limit (which is the same as the numbers in that system&#039;s conf file).&lt;br /&gt;
&lt;br /&gt;
The sixth number is the failcount - how many times the current usage has risen to hit the barrier.  It will increase as soon as the current usage hits the soft limit.&lt;br /&gt;
&lt;br /&gt;
The problem with /proc/user_beancounters is that it actually contains that set of data for every running VE - so you can&#039;t just cat /proc/user_beancounters - it is too long and you get info for every other running system.&lt;br /&gt;
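&lt;br /&gt;
If all you care about is which counters have failed anywhere on the box, you can filter the whole file down to the nonzero-failcnt lines. A sketch – &amp;lt;tt&amp;gt;failing&amp;lt;/tt&amp;gt; is made up, and the sample input stands in for the real &amp;lt;tt&amp;gt;/proc/user_beancounters&amp;lt;/tt&amp;gt;:&lt;br /&gt;

```shell
# Print "resource failcnt" for every counter with a nonzero failcnt.
# Works on the whole /proc/user_beancounters; the VEID-prefixed first
# line of each block is handled by checking for the trailing colon.
failing() {
  awk '$6 != "" && $1 != "uid" && $NF+0 != 0 {
         r = ($1 ~ /:$/) ? $2 : $1
         print r, $NF
       }'
}

# sample lines standing in for /proc/user_beancounters
printf '765: kmemsize 2007936 2562780 8100000 8200000 0\n     numtcpsock 23 37 1210 1215 4\n' | failing
# numtcpsock 4
```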
&lt;br /&gt;
You can vzctl enter the system and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 9999&lt;br /&gt;
 cat /proc/user_beancounters&lt;br /&gt;
&lt;br /&gt;
inside their system, and you will see the stats for just their particular system, but entering their system every time you want to check is cumbersome.&lt;br /&gt;
&lt;br /&gt;
So, I wrote a simple script called &amp;quot;vzs&amp;quot; which simply greps for the VEID, and spits out the next 20 or so lines (however many lines there are in the output, I forget) after it.  For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzs 765:&lt;br /&gt;
765: kmemsize        2007936    2562780    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          8        322        322          0&lt;br /&gt;
     privvmpages       26925      71126     610000     615000          0&lt;br /&gt;
     shmpages          16654      16750      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc              41         57        410        415          0&lt;br /&gt;
     physpages          1794      49160          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages       4780      51270      13019 2147483647          0&lt;br /&gt;
     numtcpsock           23         37       1210       1215          0&lt;br /&gt;
     numflock             17         39        107        117          0&lt;br /&gt;
     numpty                1          3         19         19          0&lt;br /&gt;
     numsiginfo            0          6        274        274          0&lt;br /&gt;
     tcpsndbuf         22240     333600    1800000    1900000          0&lt;br /&gt;
     tcprcvbuf             0     222656    1800000    1900000          0&lt;br /&gt;
     othersockbuf     104528     414944     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4448     200000     200000          0&lt;br /&gt;
     numothersock         73        105        650        660          0&lt;br /&gt;
     dcachesize       247038     309111     786432     818029          0&lt;br /&gt;
     numfile             904       1231       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That showed us just the portion of /proc/user_beancounters for system 765.&lt;br /&gt;
&lt;br /&gt;
When you run the vzs command, always add a : after the VEID.&lt;br /&gt;
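&lt;br /&gt;
The vzs helper can be sketched roughly as follows (a sketch only - the real in-house script may differ; the optional file argument is illustrative, added so the sketch can be exercised on a sample file, while the real script reads /proc/user_beancounters):&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/sh
# Rough sketch of the in-house "vzs" helper (the real script may differ).
# Greps /proc/user_beancounters for the given VEID (with its trailing
# colon, e.g. "vzs 765:") and prints the ~23 resource lines that follow.
vzs() {
    grep -A 23 "$1" "${2:-/proc/user_beancounters}"
}
```
&lt;br /&gt;
The trailing colon keeps the grep from also matching the same digits elsewhere in the table (e.g. inside another VEID or a counter value).&lt;br /&gt;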
&lt;br /&gt;
So, if a customer complains about out of memory errors, or no more files, or no more ptys, or just has an unspecific complaint about processes dying, etc., the very first thing you should do is check their beancounters with vzs.  Usually you will spot an item that has a high failcount and needs to be upped.&lt;br /&gt;
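&lt;br /&gt;
To spot those items quickly, you can filter a beancounters dump for nonzero failcounts - a minimal sketch (the sample lines are illustrative, not from a real VE):&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/sh
# Print only the counters whose failcnt (last field) is nonzero.
# Beancounter field layout, per the table above:
#   name  held  maxheld  barrier  limit  failcnt
failing() {
    awk 'NF >= 6 && $NF ~ /^[0-9]+$/ && $NF > 0 { print $1, "failcnt=" $NF }'
}

# Illustrative sample: only numfile has ever hit its barrier.
printf '%s\n' \
    'kmemsize 2007936 2562780 8100000 8200000 0' \
    'numfile  904 1231 7500 7600 3' | failing   # -> numfile failcnt=3
```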
&lt;br /&gt;
At that point you could simply up the counter with `vzctl set`.  Generally pick a number 10-20% higher than the old one, and make the hard limit slightly larger than the soft limit. However, our systems now come in several levels, and those levels have more/different memory allocations. If someone is complaining about something other than a memory limit (pty, numiptent, numflock), it’s generally safe to increase it, at least to the same level as what’s in the /vzconf/4unlimited file on the newest virt. If someone is hitting a memory limit, first make sure they are given what they deserve:&lt;br /&gt;
&lt;br /&gt;
(refer to mgmt -&amp;gt; payments -&amp;gt; packages)&lt;br /&gt;
&lt;br /&gt;
To set those levels, you use the [[#setmem|setmem]] command. &lt;br /&gt;
&lt;br /&gt;
The alternate (DEPRECATED) method would be to use one of these commands:&lt;br /&gt;
256 &amp;lt;veid&amp;gt;&lt;br /&gt;
300 &amp;lt;veid&amp;gt;&lt;br /&gt;
384 &amp;lt;veid&amp;gt;&lt;br /&gt;
512 &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the levels were not right (you’d run vzs &amp;lt;veid&amp;gt; before and after to see the effect), tell the customer they’ve been adjusted and be done with it. If the levels were right, tell the customer they must upgrade to a higher package, tell them how to see their level (in the control panel), and that they can reboot their system to escape this lockup condition.&lt;br /&gt;
&lt;br /&gt;
Customers can also complain that their site is totally unreachable or down ... if the underlying machine is up and all seems well, you may notice in the beancounters that network-specific counters are failing - such as numtcpsock, tcpsndbuf or tcprcvbuf.  This will keep the ve from talking on the network and make it seem like the system is down.  Again, just up the limits and things should be fine.&lt;br /&gt;
&lt;br /&gt;
On virts 1-4, you should first look at the default settings for that item on a later virt, such as virt 8 - we have increased the defaults a lot since the early machines.  So, if you are going to up a counter on virt2, instead of upping it by 10-20%, instead up it to the new default that you see on virt8.&lt;br /&gt;
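&lt;br /&gt;
Upping a counter with &amp;lt;tt&amp;gt;vzctl set&amp;lt;/tt&amp;gt; then looks like this sketch (the VEID, the numflock counter, and the numbers are illustrative; the command is echoed rather than run):&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/sh
# Sketch: raise a failing counter per the rule of thumb above - new soft
# limit 10-20% above the old one, hard limit slightly above the new soft.
# VEID 765 and the numflock numbers are illustrative.
OLD_SOFT=107
NEW_SOFT=$(( OLD_SOFT + OLD_SOFT * 15 / 100 ))   # +15% -> 123
NEW_HARD=$(( NEW_SOFT + NEW_SOFT * 2 / 100 ))    # ~2% above soft -> 125
echo "vzctl set 765 --numflock ${NEW_SOFT}:${NEW_HARD} --save"
```
&lt;br /&gt;
vzctl takes the limits as a barrier:limit pair, and --save writes the change into the system&#039;s conf file so it persists across restarts.&lt;br /&gt;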
&lt;br /&gt;
== Moving a VE to another virt (migrate/migrateonline) ==&lt;br /&gt;
&lt;br /&gt;
This will take a while to complete - and it is best to do this at night when the load is light on both machines.&lt;br /&gt;
&lt;br /&gt;
There are different methods for this, depending on which version of virtuozzo is installed on the src. and dst. virt. &lt;br /&gt;
To check which version is running: &lt;br /&gt;
 [root@virt12 private]# cat /etc/virtuozzo-release&lt;br /&gt;
 Virtuozzo release 2.6.0&lt;br /&gt;
&lt;br /&gt;
Ok, let&#039;s say that the VE is 1212, and vital stats are:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1212&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/1212&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/1212&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;fedora-core-2/20040903&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.229.84&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;devel-fc2/20040903 php-fc2/20040813 mysql-fc2/20040812 postgresql-fc2/20040813 mod_perl-fc2/20040812 mod_ssl-fc2/20040811 jre-fc2/20040823 jdk-fc2/20040823 mailman-fc2/20040823 analog-fc2/20040824 proftpd-fc2/20040818 tomcat-fc2/20040823 usermin-fc2/20040909 webmin-fc2/20040909 uw-imap-fc2/20040830 phpBB-fc2/20040831 spamassassin-fc2/20040910 PostNuke-fc2/20040824 sl-webalizer-fc2/20040&lt;br /&gt;
818&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# vzctl exec 1212 df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  405M  3.7G  10% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From this you can see that he’s using (and will minimally need free on the dst server) ~400MB, and he’s running on a Fedora 2 template, version 20040903. He’s also got a bunch of other templates installed. It is &#039;&#039;&#039;vital&#039;&#039;&#039; that &#039;&#039;&#039;all&#039;&#039;&#039; these templates exist on the dst system. To confirm that, on the dst system run:&lt;br /&gt;
&lt;br /&gt;
For &amp;lt; 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt14 private]# vzpkgls | grep fc2&lt;br /&gt;
devel-fc2 20040903&lt;br /&gt;
PostNuke-fc2 20040824&lt;br /&gt;
analog-fc2 20040824&lt;br /&gt;
awstats-fc2 20040824&lt;br /&gt;
bbClone-fc2 20040824&lt;br /&gt;
jdk-fc2 20040823&lt;br /&gt;
jre-fc2 20040823&lt;br /&gt;
mailman-fc2 20040823&lt;br /&gt;
mod_frontpage-fc2 20040816&lt;br /&gt;
mod_perl-fc2 20040812&lt;br /&gt;
mod_ssl-fc2 20040811&lt;br /&gt;
mysql-fc2 20040812&lt;br /&gt;
openwebmail-fc2 20040817&lt;br /&gt;
php-fc2 20040813&lt;br /&gt;
phpBB-fc2 20040831&lt;br /&gt;
postgresql-fc2 20040813&lt;br /&gt;
proftpd-fc2 20040818&lt;br /&gt;
sl-webalizer-fc2 20040818&lt;br /&gt;
spamassassin-fc2 20040910&lt;br /&gt;
tomcat-fc2 20040823&lt;br /&gt;
usermin-fc2 20040909&lt;br /&gt;
uw-imap-fc2 20040830&lt;br /&gt;
webmin-fc2 20040909&lt;br /&gt;
[root@virt14 private]# vzpkgls | grep fedora&lt;br /&gt;
fedora-core-1 20040121 20040818&lt;br /&gt;
fedora-core-devel-1 20040121 20040818&lt;br /&gt;
fedora-core-2 20040903&lt;br /&gt;
[root@virt14 private]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For these older systems, you can simply match up the date on the template. &lt;br /&gt;
&lt;br /&gt;
For &amp;gt;= 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt19 /vz2/private]# vzpkg list&lt;br /&gt;
centos-5-x86                    2008-01-07 22:05:57&lt;br /&gt;
centos-5-x86    devel&lt;br /&gt;
centos-5-x86    jre&lt;br /&gt;
centos-5-x86    jsdk&lt;br /&gt;
centos-5-x86    mod_perl&lt;br /&gt;
centos-5-x86    mod_ssl&lt;br /&gt;
centos-5-x86    mysql&lt;br /&gt;
centos-5-x86    php&lt;br /&gt;
centos-5-x86    plesk9&lt;br /&gt;
centos-5-x86    plesk9-antivirus&lt;br /&gt;
centos-5-x86    plesk9-api&lt;br /&gt;
centos-5-x86    plesk9-atmail&lt;br /&gt;
centos-5-x86    plesk9-backup&lt;br /&gt;
centos-5-x86    plesk9-horde&lt;br /&gt;
centos-5-x86    plesk9-mailman&lt;br /&gt;
centos-5-x86    plesk9-mod-bw&lt;br /&gt;
centos-5-x86    plesk9-postfix&lt;br /&gt;
centos-5-x86    plesk9-ppwse&lt;br /&gt;
centos-5-x86    plesk9-psa-firewall&lt;br /&gt;
centos-5-x86    plesk9-psa-vpn&lt;br /&gt;
centos-5-x86    plesk9-psa-fileserver&lt;br /&gt;
centos-5-x86    plesk9-qmail&lt;br /&gt;
centos-5-x86    plesk9-sb-publish&lt;br /&gt;
centos-5-x86    plesk9-vault&lt;br /&gt;
centos-5-x86    plesk9-vault-most-popular&lt;br /&gt;
centos-5-x86    plesk9-watchdog&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On these newer systems, it&#039;s difficult to tell whether the template on the dst matches the src exactly. Just because centos-5-x86 is listed on both servers doesn&#039;t mean all the same packages are on the dst. To truly know, you must perform a sample rsync:&lt;br /&gt;
&lt;br /&gt;
 rsync -avn /vz/template/centos/5/x86/ root@10.1.4.61:/vz/template/centos/5/x86/&lt;br /&gt;
&lt;br /&gt;
If you see a ton of output from the dry run command, then clearly there are some differences. You may opt to let the rsync complete (without running in dry run mode); the only downside is that you&#039;ve now used up more space on the dst, and the centos template will be a mess of old and new data - difficult if not impossible to undo (if someday we wanted to reclaim the space).&lt;br /&gt;
&lt;br /&gt;
If you choose to merge templates, you should closely inspect the dry run output. You should also take care to exclude anything in the /config directory. For example:&lt;br /&gt;
&lt;br /&gt;
 rsync -av -e ssh --stats --exclude=x86/config  /vz/template/ubuntu/10.04/ root@10.1.4.62:/vz/template/ubuntu/10.04/&lt;br /&gt;
&lt;br /&gt;
Which will avoid this directory and contents:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 /vz2/private]# ls /vz/template/ubuntu/10.04/x86/config*&lt;br /&gt;
app  os&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is important to avoid since the config may differ on the destination, and we are really only interested in making sure the packages are there, not in overwriting a newer config with an older one.&lt;br /&gt;
&lt;br /&gt;
If the dst system was missing a template, you have 2 choices: &lt;br /&gt;
# put the missing template on the dst system. 2 choices here: &lt;br /&gt;
## Install the template from rpm (found under backup2: /mnt/data4/vzrpms/distro/) or &lt;br /&gt;
## rsync over the template (found under /vz/template) - see above&lt;br /&gt;
# put the ve on a system which has all the proper templates&lt;br /&gt;
&lt;br /&gt;
=== pre-seeding a migration ===&lt;br /&gt;
&lt;br /&gt;
When migrating a customer (or when doing many), depending on how much data you have to transfer, it can take some time. Further, it can be difficult to gauge when a migration will complete or how long it will take. To help speed up the process and get a better idea about how long it will take, you can pre-transfer a customer&#039;s data to the destination server. If done correctly, vzmigrate will see the pre-transferred data and pick up where you left off, having much less to transfer (just changed/new files). &lt;br /&gt;
&lt;br /&gt;
We believe vzmigrate uses rsync to do its transfer. Therefore not only can you use rsync to do a pre-seed, you can also run rsync to see what is causing a repeatedly-failing vzmigrate to fail. &lt;br /&gt;
&lt;br /&gt;
There&#039;s no magic to a pre-seed; you just need to make sure the destination directory is named correctly.&lt;br /&gt;
&lt;br /&gt;
Given:&lt;br /&gt;
&lt;br /&gt;
source: /vz1/private/1234&lt;br /&gt;
&lt;br /&gt;
and you want to migrate to /vz2 on the target system, your rsync would look like:&lt;br /&gt;
&lt;br /&gt;
 rsync -av /vz1/private/1234/ root@x.x.x.x:/vz2/private/1234.migrated/&lt;br /&gt;
&lt;br /&gt;
After running that successful rsync, the ensuing migrateonline (or migrate) will take much less time to complete - depending on the # of files to be analyzed and the # of changed files. In any case, it&#039;ll be much, much faster than if you had just started the migration from scratch.&lt;br /&gt;
&lt;br /&gt;
Further, as we discuss elsewhere in this topic, a failed migration can be moved from &amp;lt;tt&amp;gt;/vz/private/1234&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;/vz/private/1234.migrated&amp;lt;/tt&amp;gt; on the destination if you want to restart a failed migration. This should &#039;&#039;&#039;only&#039;&#039;&#039; be done if the migration failed and the CT is not running on the destination HN.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline instructions: src &amp;gt;=3.x -&amp;gt; dst &amp;gt;=3.x ===&lt;br /&gt;
&lt;br /&gt;
A script called [[#migrateonline|migrateonline]] was written to handle this kind of move. It is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; – a util to seamlessly (no reboot of the ve necessary) move a ve from one host to another. This wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from the arp/route tables, causing problems when the ve was started up on the dst system; the wrapper mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrateonline emails ve owners when their migration starts and stops. For this to happen, they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as it was on the src virt. Note: &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; is equivalent to &amp;lt;tt&amp;gt;migrateonline&amp;lt;/tt&amp;gt;, but will migrate the ve AND restart it in the process.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrateonline&lt;br /&gt;
usage: /usr/local/sbin/migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrateonline 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine.&lt;br /&gt;
&lt;br /&gt;
If they had backups, use the mvbackups command to move their backups to the new server:&lt;br /&gt;
&lt;br /&gt;
 mvbackups 1212 virt14 vz&lt;br /&gt;
&lt;br /&gt;
Rename the ve&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/migrated-1212&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/old-1212-migrated-20120404-noarchive&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems in mgmt to reflect the new path and server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If migrateonline does not work, you can try again using migrate - this will result in a brief reboot for the ve.&lt;br /&gt;
Before you try again, make sure of a few things:&lt;br /&gt;
&lt;br /&gt;
Depending on where the migration died, there may be partial data on the dst system in 1 of 2 places:&lt;br /&gt;
(given the example above)&lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
Before you run migrate again, you&#039;ll want to rename the directory so that all data is in 1212.migrated:&lt;br /&gt;
&lt;br /&gt;
 mv /vz/private/1212 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
This way, it will pick up where it left off and transfer only new files.&lt;br /&gt;
&lt;br /&gt;
Likewise, if you want to speed up a migration, you can pre-seed the dst as follows:&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# rsync -avSH /vz/private/1212/ root@10.1.4.64:/vz/private/1212.migrated/&lt;br /&gt;
&lt;br /&gt;
Then when you run migrate or migrateonline, it will only need to move the changed files, and the migration will complete quickly.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline/migrate failures (migrate manually) ===&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say for whatever reason the migration fails. If it fails with [[#migrateonline|migrateonline]], you should try [[#migrate|migrate]] (which will reboot the customer&#039;s ve, so notify them ahead of time).&lt;br /&gt;
&lt;br /&gt;
You may want to run a [[#pre-seeding_a_migration|pre-seed]] rsync to see if you can find the problem. On older virts, we&#039;ve seen this problem due to a large logfile (which you can find and encourage the customer to remove/compress):&lt;br /&gt;
 find / -size +1048576k -exec ls -lh {} \;&lt;br /&gt;
&lt;br /&gt;
You may also see migration failing due to quota issues.&lt;br /&gt;
&lt;br /&gt;
You can try to resolve by copying any quota file into the file you need:&lt;br /&gt;
&lt;br /&gt;
 cp /var/vzquota/quota.1 /var/vzquota/quota.xxx&lt;br /&gt;
&lt;br /&gt;
If it complains about quota running, you should then be able to stop it:&lt;br /&gt;
&lt;br /&gt;
 vzquota off xxxx&lt;br /&gt;
&lt;br /&gt;
If all else fails, migrate to a new VEID (e.g. 1234 becomes 12341).&lt;br /&gt;
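&lt;br /&gt;
Using the &amp;lt;tt&amp;gt;veid:veid:private:root&amp;lt;/tt&amp;gt; mapping syntax shown in the vzmigrate example later on this page, that renumbering looks like this sketch (the destination IP and /vz paths are illustrative; the command is echoed rather than run):&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/sh
# Sketch: migrate VE 1234 to a fresh VEID 12341 on the destination,
# sidestepping quota state tied to the old VEID. The destination IP
# and the /vz paths are illustrative.
SRC_VEID=1234
NEW_VEID=12341
CMD="vzmigrate 10.1.4.64 -r no ${SRC_VEID}:${NEW_VEID}:/vz/private/${NEW_VEID}:/vz/root/${NEW_VEID}"
echo "$CMD"
```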
&lt;br /&gt;
&lt;br /&gt;
If the rsync or [[#migrate|migrate]] fails, you can always move someone manually:&lt;br /&gt;
&lt;br /&gt;
1. stop ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v stop 1234&lt;br /&gt;
&lt;br /&gt;
2. copy over data&amp;lt;br&amp;gt;&lt;br /&gt;
 rsync -avSH /vz/private/1234/ root@1.1.1.1:/vzX/private/1234/&lt;br /&gt;
&lt;br /&gt;
NOTE: if you&#039;ve previously seeded the data (run rsync while the VE was up/running) and this is a subsequent rsync, make sure the last rsync you do (while the VE is not running) has the --delete option, e.g.:&lt;br /&gt;
&lt;br /&gt;
 rsync -avSH --delete /vz/private/1234/ root@1.1.1.1:/vzX/private/1234/&lt;br /&gt;
&lt;br /&gt;
3. copy over conf&amp;lt;br&amp;gt;&lt;br /&gt;
 scp /vzconf/1234.conf root@1.1.1.1:/vzconf&lt;br /&gt;
&lt;br /&gt;
4. on dst, edit the conf to reflect the right vzX dir&amp;lt;br&amp;gt;&lt;br /&gt;
 vi /vzconf/1234.conf&lt;br /&gt;
&lt;br /&gt;
5. on src remove the IPs&amp;lt;br&amp;gt;&lt;br /&gt;
 ipdel 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
6. on dst add IPs &amp;lt;br&amp;gt;&lt;br /&gt;
 ipadd 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
7. on dst, start ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v start 1234&lt;br /&gt;
&lt;br /&gt;
8. Cancel, then archive the ve on the src per the instructions above.&lt;br /&gt;
&lt;br /&gt;
=== migrate src=2.6.0 -&amp;gt; dst&amp;gt;=2.6.0, or mass-migration with customer notify ===&lt;br /&gt;
&lt;br /&gt;
A script called &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; was written to handle this kind of move. It is basically a wrapper for vzmigrate – a util to seamlessly move a ve from one host to another. This wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from the arp/route tables, causing problems when the ve was started up on the dst system; migrate mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrate emails ve owners when their migration starts and stops. For this to happen, they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as it was on the src virt. Note: migrateonline is equivalent to migrate, but will migrate a ve from one 2.6 &#039;&#039;&#039;kernel&#039;&#039;&#039; machine to another 2.6 kernel machine without restarting the ve.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrate&lt;br /&gt;
usage: /usr/local/sbin/migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrate 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which migrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
&lt;br /&gt;
On 2.6.1 you’ll also have to move the private area:&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, [[#cancelve|cancelve]] would offer to remove them. You want to say &#039;&#039;&#039;no&#039;&#039;&#039; to this option – removing them would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes as in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across a filesystem as in the example above will create actual files and consume lots more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, update the system, template (should be pre-selected as the same), and the shut down date.&lt;br /&gt;
&lt;br /&gt;
=== vzmigrate: src=2.6.1 -&amp;gt; dst&amp;gt;=2.6.0 ===&lt;br /&gt;
&lt;br /&gt;
This version of vzmigrate works properly with regard to handling ips. It will not notify ve owners of moves as in the above example. Other than that it’s essentially the same.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]#  vzmigrate 10.1.4.64 -r no 1212:1212:/vz/private/1212:/vz/root/1212&lt;br /&gt;
migrating on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which vzmigrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
[root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, &amp;lt;tt&amp;gt;cancelve&amp;lt;/tt&amp;gt; would offer to remove them. You want to say no to this option – removing them would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes as in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across a filesystem as in the example above will create actual files and consume lots more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, update the system, template (should be pre-selected as the same), and the shut down date.&lt;br /&gt;
&lt;br /&gt;
=== src=2.5.x ===&lt;br /&gt;
&lt;br /&gt;
First, go to the private dir:&lt;br /&gt;
&lt;br /&gt;
 cd /vz1/private/&lt;br /&gt;
&lt;br /&gt;
Stop the VE - make sure it stops totally cleanly.&lt;br /&gt;
 &lt;br /&gt;
 vzctl stop 1212&lt;br /&gt;
&lt;br /&gt;
Then you’d use vemove - a script written to copy over the config, create tarballs of the ve’s data on the destination virt, and cancel the ve on the source system (in this example we’re going to put a ve that was in /vz1/private on the src virt, in /vz/private on the dst virt):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vemove&lt;br /&gt;
ERROR: Usage: vemove veid target_ip target_path_dir&lt;br /&gt;
[root@virt12 sbin]# vemove 1212 10.1.4.64 /vz/private/1212&lt;br /&gt;
tar cfpP - 1212 --ignore-failed-read | (ssh -2 -c arcfour 10.1.4.64 &amp;quot;split - -b 1024m /vz/private/1212.tar&amp;quot; )&lt;br /&gt;
scp /vzconf/1212.conf 10.1.4.64:/vzconf&lt;br /&gt;
cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, cancelve would offer to remove them. You want to say no to this option – removing them would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes as in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across a filesystem as in the example above will create actual files and consume lots more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
When you are done, go to /vz/private on the dst virt; you will have files like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;1212.taraa&lt;br /&gt;
1212.tarab&lt;br /&gt;
1212.tarac&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each one 1024m (or less, for the last one) in size.&lt;br /&gt;
&lt;br /&gt;
cd to that directory on the dst server and run:&lt;br /&gt;
&lt;br /&gt;
 cat 1212.tar?? | tar xpPBf -&lt;br /&gt;
&lt;br /&gt;
and after 20 mins or so it will be totally untarred.  Now since the conf&lt;br /&gt;
file is already there, you can go ahead and start the system.&lt;br /&gt;
&lt;br /&gt;
 vzctl start 1212&lt;br /&gt;
&lt;br /&gt;
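The split/reassemble round trip can be checked in miniature (1k chunks stand in for the real 1024m ones, and local dirs stand in for the ssh hop; the 1212 paths are just the example veid):

```shell
#!/bin/sh
# Miniature of the vemove transfer: tar piped through split on the way out,
# then the chunks reassembled with cat | tar x on the destination.
set -e
tmp=$(mktemp -d); cd "$tmp"
mkdir -p 1212/etc
echo "hello" > 1212/etc/motd
tar cf - 1212 | split -b 1k - 1212.tar   # writes 1212.taraa, 1212.tarab, ...
mkdir restore
cat 1212.tar?? | tar xf - -C restore     # shell glob order matches split order
cmp 1212/etc/motd restore/1212/etc/motd && echo "round trip OK"
cd /; rm -rf "$tmp"
```

The `1212.tar??` glob sorts the suffixes (aa, ab, ac, ...) back into the order split wrote them, which is why a plain `cat` reproduces the original tar stream.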
Update the customer’s systems by clicking the “move” link on the moved system, update the system, template (should be pre-selected as the same), and the shut down date.&lt;br /&gt;
&lt;br /&gt;
NOTE: you MUST tar the system up using the virtuozzo version of tar that&lt;br /&gt;
is on all the virt systems, and further you MUST untar the tarball with&lt;br /&gt;
the virtuozzo tar, using these options:  `&amp;lt;tt&amp;gt;tar xpPBf -&amp;lt;/tt&amp;gt;`&lt;br /&gt;
&lt;br /&gt;
If you tar up an entire VE and move it to a non-virtuozzo machine, that is&lt;br /&gt;
ok, and you can untar it there with normal tar commands, but do not untar&lt;br /&gt;
it and then repack it with a normal tar and expect it to work - you need&lt;br /&gt;
to use virtuozzo tar commands on virtuozzo tarballs to make it work.&lt;br /&gt;
&lt;br /&gt;
The backups are sort of an exception, since we are just (usually)&lt;br /&gt;
restoring user data that was created after we gave them the system, and&lt;br /&gt;
therefore has nothing to do with magic symlinks or vz-rpms, etc.&lt;br /&gt;
&lt;br /&gt;
== Moving a VE on the same virt ==&lt;br /&gt;
&lt;br /&gt;
Easy way:&amp;lt;br&amp;gt;&lt;br /&gt;
Scenario 1: ve 123 is to be renamed 1231 and moved from vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:1231:/vz/private/1231:/vz/root/1231&lt;br /&gt;
&lt;br /&gt;
Scenario 2: ve 123 is to be moved from vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:123:/vz/private/123:/vz/root/123&lt;br /&gt;
&lt;br /&gt;
vzmlocal will reboot the ve at the end of the move&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; don&#039;t forget to fix /usr/local/sbin/backup.config for new disk location&lt;br /&gt;
&lt;br /&gt;
Manual/old way:&lt;br /&gt;
&lt;br /&gt;
1) &amp;lt;tt&amp;gt;vzctl stop 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2) &amp;lt;tt&amp;gt;mv /vz1/private/123 /vz/private/.&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(or cp -a if you want to copy)&lt;br /&gt;
3) in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts/123.conf&amp;lt;/tt&amp;gt; change value&amp;lt;br&amp;gt;&lt;br /&gt;
of &#039;&amp;lt;tt&amp;gt;VE_PRIVATE&amp;lt;/tt&amp;gt;&#039; variable to point to a new private area location&lt;br /&gt;
4) &amp;lt;tt&amp;gt;vzctl start 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
5) update backups if needed: &amp;lt;tt&amp;gt;mvbackups 123 virtX virt1 vz&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
6) update management screens&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notes: a) absolute path to private area is stored in quota file &amp;lt;tt&amp;gt;/var/vzquota/quota.123&amp;lt;/tt&amp;gt; - so during first startup quota will be recalculated.&amp;lt;br&amp;gt;&lt;br /&gt;
b) if you&#039;re going to write some script to do the job, you MUST be sure that $VEID won&#039;t be expanded to &#039;&#039; in the ve config file - i.e. you need to escape &#039;$&#039;. Otherwise you might have:&lt;br /&gt;
&lt;br /&gt;
 VE_PRIVATE=&amp;quot;/vz/private/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in config, and &#039;vzctl destroy&#039; for this VE ID &#039;&#039;&#039;will remove everything under /vz/private/ directory&#039;&#039;&#039;.&lt;br /&gt;
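The expansion pitfall in note b) can be reproduced with a plain heredoc; `$VEID` here is the placeholder vzctl expands later, not a variable the script should touch:

```shell
#!/bin/sh
# Why '$' must be escaped when scripting VE config writes: an unquoted
# heredoc expands $VEID immediately, and if it's unset the path collapses
# to /vz/private/ -- exactly the dangerous config the note warns about.
set -e
tmp=$(mktemp -d)
unset VEID

cat > "$tmp/bad.conf" <<EOF
VE_PRIVATE="/vz/private/$VEID"
EOF
grep VE_PRIVATE "$tmp/bad.conf"    # VE_PRIVATE="/vz/private/"  <- disaster

cat > "$tmp/good.conf" <<'EOF'
VE_PRIVATE="/vz/private/$VEID"
EOF
grep VE_PRIVATE "$tmp/good.conf"   # literal $VEID preserved for vzctl

rm -rf "$tmp"
```

Quoting the heredoc delimiter (`<<'EOF'`) suppresses all expansion; escaping just the dollar sign (`\$VEID`) works too.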
&lt;br /&gt;
== Adding a veth device to a VE ==&lt;br /&gt;
&lt;br /&gt;
Not totally sure what this is, but a customer asked for it and here&#039;s what we did (as instructed by vz support):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;v set 99 --netif_add eth99  --save&lt;br /&gt;
ipdel 99 69.55.230.58&lt;br /&gt;
v set 99 --ifname eth99 --ipadd 69.55.230.58 --save&lt;br /&gt;
v set 99 --ifname eth99 --gateway 69.55.230.1 --save&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active      eth0              veth77.77,veth99.99&lt;br /&gt;
veth_net          active&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
br99             bridge     net99&lt;br /&gt;
veth99.99        veth       net99&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        net99            69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br0 eth0&lt;br /&gt;
&lt;br /&gt;
(will remove 99 from orig net and move to veth_net)&lt;br /&gt;
vznetcfg net addif veth_net veth99.99&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
(delete the old crap)&lt;br /&gt;
vznetcfg net del net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then, to add another device in&lt;br /&gt;
&lt;br /&gt;
v set 77 --netif_add eth77  --save&lt;br /&gt;
ipdel 77 69.55.230.78&lt;br /&gt;
v set 77 --ifname eth77 --ipadd 69.55.230.78 --save&lt;br /&gt;
v set 77 --ifname eth77 --gateway 69.55.230.1 --save&lt;br /&gt;
v set 77 --save --ifname eth77 --network veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif veth_net veth77.77&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth       veth_net&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
another example&lt;br /&gt;
&lt;br /&gt;
v set 1182 --netif_add eth1182  --save&lt;br /&gt;
ipdel 1182 69.55.236.217&lt;br /&gt;
v set 1182 --ifname eth1182 --ipadd 69.55.236.217 --save&lt;br /&gt;
v set 1182 --ifname eth1182 --gateway 69.55.236.1 --save&lt;br /&gt;
vznetcfg net addif veth_net veth1182.1182&lt;br /&gt;
v set 1182 --save --ifname eth1182 --network veth_net&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unused/not working commands:&lt;br /&gt;
ifconfig veth99.0 0&lt;br /&gt;
vznetcfg net list&lt;br /&gt;
vznetcfg br new br99 net99&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg br show&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
&lt;br /&gt;
vznetcfg br new br1182 net1182&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif eth0 net1182&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&lt;br /&gt;
--- 8&amp;lt; ---&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new net&lt;br /&gt;
# vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vzctl set 99 --save --netif_add eth0 (at this stage veth99.0 interface have to appear&lt;br /&gt;
on node)&lt;br /&gt;
# vzctl set 99 --save --ifname eth0 --ipadd 69.55.230.58 (and probably few more arguments&lt;br /&gt;
here - see &#039;man vzctl&#039;)&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Assigning/removing ips from a VE ==&lt;br /&gt;
&lt;br /&gt;
1. Add or remove ips:&lt;br /&gt;
 ipdel 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
 ipadd 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
&lt;br /&gt;
2. update Mgmt screens&lt;br /&gt;
&lt;br /&gt;
3. offer to update any DNS we do for them&lt;br /&gt;
&lt;br /&gt;
4. check to see if we had rules for the old IP in the firewall&lt;br /&gt;
&lt;br /&gt;
== Enabling tun device for a ve ==&lt;br /&gt;
Note, there’s a command for this: [[#addtun|addtun]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
Make sure the tun.o module is already loaded before Virtuozzo is started: &lt;br /&gt;
 lsmod &lt;br /&gt;
Allow the VPS to use the TUN/TAP device: &lt;br /&gt;
 vzctl set 101 --devices c:10:200:rw --save &lt;br /&gt;
Create the corresponding device inside the VPS and set the proper permissions: &lt;br /&gt;
 vzctl exec 101 mkdir -p /dev/net &lt;br /&gt;
 vzctl exec 101 mknod /dev/net/tun c 10 200 &lt;br /&gt;
 vzctl exec 101 chmod 600 /dev/net/tun&lt;br /&gt;
&lt;br /&gt;
== Remaking a system (on same virt) ==&lt;br /&gt;
&lt;br /&gt;
1. [[#cancelve|cancelve]] (or v destroy x - ONLY if you&#039;re POSITIVE no data needs to be saved)&lt;br /&gt;
&lt;br /&gt;
2. [[#vemake|vemake]] using same veid&lt;br /&gt;
&lt;br /&gt;
3. [[#mvbackups|mvbackups]] or [[#vb|vb]] (if new mount point)&lt;br /&gt;
&lt;br /&gt;
4. update mgmt with new dir/ip &lt;br /&gt;
&lt;br /&gt;
5. update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Re-initialize quota for a VE ==&lt;br /&gt;
&lt;br /&gt;
There’s a command for this now: [[#clearquota|clearquota]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
&lt;br /&gt;
vzctl stop 1&lt;br /&gt;
vzquota drop 1&lt;br /&gt;
vzctl start 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Traffic accounting on linux ==&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - all tracking is done via bwdb now. This is how we used to track traffic.&lt;br /&gt;
&lt;br /&gt;
TODO: update for diff versions of vz&lt;br /&gt;
&lt;br /&gt;
Unlike FreeBSD, where we have to add firewall count rules to the system to count the traffic, Virtuozzo counts the traffic for us.  You can see the current traffic stats by running `vznetstat`:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see the VEID is on a line with the in and out bytes.  So, we simply run a cron job:&lt;br /&gt;
&lt;br /&gt;
 4,9,14,19,24,29,34,39,44,49,55,59 * * * * /root/vztrafdump.sh&lt;br /&gt;
&lt;br /&gt;
Just like we do on FreeBSD - this one goes through all the VEs in /vz/private and greps the line from vznetstat that matches them and dumps it in /jc_traffic_dump on their system.  Then it does it again for all the VEs in /vz1/private.  It is important to note that vznetstat runs only once, and the grepping is done from a temporary file that contains that output - we do this because running vznetstat once for each VE that we read out of /vz/private and /vz1/private would take way too long and be too intensive.&lt;br /&gt;
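The core of that approach can be sketched as follows (the real vztrafdump.sh isn't reproduced here; sample vznetstat output is inlined so the sketch runs anywhere):

```shell
#!/bin/sh
# Sketch of the vztrafdump.sh approach described above: capture vznetstat
# output ONCE into a temp file, then pull each ve's line out of that file
# instead of re-running vznetstat per ve.
set -e
stats=$(mktemp)
cat > "$stats" <<'EOF'
24218     1            484M             39M
2451      1           2224M            265M
418       1           1560M             34M
EOF
# In real use the veid list comes from /vz/private/* and /vz1/private/*:
for veid in 24218 418; do
    # exact first-field match, so veid 418 can't also match 24218's line
    awk -v v="$veid" '$1 == v' "$stats"   # would append to the ve's /jc_traffic_dump
done
rm -f "$stats"
```

Matching on the exact first field (rather than a bare grep of the veid) avoids a veid like 418 picking up 24218's line.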
&lt;br /&gt;
You do not need to do anything to facilitate this other than make sure that that cron job is running - the vznetstat counters are always running, and any new VEs that are added to the system will be accounted for automatically.&lt;br /&gt;
&lt;br /&gt;
Traffic resetting no longer works with vz 2.6, so we disable the vztrafdump.sh on those virts.&lt;br /&gt;
&lt;br /&gt;
== Watchdog script ==&lt;br /&gt;
&lt;br /&gt;
On some of the older virts, we have a watchdog running that kills procs that are deemed bad per the following:&lt;br /&gt;
&lt;br /&gt;
/root/watchdog from quar1&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [5-9]...&lt;br /&gt;
  then&lt;br /&gt;
# 50-90%&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;...:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;....m&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 1000min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
  if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [1-9]...&lt;br /&gt;
  then&lt;br /&gt;
# running for 10-90 percent&lt;br /&gt;
    if echo $line | awk &#039;{print $NF}&#039; | egrep &#039;cfusion|counter|vchkpw&#039;&lt;br /&gt;
    then&lt;br /&gt;
&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[2-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# between 2-9min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      elif echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[0-9][0-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# up to 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
    fi&lt;br /&gt;
  fi&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Misc Linux Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space ... when you configure a linux system with a certain amount of disk space (the default is 4gigs) you do not actually use up 4gigs of space on the system.  The diskspace setting for a user is simply a cap, and they only use up as much space on the actual disk drive as they are actually using.&lt;br /&gt;
&lt;br /&gt;
When you create a new linux system, even though there are some 300 RPMs or so installed, if you run `df -k` you will see that the entire 4gig partition is empty - no space is being used.  This is because the files in their system are &amp;quot;magic symlinks&amp;quot; to the template for their OS that is in /vz/template - however, any changes to any of those files will &amp;quot;disconnect&amp;quot; them and they will immediately begin using space in their system.  Further, any new files uploaded (even if those new files overwrite existing files) will take up space on the partition.&lt;br /&gt;
&lt;br /&gt;
=== Cant change file /etc/sysconfig/network ===&lt;br /&gt;
if you see this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt8 root]# vzctl stop 160 ; vzctl start 160&lt;br /&gt;
VE is not running&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is unmounted&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
bash ERROR: Can&#039;t change file /etc/sysconfig/network&lt;br /&gt;
Deleting IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
VE is unmounted&lt;br /&gt;
[root@virt8 root]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
it probably means they no longer have /bin/bash - copy one in for them&lt;br /&gt;
 &lt;br /&gt;
ALSO: another possibility is that they have removed the `ed` RPM from their system - it needs to be reinstalled into their system.  But since their system is down, this is tricky ...&lt;br /&gt;
&lt;br /&gt;
The VE startup scripts used by &#039;vzctl&#039; require the &#039;ed&#039; package to be available inside the VE. So if the &#039;ed&#039; package is enabled in the OS template config and in the OS template itself that VE #827 is based on, this error should be fixed.&lt;br /&gt;
&lt;br /&gt;
It is possible to add an RPM to a VE while it is not running.&lt;br /&gt;
Try the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# cd /vz/template/&amp;lt;OS_template_with_ed_package&amp;gt;/&lt;br /&gt;
# vzctl mount 827&lt;br /&gt;
# rpm -Uvh --root /vz/root/827 --veid 827 ed-0.2-25.i386.vz.rpm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usually there&#039;s an error, but it&#039;s OK.&lt;br /&gt;
&lt;br /&gt;
Note: replace &#039;ed-0.2-25.i386.vz.rpm&#039; in the last command with the actual&lt;br /&gt;
version of the &#039;ed&#039; package you have.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== What template is user using ===&lt;br /&gt;
So how do I know what template the user has? cat their conf file and it is listed in there.  For example, if the conf file has:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1103&lt;br /&gt;
…snip…&lt;br /&gt;
OSTEMPLATE=&amp;quot;debian-3.0/20030822&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;mod_perl-deb30/20030707 mod_ssl-deb30/20030703 mysql-deb30/20030707 proftpd-deb30/20030703 webmin-deb30/20030823 &amp;quot;&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then they are on debian 3.0, all of their system RPMs are in /vz/template/debian-3.0, and they are using version 20030822 of that debian 3.0 template. They’ve also got additional packages installed (mod_perl, mod_ssl, etc).  Those are also found under /vz/template&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Edits needed to run java ===&lt;br /&gt;
&lt;br /&gt;
Edits needed to run java:&lt;br /&gt;
&lt;br /&gt;
When we first created the VEs, the default setting for privvmpages was 93000:94000 ... which was high enough that most people never had problems ... however, you can&#039;t run java or jdk or tomcat or anything java-related with that setting.  We have found that by setting privvmpages to 610000:615000, java runs just fine.  That is now the default setting. It is exceedingly rare that anyone needs it higher than that, although we have seen it once or twice.&lt;br /&gt;
&lt;br /&gt;
Any problems with java at all - the first thing you need to do is see if the failcnt has raised for privvmpages.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzctl start 160&lt;br /&gt;
Starting VE ...&lt;br /&gt;
vzquota : (error) Quota on syscall for 160: Device or resource busy&lt;br /&gt;
Running vzquota on failed for VE 160 [3]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is because your pwd is _in_ their private directory - you can&#039;t start the ve until you cd out&lt;br /&gt;
&lt;br /&gt;
People seem to have trouble with php if they are clueless newbies.  Here are two common problems/solutions:&lt;br /&gt;
&lt;br /&gt;
no... but i figured it out myself. problem was the php.ini file that came&lt;br /&gt;
vanilla with the account was not configured to work with apache (the&lt;br /&gt;
ENGINE directive was set to off).&lt;br /&gt;
&lt;br /&gt;
everything else seems fine now.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
the problem was in the php.ini file.  I noticed that it wasn&#039;t showing&lt;br /&gt;
the code when it was in an html file, so I looked at the php.ini file&lt;br /&gt;
and had to change it so it recognized &amp;lt;? tags as well as &amp;lt;?php tags.&lt;br /&gt;
&lt;br /&gt;
Also, make sure this is added to httpd.conf:&lt;br /&gt;
    AddType application/x-httpd-php .php&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Set time zone ===&lt;br /&gt;
&lt;br /&gt;
You can change the timezone by doing this:&lt;br /&gt;
&lt;br /&gt;
 ln -sf /usr/share/zoneinfo/&amp;lt;zone&amp;gt; /etc/localtime&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;zone&amp;gt; is the zone you want in the /usr/share/zoneinfo/ directory.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Failing shm_open calls ===&lt;br /&gt;
&lt;br /&gt;
Failing shm_open calls:&lt;br /&gt;
&lt;br /&gt;
first, please check if /dev/shm is mounted inside VE.&lt;br /&gt;
&#039;cat /proc/mounts&#039; command should show something like this:&lt;br /&gt;
 tmpfs /dev/shm tmpfs rw 0 0&lt;br /&gt;
&lt;br /&gt;
If /dev/shm is not mounted, you have 2 ways to solve the issue:&lt;br /&gt;
1. execute the following command inside the VE (doesn&#039;t require a VE reboot):&lt;br /&gt;
 mount -t tmpfs none /dev/shm&lt;br /&gt;
2. add the following line to /etc/fstab inside the VE and reboot it:&lt;br /&gt;
 tmpfs         /dev/shm        tmpfs           defaults        0 0&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You can have a mounted but not running ve&lt;br /&gt;
Just:&lt;br /&gt;
 vzctl mount &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Debian can&#039;t get on the network ===&lt;br /&gt;
When a debian sys can’t get on the network, and you try:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set 1046 --ipadd 69.55.227.117&lt;br /&gt;
Adding IP address(es): 69.55.227.117&lt;br /&gt;
Failed to bring up lo.&lt;br /&gt;
Failed to bring up venet0.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They probably removed iproute package, which must be the one from swsoft. To restore:&lt;br /&gt;
&amp;lt;pre&amp;gt;# dpkg -i --veid=1046 --admindir=/vz1/private/1046/root/var/lib/dpkg --instdir=/vz1/private/1046/root/ /vz/template/debian-3.0/iproute_20010824-8_i386.vz.deb&lt;br /&gt;
(Reading database ... 16007 files and directories currently installed.)&lt;br /&gt;
Preparing to replace iproute 20010824-8 (using .../iproute_20010824-8_i386.vz.deb) ...&lt;br /&gt;
Unpacking replacement iproute ...&lt;br /&gt;
Setting up iproute (20010824-8) ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then restart their ve&lt;br /&gt;
&lt;br /&gt;
=== CentOS can&#039;t get on the network ===&lt;br /&gt;
&lt;br /&gt;
When a CentOS system was running but can&#039;t get on the network, they probably updated iproute via yum.&lt;br /&gt;
&lt;br /&gt;
Copy an older version of the iproute rpm to their VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cp -p /root/iproute/iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm /vz2/private/&amp;lt;1527&amp;gt;/fs/root&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enter their ve, install rpm, restart network, and prevent yum from updating iproute.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ve &amp;lt;1527&amp;gt;&lt;br /&gt;
rpm -i --force iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm&lt;br /&gt;
rpm -q iproute&lt;br /&gt;
rpm -e &amp;lt;new iproute rpm&amp;gt;&lt;br /&gt;
service network restart&lt;br /&gt;
echo &amp;quot;exclude=iproute&amp;quot; &amp;gt;&amp;gt;/etc/yum/ip-exclude&lt;br /&gt;
echo &amp;quot;include=/etc/yum/ip-exclude&amp;quot; &amp;gt;&amp;gt;/etc/yum.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
in a ve i do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /&lt;br /&gt;
du -h .&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and get: 483M    .&lt;br /&gt;
&lt;br /&gt;
i do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;bash-2.05a# df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  2.3G  1.7G  56% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
how can this be?&lt;br /&gt;
&lt;br /&gt;
Is it possible that quota file was corrupted somehow? Please try to:   &lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl stop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota drop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota init &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzctl start &amp;lt;VEID&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
How to stop vz from starting after reboot:&lt;br /&gt;
&lt;br /&gt;
 VIRTUOZZO=no &lt;br /&gt;
in &lt;br /&gt;
 /etc/sysconfig/vz&lt;br /&gt;
&lt;br /&gt;
To start: &lt;br /&gt;
 service vz start&lt;br /&gt;
(after setting VIRTUOZZO=yes in /etc/sysconfig/vz)&lt;br /&gt;
&lt;br /&gt;
service vz restart will do some kind of &#039;soft reboot&#039; -- restart all&lt;br /&gt;
VPSes and reload modules without rebooting the node&lt;br /&gt;
&lt;br /&gt;
if you need to shut down all VPSes really really fast, run killall -9 init&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Postfix tip:&lt;br /&gt;
&lt;br /&gt;
You may want to tweak settings: default_process_limit=10&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* uptime/load that vz users see isn’t what we see - it&#039;s their own load&lt;br /&gt;
* Vz overwrites resolv.conf on every boot to what’s in the ve config file. It also overwrites the hostname&lt;br /&gt;
* Quotas in vz: set Gid_limit in the ve config file and restart the ve. The errors leading up to this will be &#039;out of files&#039;, but with no inode or space issues. Set it to 2000 in the conf, and restart the ve&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
== vm ==&lt;br /&gt;
&lt;br /&gt;
To create a new Linux VPS&lt;br /&gt;
&lt;br /&gt;
* get an IP address from mgmt screen&lt;br /&gt;
* run df -h to figure out which disk has enough space&lt;br /&gt;
* run vm with no arguments to get syntax and exact template name&lt;br /&gt;
* use col0xxxx for hostname if they don’t give you a hostname&lt;br /&gt;
* copy over dir, veid, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
 vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; )&lt;br /&gt;
&lt;br /&gt;
== disable ==&lt;br /&gt;
To disable a customer&#039;s Virtuozzo VPS server.&lt;br /&gt;
&lt;br /&gt;
* stop the server&lt;br /&gt;
  vzctl stop &amp;lt;veid&amp;gt;&lt;br /&gt;
* prevent restart on server reboot and user restart via Control Panel&lt;br /&gt;
  vzctl set &amp;lt;veid&amp;gt; --disabled=yes --offline_management=no --save&lt;br /&gt;
&lt;br /&gt;
== cancelve ==&lt;br /&gt;
When a customer cancels all service.&lt;br /&gt;
&lt;br /&gt;
 cancelve &amp;lt;veid&amp;gt;&lt;br /&gt;
this will:&lt;br /&gt;
* stop a ve&lt;br /&gt;
* check for backups (offer to remove them from the backup server &lt;br /&gt;
and the backup.config)&lt;br /&gt;
* rename the private dir&lt;br /&gt;
* check for PTR, provide the commands to reset to default&lt;br /&gt;
* and rename the ve’s config&lt;br /&gt;
* remind you to remove firewall rules&lt;br /&gt;
* remind you to remove DNS entries&lt;br /&gt;
&lt;br /&gt;
== ipadd ==&lt;br /&gt;
 ipadd  &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
adds ip(s) to a ve&lt;br /&gt;
&lt;br /&gt;
== ipdel ==&lt;br /&gt;
 ipdel &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
removes ip(s) from a ve&lt;br /&gt;
&lt;br /&gt;
== vc ==&lt;br /&gt;
 vc &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
display the contents of the vz config file for the &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;cat /vzconf/&amp;lt;veid&amp;gt;.conf&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vl ==&lt;br /&gt;
 vl&lt;br /&gt;
&lt;br /&gt;
List all virtual environments running on this machine.&lt;br /&gt;
&lt;br /&gt;
displays a list of ve #’s, 1 per line (ostensibly for use in a for loop)&lt;br /&gt;
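The for-loop pattern it exists for looks like this; `vl` is mocked here as a shell function printing sample veids so the sketch runs anywhere, and `vzctl status` is only echoed rather than run:

```shell
#!/bin/sh
# The loop pattern vl is meant to feed. On a virt you'd call the real vl
# and run the real vzctl; both are stand-ins here.
vl() { printf '134\n16027\n16055\n'; }
for veid in $(vl); do
    echo "would run: vzctl status $veid"
done
```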
&lt;br /&gt;
== vp ==&lt;br /&gt;
 vp &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
show processes running under this &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzps auxww -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vpe ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vpe &amp;lt;veid&amp;gt; &lt;br /&gt;
this will allow you to do a vp when a ve is running out of control, the equivalent of (deprecated since vp operates outside the VPS): &lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set &amp;lt;veid&amp;gt; --kmemsize 2100000:2200000&lt;br /&gt;
vzctl exec &amp;lt;veid&amp;gt; ps auxw&lt;br /&gt;
vzctl set &amp;lt;veid&amp;gt; --kmemsize (ve’s orig lvalue):(ve’s orig hvalue)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vt ==&lt;br /&gt;
 vt &amp;lt;veid&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vztop -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vr ==&lt;br /&gt;
 vr &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart a &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl stop &amp;lt;veid&amp;gt;; vzctl start &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
You can run this even if the ve is down - the stop command will just fail&lt;br /&gt;
&lt;br /&gt;
== vs ==&lt;br /&gt;
 vs [veid]&lt;br /&gt;
displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve configured on the system (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;)&lt;br /&gt;
If passed an argument, gives the status for just that ve. &lt;br /&gt;
A running system looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 16066 exist mounted running&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running (but does exist) looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 9990 exist unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running and doesn’t exist looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 421 deleted unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs2 ==&lt;br /&gt;
 vs2 [veid]&lt;br /&gt;
this is similar to vs in that it displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve,&lt;br /&gt;
but the difference is that its list comes from doing an ls on the data dirs. This was meant to catch&lt;br /&gt;
the rare case where a ve exists on disk but is not configured. &lt;br /&gt;
&lt;br /&gt;
== vw ==&lt;br /&gt;
 vw [veid]&lt;br /&gt;
displays the output of ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;’ (the equivalent of &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; w&amp;lt;/tt&amp;gt;) for each configured ve (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;). Useful for determining which ve is contributing to a heavily-loaded system.&lt;br /&gt;
If passed an argument, gives ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;’ output for just that ve. &lt;br /&gt;
Ex:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt2 etc]# vw&lt;br /&gt;
134&lt;br /&gt;
 10:52pm  up 79 days,  6:14,  0 users,  load average: 0.02, 0.02, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16027&lt;br /&gt;
  2:52pm  up 7 days, 19:54,  0 users,  load average: 0.00, 0.00, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16055&lt;br /&gt;
  2:52pm  up 79 days,  6:38,  0 users,  load average: 0.00, 0.04, 0.07&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vwe ==&lt;br /&gt;
 vwe [constraint]&lt;br /&gt;
just like &amp;lt;tt&amp;gt;vw&amp;lt;/tt&amp;gt;, but takes a constraint as an argument, and only shows ve’s with loads &amp;gt;= the constraint provided. If no constraint is provided, 1 is used by default&lt;br /&gt;
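The load comparison a vwe-style filter has to do can be sketched like this (the `w` header line is hard-coded sample output; a real script would read it per ve, and the actual vwe implementation may differ):

```shell
#!/bin/sh
# Sketch of a vwe-style filter: pull the 1-minute load average out of a
# w/uptime header and keep the ve only if load >= threshold (default 1).
threshold=${1:-1}
line=' 10:52pm  up 79 days,  6:14,  0 users,  load average: 2.31, 0.02, 0.00'
load=$(printf '%s\n' "$line" | sed 's/.*load average: //' | cut -d, -f1)
# awk handles the floating-point comparison the shell can't:
if awk -v l="$load" -v t="$threshold" 'BEGIN { exit !(l >= t) }'; then
    echo "load $load >= $threshold: show this ve"
fi
```

Delegating the numeric test to awk matters because `[ 2.31 -ge 1 ]` is an error in sh, which only compares integers.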
&lt;br /&gt;
== vzs ==&lt;br /&gt;
 vzs [veid]&lt;br /&gt;
displays the beancounter status for all ve’s, or a particular ve if an argument is passed&lt;br /&gt;
&lt;br /&gt;
== ve ==&lt;br /&gt;
 ve &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
enter a &amp;lt;veid&amp;gt; as root.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl enter &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vx ==&lt;br /&gt;
 vx &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&lt;br /&gt;
&lt;br /&gt;
execute a single command in the &amp;lt;veid&amp;gt; environment.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;/usr/sbin/vzctl exec &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs [count]&lt;br /&gt;
a script which outputs a continuous report (or a certain number of reports if an option is passed) of processes stuck in the D state and which VPS’s those procs belong to.&lt;br /&gt;
&lt;br /&gt;
== setmem ==&lt;br /&gt;
 setmem VEID &amp;lt;256|512|768|1024|1280|1536|2048&amp;gt; [burst GB]&lt;br /&gt;
adjusts the memory resources for the VE. If this is on VZ ver &amp;gt;4, you will also be prompted to provide a burst ram amount. Sample usage:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# setmem 21671 1024 4&lt;br /&gt;
/usr/sbin/vzctl set 21671 --ram 1024m --swap 3072m --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
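The swap figure in the sample can be sanity-checked: judging by the output above, swap appears to be computed as burst minus ram (an inference from this one example, not from setmem's source):

```shell
# Inferred relationship (an assumption based only on the sample above):
#   swap_mb = burst_gb * 1024 - ram_mb
ram_mb=1024
burst_gb=4
swap_mb=$(( burst_gb * 1024 - ram_mb ))
echo "$swap_mb"   # 3072, matching the --swap 3072m in the sample
```

By the same inference, a 2 GB burst on a 512 MB VE would mean 1536m of swap.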
&lt;br /&gt;
== afacheck.sh ==&lt;br /&gt;
 afacheck.sh&lt;br /&gt;
displays the health/status of containers and mirrors on an adaptec card (currently quar1, tempvirt1-2, virt9, virt10)- all others are LSI&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update virt scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be set up as a cronjob to watch processes on a virt when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== findbackuppigs.pl ==&lt;br /&gt;
 findbackuppigs.pl&lt;br /&gt;
looks for files larger than 50MB which customers have asked us to backup. Emails matches&lt;br /&gt;
to linux@johncompanies.com&lt;br /&gt;
&lt;br /&gt;
== gatherlinux.pl ==&lt;br /&gt;
 gatherlinux.pl&lt;br /&gt;
gathers up data about ve’s configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
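A hedged sketch of what gb and gbg plausibly do, given that vb edits /usr/local/sbin/backup.config (see the vb entry); the exact field layout of backup.config, and that the directory is the last whitespace-separated field, are assumptions:

```shell
# Plausible equivalents of gb/gbg. Assumptions: the config path comes from the
# vb entry on this page; the directory being the last field is a guess.
CONF=/usr/local/sbin/backup.config
gb()  { grep "$1" "$CONF"; }                     # full matching lines
gbg() { grep "$1" "$CONF" | awk '{print $NF}'; } # just the directory column
```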
&lt;br /&gt;
== linuxtrafficgather.pl ==&lt;br /&gt;
 linuxtrafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by ve to support@johncompanies.com and payments@johncompanies.com.&lt;br /&gt;
Optional arguments are year and month (must be in the past). If not passed, assumes last month. Relies on &lt;br /&gt;
traffic logs created by netstatreset and netstatbackup&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch.pl ==&lt;br /&gt;
 linuxtrafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo versions &amp;lt;= 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch2.pl ==&lt;br /&gt;
 linuxtrafficwatch2.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo version 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg  - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== mb (linux) ==&lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== migrate ==&lt;br /&gt;
 migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; – vzmigrate is a util to seamlessly move a ve from one host to another. This wrapper was written because virtuozzo version 2.6 had a bug where the ve’s ip(s) on the src system were not properly removed from the arp/route tables; this script mitigates that. Since it makes multiple ssh connections to the target host, it’s a good idea to put the pub key for the src system in the authorized_keys file on the target host. In addition, it emails ve owners when their migration starts and stops (if they place email addresses in a file on their system: /migrate_notify). To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
== migrateonline ==&lt;br /&gt;
 migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is the same as migrate but will migrate a ve in &amp;lt;tt&amp;gt;--online&amp;lt;/tt&amp;gt; mode, which means it won’t be shut down at the end of the migration. This only works when migrating ve’s between 2 machines running a 2.6 kernel (currently tempvirt1-2, virt16-19, virt12). If you get an error that the machine you’re trying to migrate to has a different CPU or features, etc., then you have to edit the script and add the -f switch to the vzmigrate line- you can basically ignore this kind of warning (but never ignore a warning about missing templates on the destination node). NOTE: This edit (if made to migrateonline) will be overwritten by the base script during each night’s backup.&lt;br /&gt;
&lt;br /&gt;
== netstatbackup ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup &lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== netstatbackup2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== netstatreset ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset&lt;br /&gt;
writes traffic count data to a logfile and resets counters to 0. Works on virtuozzo versions 2.5.x &lt;br /&gt;
&lt;br /&gt;
== netstatreset2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatchlinux ==&lt;br /&gt;
 orphanedbackupwatchlinux &lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to &lt;br /&gt;
delete them&lt;br /&gt;
&lt;br /&gt;
== rsync.backup (linux) ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== startvirt.pl ==&lt;br /&gt;
 startvirt.pl&lt;br /&gt;
forks off start ve commands, keeping 6 running at a time. This is not to be used on systems where fastboot is enabled, as it circumvents the benefit of the fastboot. The script will occasionally not exit gracefully and will continue to use up CPU, so it should be watched. Also, don’t exit from the script till you’re sure all ve’s are started – if you do, you need to start them manually and may have to free up locks. On some systems the script doesn’t exit cleanly and you have to ^C out of it. Be careful though- doing so can leave some VE’s in an odd bootup state and you may need to ‘vr’ them manually. You should check which ve’s aren’t running and/or confirm all have started when ^C’ing out of startvirt.&lt;br /&gt;
&lt;br /&gt;
== taskdone (linux) ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was &lt;br /&gt;
executed as the subject&lt;br /&gt;
&lt;br /&gt;
== vb (linux) ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/sbin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vemakeXX ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemakerh9 &lt;br /&gt;
ve create script for RH9 (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian30 &lt;br /&gt;
ve create script for debian 3.0 (Woody) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian31 &lt;br /&gt;
ve create script for debian 3.1 (Sarge) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian40 &lt;br /&gt;
ve create script for debian 4.0 (Etch) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakefedora, vemakefedora2, vemakefedora4, vemakefedora5, vemakefedora6, vemakefedora7&lt;br /&gt;
ve create script for fedora core 1, 2, 4, 5, 6, 7 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakecentos3, vemakecentos4&lt;br /&gt;
ve create script for centos 3, 4 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakesuse, vemakesuse93, vemakesuse100&lt;br /&gt;
ve create script for suse 9.2, 9.3, 10.0 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakeubuntu5, vemakeubuntu606, vemakeubuntu610, vemakeubuntu704&lt;br /&gt;
ve create script for ubuntu 5.10, 6.06, 6.10, 7.04 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
== vemove ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemove &amp;lt;veid&amp;gt; &amp;lt;target_ip&amp;gt; &amp;lt;/vz/private/123&amp;gt;&lt;br /&gt;
this script simplifies the old way of moving ve’s from one system to another - in short moving a ve to or from a virt running virtuozzo &amp;lt; 2.6.x&lt;br /&gt;
It’s the equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;tar cfpP - &amp;lt;veid&amp;gt; --ignore-failed-read | (ssh -2 -c arcfour &amp;lt;target_ip&amp;gt; &amp;quot;split - -b 1024m &amp;lt;/vz/private/123&amp;gt;.tar&amp;quot; )&amp;lt;/tt&amp;gt;&lt;br /&gt;
This should only be used if migrate/vzmigrate can’t be used. &lt;br /&gt;
&lt;br /&gt;
== vim.watchdog ==&lt;br /&gt;
 vim.watchdog &lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu. Works on virtuozzo versions 2.5.x&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog2 ==&lt;br /&gt;
 vim.watchdog2&lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu.&lt;br /&gt;
Works on virtuozzo versions 2.6.x.&lt;br /&gt;
&lt;br /&gt;
== vzmigrate ==&lt;br /&gt;
 vzmigrate &amp;lt;target_ip&amp;gt; -r no &amp;lt;veid&amp;gt;:[dst veid]:[dst /vzX/private/veid]:[dst /vzX/root/veid]&lt;br /&gt;
(this is the raw command “wrapped” by migrate/migrateonline) this will seamlessly move a ve from one host to another. The ve will run for the duration of the migration till the very end, when it’s shut down, its ip moved, and started up on the target system. The filesystem on the src will remain. This should be watched – occasionally the move will time out and leave the system shut down. If target private and root aren’t specified it just puts it in /vz. Only works when both systems are running virtuozzo 2.6.x&lt;br /&gt;
&lt;br /&gt;
== vztrafdump.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions &amp;lt;= 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== vztrafdump2.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump2.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== addtun ==&lt;br /&gt;
 addtun &amp;lt;veid&amp;gt;&lt;br /&gt;
Adds a tun device to the ve.&lt;br /&gt;
&lt;br /&gt;
== bwcap ==&lt;br /&gt;
 bwcap &amp;lt;veid&amp;gt; &amp;lt;kbps&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;bwcap 1234 512&amp;lt;/tt&amp;gt;&lt;br /&gt;
Caps a VE’s bandwidth to the amount given&lt;br /&gt;
&lt;br /&gt;
== setdisk ==&lt;br /&gt;
 setdisk &amp;lt;veid&amp;gt; &amp;lt;diskspace in GB&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;setdisk 1234 5&amp;lt;/tt&amp;gt;&lt;br /&gt;
Gives a VE the given amount of disk space&lt;br /&gt;
&lt;br /&gt;
== vdf ==&lt;br /&gt;
 vdf &amp;lt;veid&amp;gt; &lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; df -h&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vdff ==&lt;br /&gt;
 vdff&lt;br /&gt;
runs a (condensed) vdf for all ve’s in your pwd (must be run from /vz/privateN)&lt;br /&gt;
&lt;br /&gt;
== mvbackups ==&lt;br /&gt;
 mvbackups &amp;lt;veid&amp;gt; &amp;lt;target_machine&amp;gt; (virt1) &amp;lt;target_dir&amp;gt; (vz1)&lt;br /&gt;
moves backups from one location to another on the backup server, and provides you with the option to remove entries from the current backup.config, plus a simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== checkquota ==&lt;br /&gt;
 checkquota&lt;br /&gt;
for all the ve’s in the cwd (run from /vz/private, /vz1/private, etc) reports what vz quota says they’re using and what the actual usage is (as reported by du)&lt;br /&gt;
&lt;br /&gt;
== clearquota ==&lt;br /&gt;
 clearquota &amp;lt;veid&amp;gt;&lt;br /&gt;
Recalculates a ve’s quota, prints out the usage before and after. The equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;vdf &amp;lt;veid&amp;gt;; v stop &amp;lt;veid&amp;gt;; vzquota drop &amp;lt;veid&amp;gt;; v start &amp;lt;veid&amp;gt;; vdf &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs&lt;br /&gt;
Sometimes the servers have a large number of processes stuck in the D state- this script shows (every 3 secs) which VE’s have D procs, which procs&lt;br /&gt;
are stuck and a running average of the top “offenders”&lt;br /&gt;
&lt;br /&gt;
== vzstat ==&lt;br /&gt;
 vzstat&lt;br /&gt;
sort of like top for VZ. sort VEs by CPU usage by pressing &#039;o&#039; and then &#039;c&#039; keys&lt;br /&gt;
&lt;br /&gt;
== stopvirt ==&lt;br /&gt;
 stopvirt&lt;br /&gt;
will stop VEs as fast as it can, 6 at a time. May not exit when complete so you should watch [[#vzstat|vzstat]] in another window.&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2237</id>
		<title>VPS Management</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2237"/>
		<updated>2020-06-26T01:56:54Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Ganeti Install a FreeBSD VPS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Common Problems =&lt;br /&gt;
== Login to any machine without a password ==&lt;br /&gt;
&lt;br /&gt;
This is possible via the use of ssh keys. The process is thus:&lt;br /&gt;
&lt;br /&gt;
1. place the public key for your user (root@mail) in the /root/.ssh/authorized_keys file on the server you wish to login to&lt;br /&gt;
 cat /root/.ssh/id_dsa.pub&lt;br /&gt;
(paste that into authorized_keys on the target server). If the file doesn&#039;t exist, create it.&lt;br /&gt;
&lt;br /&gt;
2. enable root login (usually only applies to FreeBSD). Edit the /etc/ssh/sshd_config on the target server and change:&lt;br /&gt;
&amp;lt;tt&amp;gt;#PermitRootLogin no&amp;lt;/tt&amp;gt;&lt;br /&gt;
to&lt;br /&gt;
&amp;lt;tt&amp;gt;PermitRootLogin yes&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Restart the sshd on the target machine. First, find the sshd process: &lt;br /&gt;
 jailps &amp;lt;hostname&amp;gt; | grep sshd &lt;br /&gt;
or &lt;br /&gt;
 vp &amp;lt;VEID&amp;gt; | grep sshd&lt;br /&gt;
&lt;br /&gt;
Look for the process resembling:&lt;br /&gt;
 root     17296  0.0  0.0  5280 1036 ?        Ss    2011   4:27 /usr/sbin/sshd &lt;br /&gt;
(this is the sshd)&lt;br /&gt;
&lt;br /&gt;
Not:&lt;br /&gt;
 root      6270  0.5  0.0  6808 2536 ?        Ss   14:33   0:00 sshd: root [priv]&lt;br /&gt;
(this is an sshd child- someone already ssh&#039;d in as root)&lt;br /&gt;
&lt;br /&gt;
Restart the sshd: &lt;br /&gt;
 kill -1 &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex:&lt;br /&gt;
 kill -1 17296&lt;br /&gt;
&lt;br /&gt;
You may now ssh in.&lt;br /&gt;
&lt;br /&gt;
Once you&#039;re done, IF you enabled root login, you should repeat steps 2 and 3 to disable root logins.&lt;br /&gt;
&lt;br /&gt;
== Letting someone in who has locked themselves out (killed sshd, lost pwd) ==&lt;br /&gt;
&lt;br /&gt;
There are two ways people frequently lock themselves out - either they forget a password, or they kill off sshd somehow.&lt;br /&gt;
&lt;br /&gt;
These are actually both fairly easy to solve.  First, let&#039;s say someone kills off their sshd, or somehow mangles /etc/ssh/sshd_config such that it no longer lets them in.&lt;br /&gt;
&lt;br /&gt;
Their email may be very short, or it may have all sorts of details about how you should fix sshd_config to let them in ... just ignore all of this. They can fix their own mangled sshd.  Fixing this is very simple.  First, edit the /etc/inetd.conf on their system and uncomment the telnet line:&lt;br /&gt;
&lt;br /&gt;
 telnet stream  tcp     nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
 #telnet stream  tcp6    nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
&lt;br /&gt;
(just leave the tcp6 version of telnet commented)&lt;br /&gt;
&lt;br /&gt;
Then, use jailps to list the processes on their system, and find their inetd process.  Then simply:&lt;br /&gt;
&lt;br /&gt;
 kill -HUP (pid)&lt;br /&gt;
&lt;br /&gt;
where (pid) is the PID of their inetd process.  Now they have telnet running on their system and they can log in and do whatever they need to do.&lt;br /&gt;
&lt;br /&gt;
The only complications that could occur are:&lt;br /&gt;
&lt;br /&gt;
a) their firewall config on our firewall has port 23 blocked, in which case you will need to open that - will be covered in a different lesson.&lt;br /&gt;
&lt;br /&gt;
b) they are not running inetd, so you can&#039;t HUP it.  If this happens, edit their /etc/rc.conf, add the inetd_enable=&amp;quot;YES&amp;quot; line, and then kill&lt;br /&gt;
their jail with /tmp/jailkill.pl - then restart their jail with the jail line from their quad/safe file.  Easy.&lt;br /&gt;
&lt;br /&gt;
If they have forgotten a password,&lt;br /&gt;
&lt;br /&gt;
On 6.x+ you can reset their password with:&lt;br /&gt;
 jexec &amp;lt;jailID from jls&amp;gt; passwd root&lt;br /&gt;
&lt;br /&gt;
Note: the default password for 6.x jails is 8ico2987, for 4.x it is p455agfa&lt;br /&gt;
&lt;br /&gt;
On 4.x, you need to cd to their etc directory&lt;br /&gt;
... for instance:&lt;br /&gt;
&lt;br /&gt;
 cd /mnt/data2/198.78.65.136-col00261-DIR/etc&lt;br /&gt;
&lt;br /&gt;
and run:&lt;br /&gt;
&lt;br /&gt;
 vipw -d .&lt;br /&gt;
&lt;br /&gt;
Then paste in these two lines (there&#039;s a paste with these):&lt;br /&gt;
&lt;br /&gt;
 root:$1$krszPxhk$xkCepSnz3mIikT3vCtJCt0:0:0::0:0:Charlie &amp;amp;:/root:/bin/csh&lt;br /&gt;
 user:$1$Mx9p5Npk$QdMU6c8YQqp2FW2M3irEh/:1001:1001::0:0:User &amp;amp;:/home/user:/bin/sh&lt;br /&gt;
&lt;br /&gt;
overwriting the lines they already have for &amp;quot;user&amp;quot; and &amp;quot;root&amp;quot; - then just tell them that both user and root have been reset to the default password of p455agfa.&lt;br /&gt;
&lt;br /&gt;
For linux, just passwd inside shell or &lt;br /&gt;
 vzctl set &amp;lt;veid&amp;gt; --userpasswd root:p455agfa --save&lt;br /&gt;
&lt;br /&gt;
Starting in 2009 we began giving out randomized passwords for FreeBSD and Linux as the default password. That password is stored with each system in Mgmt. In the event of a reset, you should look it up and reset the password back to it, then refer the customer to the original password from their welcome email- this way we don’t have to send the password again via email (in clear text).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== sendmail can’t be contacted from ext ip (only locally) ==&lt;br /&gt;
&lt;br /&gt;
By default redhat puts this line in sendmail.mc:&lt;br /&gt;
&lt;br /&gt;
 DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
which makes it only answer on localhost.  Comment it out like:&lt;br /&gt;
&lt;br /&gt;
 dnl DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
and then rebuild sendmail.cf with:&lt;br /&gt;
&lt;br /&gt;
 m4 /etc/mail/sendmail.mc &amp;gt; /etc/sendmail.cf&lt;br /&gt;
&lt;br /&gt;
== virt doesn’t properly let go of ve’s ip(s) when moved to another system ==&lt;br /&gt;
&lt;br /&gt;
On virtuozzo 2.6 systems, it&#039;s been observed that when moving ips from one virt to another that sometimes the routing table will not get updated to reflect the removal of the ip addresses.&lt;br /&gt;
&lt;br /&gt;
A recent example was a customer that was moving to a new ve on a new virt and the ip addresses were traded between the two ve&#039;s.  After the trade the two systems were not able to talk to each other.  When looking at the routing table for the old system all the ip addresses were still in the routing table as being local, like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;netstat -rn | grep 69.55.225.149&lt;br /&gt;
69.55.225.149   0.0.0.0         255.255.255.255 UH       40 0          0 venet0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was preventing traffic to the other system from being routed properly.&lt;br /&gt;
The solution is to manually delete the route:&lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.225.149 gw 0.0.0.0&lt;br /&gt;
&lt;br /&gt;
Supposedly, this was fixed in 2.6.1&lt;br /&gt;
&lt;br /&gt;
== sshd on FreeBSD 6.2 segfaults ==&lt;br /&gt;
&lt;br /&gt;
First try to reinstall ssh&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/secure&lt;br /&gt;
cd lib/libssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.sbin/sshd&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.bin/ssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Failing that, find the library that’s messed up:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;ldd /usr/sbin/sshd&lt;br /&gt;
         libssh.so.3 =&amp;gt; /usr/lib/libssh.so.3 (0x280a3000) &lt;br /&gt;
         libutil.so.5 =&amp;gt; /lib/libutil.so.5 (0x280d8000) &lt;br /&gt;
         libz.so.3 =&amp;gt; /lib/libz.so.3 (0x280e4000) &lt;br /&gt;
         libwrap.so.4 =&amp;gt; /usr/lib/libwrap.so.4 (0x280f5000) &lt;br /&gt;
         libpam.so.3 =&amp;gt; /usr/lib/libpam.so.3 (0x280fc000) &lt;br /&gt;
         libbsm.so.1 =&amp;gt; /usr/lib/libbsm.so.1 (0x28103000) &lt;br /&gt;
         libgssapi.so.8 =&amp;gt; /usr/lib/libgssapi.so.8 (0x28112000) &lt;br /&gt;
         libkrb5.so.8 =&amp;gt; /usr/lib/libkrb5.so.8 (0x28120000) &lt;br /&gt;
         libasn1.so.8 =&amp;gt; /usr/lib/libasn1.so.8 (0x28154000) &lt;br /&gt;
         libcom_err.so.3 =&amp;gt; /usr/lib/libcom_err.so.3 (0x28175000) &lt;br /&gt;
         libroken.so.8 =&amp;gt; /usr/lib/libroken.so.8 (0x28177000) &lt;br /&gt;
         libcrypto.so.4 =&amp;gt; /lib/libcrypto.so.4 (0x28183000) &lt;br /&gt;
         libcrypt.so.3 =&amp;gt; /lib/libcrypt.so.3 (0x28276000) &lt;br /&gt;
         libc.so.6 =&amp;gt; /lib/libc.so.6 (0x2828e000) &lt;br /&gt;
         libmd.so.3 =&amp;gt; /lib/libmd.so.3 (0x28373000)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
md5 them and compare to other jail hosts or jails running on the host&lt;br /&gt;
&lt;br /&gt;
for libcrypto reinstall:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/crypto&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Moving a server to a different account in the management system ==&lt;br /&gt;
&lt;br /&gt;
This is a tough one. You have to touch a bunch of tables in the mysql database:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  systems&lt;br /&gt;
  subs&lt;br /&gt;
  ipmap&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
basically, get the sysid and go off that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
update systems set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update subs set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update ipmap set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You then need to rename the server disk space to change the old ID to the new one.&lt;br /&gt;
&lt;br /&gt;
For a jail -&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stopjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;     69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;-DIR 69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;-DIR&lt;br /&gt;
vi /usr/local/jail/bin/quad1       [Update the ID in the quad file]&lt;br /&gt;
startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a virt -&lt;br /&gt;
You will have to &lt;br /&gt;
* stop the virt &lt;br /&gt;
* rename the directories &lt;br /&gt;
* modify the config file /vzconf/&amp;lt;veid&amp;gt;.conf &lt;br /&gt;
* rename the virt conf file &lt;br /&gt;
* restart the new virt.&lt;br /&gt;
&lt;br /&gt;
that ought to do it..&lt;br /&gt;
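The virt steps above can be sketched as a dry run (it only prints each command; the /vz and /vzconf paths follow the conventions used elsewhere on this page, so double-check them before executing anything for real):

```shell
# Dry-run sketch of the virt rename steps above; echoes each command instead
# of executing it. Paths are the /vz and /vzconf conventions from this page.
rename_virt() {
  old=$1; new=$2
  echo "vzctl stop $old"
  echo "mv /vz/private/$old /vz/private/$new"
  echo "mv /vz/root/$old /vz/root/$new"
  echo "sed -i s/$old/$new/g /vzconf/$old.conf"   # update the ID inside the conf
  echo "mv /vzconf/$old.conf /vzconf/$new.conf"   # rename the conf file itself
  echo "vzctl start $new"
}
rename_virt 1234 5678
```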
&lt;br /&gt;
&lt;br /&gt;
== Re-enable a customer account in the management system ==&lt;br /&gt;
&lt;br /&gt;
For example to re-instate col03438, In mysql:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
show tables ;&lt;br /&gt;
select * from customers where cid=&#039;col03438&#039; ;&lt;br /&gt;
update customers set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from  systems where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set cancel_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from subs where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set status=1 where cid=&#039;col03438&#039; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ganeti VPS =&lt;br /&gt;
&lt;br /&gt;
This is how to create a VPS on the Lamphost ganeti cluster.&lt;br /&gt;
I have created scripts for creating a FreeBSD VPS and one for&lt;br /&gt;
creating a Linux VPS.&lt;br /&gt;
&lt;br /&gt;
== Ganeti Install a Linux VPS ==&lt;br /&gt;
&lt;br /&gt;
To create a linux VPS use&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/new &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
( for example /root/new vps.example.com 30 2 69.55.238.10 centos-8 )&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to customize the install by adding a &amp;quot;jci&amp;quot; account and a customer&lt;br /&gt;
account (col0XXXX) as well.  Make sure the customer acct (col0XXXX) is&lt;br /&gt;
in a group to allow sudo.&lt;br /&gt;
&lt;br /&gt;
== Ganeti Install a FreeBSD VPS ==&lt;br /&gt;
&lt;br /&gt;
To create a FreeBSD VPS, we have to do a full install.  This will &lt;br /&gt;
create a new VPS. On gn1, do&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/newfreebsd12 &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will then have to do the install via VNC.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance info &amp;lt;name&amp;gt; | grep -i connection&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Use your favorite VNC client (I use Tiger VNC) to connect. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You need to customize the install by adding a &amp;quot;jci&amp;quot; account and a customer &lt;br /&gt;
account (col0XXXX) as well. Make sure the customer acct (col0XXXX) is in a &lt;br /&gt;
group to allow sudo. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
You will need to install sudo.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
pkg install sudo    # install sudo package&lt;br /&gt;
visudo              # to add wheel account to sudoers&lt;br /&gt;
vi /etc/group       # add jci and user acct to wheel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You will also have to set up a console in /etc/ttys.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vi /etc/ttys&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Ganeti restart a VPS ==&lt;br /&gt;
&lt;br /&gt;
To restart a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance reboot &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Ganeti shutdown a VPS ==&lt;br /&gt;
&lt;br /&gt;
To shut down a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance shutdown &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To permanently delete a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance remove &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Ganeti Re-Install a VPS ==&lt;br /&gt;
&lt;br /&gt;
Re-installing the OS will destroy all data on the VPS.&lt;br /&gt;
Login to the Ganeti master node (currently gn1).&lt;br /&gt;
&lt;br /&gt;
To reinstall the OS on a VPS, the VPS must be shut down.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance shutdown &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Do the reinstall.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance reinstall &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance startup &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to do the customization for the VPS.  This is&lt;br /&gt;
basically the same as the Dedicated server customization.&lt;br /&gt;
&lt;br /&gt;
 https://69.55.229.8/index.php/New_Signups#Dedicated_Server_Install_Checklist&lt;br /&gt;
&lt;br /&gt;
== Ganeti login to a VPS ==&lt;br /&gt;
&lt;br /&gt;
To login to a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance console &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;carriage return&amp;gt;&lt;br /&gt;
&amp;lt;carriage return&amp;gt;&lt;br /&gt;
&amp;lt;carriage return&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Use jci and appropriate Ganeti jci password.&lt;br /&gt;
&lt;br /&gt;
or you can ssh to the VPS name or IP address.&lt;br /&gt;
&lt;br /&gt;
= FreeBSD 10.1 VPS (jail5 and jail6) =&lt;br /&gt;
&lt;br /&gt;
This is information on how to manage the FreeBSD 10.1 VPS servers on Jail5.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 virtmake IP[,IP] &amp;lt;CID&amp;gt; hostname ipfw# email {FSP|FB|FS|FM|FP} [size in GB]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
For now, the last 2 digits of the CID must be unique.&lt;br /&gt;
&lt;br /&gt;
Must do a &amp;quot;vm start &amp;lt;CID&amp;gt;&amp;quot; after creating a VPS.&lt;br /&gt;
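The last-2-digits rule can be illustrated with a small check; cid_conflicts and its argument list are hypothetical helpers for illustration, not part of virtmake:

```shell
# Hypothetical helper illustrating the constraint above: the new CID must not
# share its last 2 digits with any existing CID on the system.
last2() { printf '%s\n' "$1" | sed 's/.*\(..\)$/\1/'; }
cid_conflicts() {   # usage: cid_conflicts NEWCID EXISTING_CID...
  new=$(last2 "$1"); shift
  for c in "$@"; do
    if [ "$(last2 "$c")" = "$new" ]; then
      echo "conflict: $c"
      return 0
    fi
  done
  return 1
}
cid_conflicts col03412 col01114 col02212   # prints: conflict: col02212
```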
&lt;br /&gt;
== Starting VPS ==&lt;br /&gt;
To start all vms on the system&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To start a single VPS &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== List running VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm status&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Attach to a user VPS console ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm attach &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To detach from a console, type tilde control-d (~ ctl-d).&lt;br /&gt;
&lt;br /&gt;
== Stop a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Restart a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm restart &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Modify a VPS ==&lt;br /&gt;
&lt;br /&gt;
To modify a VPS config, stop the VPS&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Mount the VPS data file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm mount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
cd /data/vm/private/&amp;lt;CID&amp;gt;/mount&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Modify the user files necessary.&lt;br /&gt;
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm umount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Notes on a new FBSD 10.x VPS (jail6) ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#! /bin/sh&lt;br /&gt;
#&lt;br /&gt;
#  new  &amp;lt;colo#&amp;gt;  &amp;lt;disk&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
memory=1024&lt;br /&gt;
&lt;br /&gt;
x=&amp;quot;$1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
disk=&amp;quot;$2&amp;quot;&lt;br /&gt;
if [ &amp;quot;y&amp;quot; = &amp;quot;y$disk&amp;quot; ] ; then&lt;br /&gt;
    disk=60&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
disk2=`expr $disk - 4`&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vm image provision 04dcdf96-d843-11e6-b131-001e67fcc64f col0$x&lt;br /&gt;
# vm configure col0$x&lt;br /&gt;
sed -e &amp;quot;s/memory=256M/memory=${memory}M/&amp;quot; /vms/col01114/col01114.conf &amp;gt;/vms/col0$x/col0$x.conf&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
# vm start col0$x&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
exit&lt;br /&gt;
 &lt;br /&gt;
vm console col0$x&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
# to increase the customer disk&lt;br /&gt;
#&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
&lt;br /&gt;
##  on the customer VPS&lt;br /&gt;
&lt;br /&gt;
gpart recover vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapoff /dev/vtbd0p3&lt;br /&gt;
gpart delete -i 3 vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
sysctl kern.geom.debugflags=16&lt;br /&gt;
gpart resize -i 2 -a 4k -s $disk2\G vtbd0&lt;br /&gt;
&lt;br /&gt;
growfs /dev/vtbd0p2&lt;br /&gt;
&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
gpart add -t freebsd-swap -a 4k vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapon /dev/vtbd0p3&lt;br /&gt;
&lt;br /&gt;
df -h&lt;br /&gt;
reboot&lt;br /&gt;
&lt;br /&gt;
vi /etc/rc.conf&lt;br /&gt;
  hostname&lt;br /&gt;
  IP Addr&lt;br /&gt;
passwd root&lt;br /&gt;
passwd user&lt;br /&gt;
clear history&lt;br /&gt;
  cp /dev/null /var/log/utx.log&lt;br /&gt;
  cp /dev/null ~root/.history&lt;br /&gt;
  history -c&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= FreeBSD Jails =&lt;br /&gt;
&lt;br /&gt;
== Starting jails: Quad/Safe Files ==&lt;br /&gt;
&lt;br /&gt;
FreeBSD customer systems do not start up automatically at boot time.  When one of our FreeBSD machines boots up, it boots up and does nothing else. To start jails, we put the commands to start each jail into a shell script(s) and run the script(s). Jail startup is something that needs to be actively monitored, which is why we don’t just run the script automatically. More on monitoring later.&lt;br /&gt;
&lt;br /&gt;
NOTE: &amp;gt;=7.x we have moved to 1 quad file: &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. Startups are not done by running each quad, but rather [[#startalljails|startalljails]] which relies on the contents of &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. The specifics of this are lower in this article. What follows here applies for pre 7.x systems.&lt;br /&gt;
&lt;br /&gt;
There are eight files in &amp;lt;tt&amp;gt;/usr/local/jail/rc.d&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail3# ls /usr/local/jail/rc.d/&lt;br /&gt;
quad1   quad2   quad3   quad4   safe1   safe2   safe3   safe4&lt;br /&gt;
jail3#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
four quad files and four safe files.&lt;br /&gt;
&lt;br /&gt;
Each file contains an even share of the jail startup blocks (roughly the total number of jails divided by 4).&lt;br /&gt;
 &lt;br /&gt;
The reason for this is, if we make one large script to startup all the systems at boot time, it will take too long - the first system in the script will start up right after system boot, which is great, but the last system may not start for another 20 minutes.&lt;br /&gt;
&lt;br /&gt;
Since there is no way to parallelize this during the startup procedure, we simply open four terminals (in screen window 9) and run each script, one in each terminal. This way they all run simultaneously, and the very last system in each startup script gets started in a quarter of the time it would take if there were one large file.&lt;br /&gt;
&lt;br /&gt;
The files are generally organized so that quad/safe 1&amp;amp;2 have only jails from disk 1, and quad/safe 3&amp;amp;4 have jails from disk 2. This helps ensure that only 2 fscks on any disk are going on at once. Further, they are balanced so that all quad/safe’s finish executing around the same time. We do this by making sure each quad/safe has a similar number of jails  and represents a similar number of inodes (see js).&lt;br /&gt;
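One way to eyeball the balance is to count the jail start blocks in each quad/safe file. A minimal sketch (sample contents are inlined here; on a real host you would point this at the files in /usr/local/jail/rc.d):&lt;br /&gt;

```shell
# Count lines that start a jail; each start block has exactly one such line.
countblocks() { grep -c "^jail " "$1"; }

# Build a tiny sample quad file (hypothetical jails) instead of reading rc.d.
q=$(mktemp)
printf "%s\n" \
    "jail /mnt/data1/69.55.228.1-col00001-DIR a.example.com 69.55.228.1 /bin/sh /etc/rc" \
    "jail /mnt/data1/69.55.228.2-col00002-DIR b.example.com 69.55.228.2 /bin/sh /etc/rc" > "$q"

countblocks "$q"
```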
&lt;br /&gt;
The other, very important reason we do it this way, and this is the reason there are quad files and safe files, is that in the event of a system crash, every single vn-backed filesystem that was mounted at the time of system crash needs to be fsck&#039;d.  However, fsck&#039;ing takes time, so if we shut the system down gracefully, we don&#039;t want to fsck.&lt;br /&gt;
&lt;br /&gt;
Therefore, we have two sets of scripts - the four quad scripts are identical to the four safe scripts except for the fact that the quad scripts contain fsck commands for each filesystem.&lt;br /&gt;
&lt;br /&gt;
So, if you shut a system down gracefully, start four terminals and run safe1 in window one, and safe2 in window 2, and so on.&lt;br /&gt;
 &lt;br /&gt;
If you crash, start four terminals (or go to screen window 9) and run quad1 in window one, and quad2 in window 2, and so on.&lt;br /&gt;
&lt;br /&gt;
Here is a snip of (a 4.x version) quad2 from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
fsck -y /dev/vn16&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#fsck -y /dev/vn28&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
fsck -y /dev/vn22&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#fsck -y /dev/vn15&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, two of the systems specified are commented out - presumably those customers cancelled, or were moved to new servers.&lt;br /&gt;
&lt;br /&gt;
Note that the vnconfig line is the simpler command line, not the longer one that was used when the filesystem was first configured.  All that is done is: vnconfig the filesystem, then fsck it, then mount it. The fourth command is the `jail` command used to start the system - but that will be covered later.&lt;br /&gt;
&lt;br /&gt;
Here is the safe2 file from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, it is exactly the same, but it does not have the fsck lines.&lt;br /&gt;
&lt;br /&gt;
Take a look at the last entry - note that the file is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
&lt;br /&gt;
and the mount point is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
&lt;br /&gt;
This is the general format on all the FreeBSD systems.  The file is always named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber&lt;br /&gt;
&lt;br /&gt;
and the directory is named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber-DIR&lt;br /&gt;
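Since the convention is purely name-based, the pieces can be pulled apart with plain shell string operations. A small sketch, using the example path from above:&lt;br /&gt;

```shell
f="/mnt/data2/69.55.238.5-col00106"   # the vn-backed file
dir="${f}-DIR"                        # its mount point
name="${f##*/}"                       # strip the /mnt/dataN/ prefix
ip="${name%-*}"                       # 69.55.238.5
cust="${name##*-}"                    # col00106
echo "$dir $ip $cust"
```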
&lt;br /&gt;
If you run safe when you need a fsck, the mount will fail and jail will fail:&lt;br /&gt;
&lt;br /&gt;
 # mount /dev/vn1c /mnt/data2/jails/65.248.2.131-ns1.kozubik.com-DIR&lt;br /&gt;
 mount: /dev/vn1c: Operation not permitted&lt;br /&gt;
&lt;br /&gt;
No reboot needed - just run the quad script.&lt;br /&gt;
&lt;br /&gt;
Starting with 6.x jails, we added block delimiters to the quad/safe files, the block looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: nuie.solaris.mu&#039;&lt;br /&gt;
fsck -y /dev/concat/v30v31a&lt;br /&gt;
mount /dev/concat/v30v31a /mnt/data1/69.55.228.218-col01441-DIR&lt;br /&gt;
mount_devfs devfs /mnt/data1/69.55.228.218-col01441-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.228.218-col01441-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.228.218-col01441-DIR nuie.solaris.mu 69.55.228.218 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: nuie.solaris.mu&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These are more than just informative when running quad/safe’s, the echo lines MUST be present for certain tools to work properly. So it’s important that any updates to the hostname also be updated on the 2 echo lines. For example, if you try to startjail a jail with a hostname which is on the jail line but not the echo lines, the command will return with host not found.&lt;br /&gt;
&lt;br /&gt;
=== FreeBSD 7.x+ notes ===&lt;br /&gt;
&lt;br /&gt;
Starting with the release of FreeBSD 7.x, we are doing jail startups in a slightly different way. First, there is only 1 file: &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/quad1&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are no other quads or corresponding safe files. The reason for this is twofold: 1. We can pass -C to fsck, which tells it to skip the fsck if the filesystem is clean (no more need for safe files); 2. We have a new startup script which can be launched multiple times, running in parallel to start jails, where quad1 is the master jail file. &lt;br /&gt;
Quad1 could still be run as a shell script, but it would take a very long time to run completely, so it’s not advisable; if you do, you should break it down into smaller chunks (like quad1, quad2, quad3, etc.)&lt;br /&gt;
&lt;br /&gt;
Here is a snip of (a 7.x version) quad1 from jail2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: projects.tw.com&#039;&lt;br /&gt;
mdconfig -a -t vnode -f /mnt/data1/69.55.230.46-col01213 -u 50&lt;br /&gt;
fsck -Cy /dev/md50c&lt;br /&gt;
mount /dev/md50c /mnt/data1/69.55.230.46-col01213-DIR&lt;br /&gt;
mount -t devfs devfs /mnt/data1/69.55.230.46-col01213-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.230.46-col01213-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.230.46-col01213-DIR projects.tw.com 69.55.230.46 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: projects.tw.com&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cancelled jails are no longer commented out and stored in quad1, rather they’re moved to &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/deprecated&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
To start these jails, start the 4 ssh sessions as you would for a normal crash and then instead of running quad1-4, instead run startalljails in each window. IMPORTANT- before running startalljails you should make sure you ran preboot once as it will clear out all the lockfiles and enable startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== Problems with the quad/safe files ==&lt;br /&gt;
&lt;br /&gt;
When you run the quad/safe files, there are two problems that can occur - either a particular system will hang during initialization, OR a system will spit out output to the screen, impeding your ability to do anything.  Or both.&lt;br /&gt;
&lt;br /&gt;
First off, when you start a jail, you see output like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;Skipping disk checks ...&lt;br /&gt;
adjkerntz[25285]: sysctl(put_wallclock): Operation not permitted&lt;br /&gt;
Doing initial network setup:.&lt;br /&gt;
ifconfig: ioctl (SIOCDIFADDR): permission denied&lt;br /&gt;
lo0: flags=8049&amp;lt;UP,LOOPBACK,RUNNING,MULTICAST&amp;gt; mtu 16384&lt;br /&gt;
Additional routing options: TCP keepalive=YESsysctl:&lt;br /&gt;
net.inet.tcp.always_keepalive: Operation not permitted.&lt;br /&gt;
Routing daemons:.&lt;br /&gt;
Additional daemons: syslogd.&lt;br /&gt;
Doing additional network setup:.&lt;br /&gt;
Starting final network daemons:.&lt;br /&gt;
ELF ldconfig path: /usr/lib /usr/lib/compat /usr/X11R6/lib /usr/local/lib&lt;br /&gt;
a.out ldconfig path: /usr/lib/aout /usr/lib/compat/aout /usr/X11R6/lib/aout&lt;br /&gt;
Starting standard daemons: inetd cron sshd sendmail sendmail-clientmqueue.&lt;br /&gt;
Initial rc.i386 initialization:.&lt;br /&gt;
Configuring syscons: blanktime.&lt;br /&gt;
Additional ABI support:.&lt;br /&gt;
Local package initialization:.&lt;br /&gt;
Additional TCP options:.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, let&#039;s look at this line, near the end:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization:.&lt;br /&gt;
&lt;br /&gt;
This is where a list of daemons that are set to start at boot time will show up.  You might see something like:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache sendmail sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
Or something like this:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: postgres postfix apache&lt;br /&gt;
&lt;br /&gt;
The problem is that many systems (about 4-5 per machine) will hang on that line.  Basically it will get partway through the total daemons to be started:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache&lt;br /&gt;
&lt;br /&gt;
and will just sit there.  Forever.&lt;br /&gt;
&lt;br /&gt;
Fortunately, pressing ctrl-c will break out of it.  Not only will it break out of it, but it will also continue on that same line and start the other daemons:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache ^c sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
and then continue on to finish the startup, and then move to the next system to be started.&lt;br /&gt;
&lt;br /&gt;
So what does this mean?  It means that if a machine crashes, and you start four screen-windows to run four quads or four safes, you need to periodically cycle between them and see if any systems are stuck at that point, causing their quad/safe file to hang.  A good rule of thumb is: if you see a system at that point in the startup, give it another 100 seconds - if it is still at the exact same spot, hit ctrl-c. It&#039;s also a good idea to go back into the quad file (just before the first command in the jail startup block) and note that this jail tends to need a control-c or more time, as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;### NOTE ### slow sendmail&#039;&lt;br /&gt;
echo &#039;### NOTE ###: ^C @ Starting sendmail.&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NEVER&#039;&#039;&#039; hit ctrl-c repeatedly if you don&#039;t get an immediate response - that will cause the following jail’s startup commands to be aborted.&lt;br /&gt;
&lt;br /&gt;
A second problem that can occur is that a jail - maybe the first one in that particular quad/safe, maybe the last one, or maybe one in the middle, will start spitting out status or error messages from one of its init scripts.  This is not a problem - basically, hit enter a few times and see if you get a prompt - if you do get a prompt, that means that the quad/safe script has already completed.  Therefore it is safe to log out (and log out of the user that you su&#039;d from) and then log back in (if necessary).&lt;br /&gt;
&lt;br /&gt;
The tricky thing is, if a system in the middle starts flooding with messages, and you hit enter a few times and don&#039;t get a prompt.  Are you not getting a prompt because some subsequent system is hanging at the initialization, as we discussed above?  Or are you not getting a prompt because that quad file is currently running an fsck?  Usually you can tell by scrolling back in screen’s history to see what it was doing before you started getting the messages.&lt;br /&gt;
&lt;br /&gt;
If you don’t get clues from history, you have to use your judgement - instead of giving it 100 seconds to respond, perhaps give it 2-3 mins ... if you still get no response (no prompt) when you hit enter, hit ctrl-c.  However, be aware that you might still be hitting ctrl-c in the middle of an fsck.  This means you will get an error like &amp;quot;filesystem still marked dirty&amp;quot; and then the vnconfig for it will fail and so will the jail command, and the next system in the quad file will then start starting up.&lt;br /&gt;
&lt;br /&gt;
If this happens, just wait until the end of all the quad files have finished, and start that system manually.&lt;br /&gt;
&lt;br /&gt;
If things really get weird, like a screen flooded with errors, and you can&#039;t get a prompt, and ctrl-c does nothing, then you need to just eventually (give it ten mins or so) just kill that window with ctrl-p, then k, and then log in again and manually check which systems are now running and which aren&#039;t, and manually start up any that are not.&lt;br /&gt;
&lt;br /&gt;
Don&#039;t EVER risk running a particular quad/safe file a second time.&lt;br /&gt;
If the quad/safe script gets executed twice, reboot the machine immediately.&lt;br /&gt;
&lt;br /&gt;
So, for all the above reasons, anytime a machine crashes and you run all the quads or all the safes, &#039;&#039;&#039;always&#039;&#039;&#039; check every jail afterwards to make sure it is running - even if you have no hangs or complications at all.&lt;br /&gt;
Run this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailpsall|jailpsall]]&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Note: [[#postboot|postboot]] also populates ipfw counts, so it &#039;&#039;&#039;should not be run multiple times&#039;&#039;&#039;,  use &amp;lt;tt&amp;gt;jailpsall&amp;lt;/tt&amp;gt; for subsequent extensive ps’ing&lt;br /&gt;
&lt;br /&gt;
And make sure they all show as running.  If one does not show as running, check its /etc/rc.conf file to see if maybe it is using a different hostname before starting it manually.&lt;br /&gt;
&lt;br /&gt;
One thing we have implemented to alleviate these startup hangs and noisy jails, is to put jail start blocks that are slow or hangy at the bottom of the safe/quad file. Further, for each bad jail we note in each quad/safe just before the start block something like:&lt;br /&gt;
&lt;br /&gt;
 echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
&lt;br /&gt;
That way we’ll be prepared to ^C when we see that message appear during the quad/safe startup process. If you observe a new, undocumented hang, &#039;&#039;&#039;after&#039;&#039;&#039; the quad/safe has finished, place a line similar to the above in the quad file, move the jail start block to the end of the file, then run [[#buildsafe|buildsafe]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 4.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which partition, IP to put it on, find an unused vn, and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vnX hostname shorthost quad/safe# ipfw# email [gb disk, default 4]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail14# /tmp/jailmake 69.55.226.152 /mnt/data2/69.55.226.152-col00182 vn23 negev.cerastes.org negev 4 182 cerastes@cerastes.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 6.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which gvinum vols are free (and which mnt (disk) each gvinum goes with/to), IP to put it on and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vN[,vN] hostname shorthost quad/safe# ipfw# email&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail19# jailmake 69.55.236.17 /mnt/data1/69.55.236.17-col01574 v10 uncle-leo.commadev.com uncle-leo 1 1574 lee@commadev.com,lee@gmail.com&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Changing an ip for freebsd VPS ==&lt;br /&gt;
&lt;br /&gt;
*stopjail (hostname)&lt;br /&gt;
*on systems using vnfile or mdfile: rename vnfile/mdfile, using new ip&lt;br /&gt;
*rename dir using new ip&lt;br /&gt;
*Edit quad (make sure to update all lines: directory, vn/mdfile)&lt;br /&gt;
*buildsafe (not necessary on systems having only quad1)&lt;br /&gt;
*if you&#039;re adding an ip not already on the host machine: &amp;lt;tt&amp;gt;ipadd x.x.x.x&amp;lt;/tt&amp;gt;&lt;br /&gt;
*startjail (hostname)&lt;br /&gt;
*if backups: &amp;lt;tt&amp;gt;mvbackups&amp;lt;/tt&amp;gt; ...&lt;br /&gt;
*Edit dir/ip in mgmt&lt;br /&gt;
*if any rules, update any firewall rules to use the new ip (be careful: make sure there aren&#039;t multiple rules per ipfw# - search by colo#)&lt;br /&gt;
*(if customer asks and has domains) update their domains on ns1c&lt;br /&gt;
*optional: update ipfw counters on the host server&lt;br /&gt;
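The two rename steps above can be sketched as follows, using a scratch directory in place of /mnt/dataN (the IPs and colo number are hypothetical):&lt;br /&gt;

```shell
old=69.55.236.17; new=69.55.230.40; cust=col01574
base=$(mktemp -d)                        # stand-in for /mnt/dataN
: > "$base/$old-$cust"                   # pretend vn/md file
mkdir "$base/$old-$cust-DIR"             # pretend mount dir

mv "$base/$old-$cust" "$base/$new-$cust"           # rename the vn/md file
mv "$base/$old-$cust-DIR" "$base/$new-$cust-DIR"   # rename the mount dir
ls "$base"
```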
&lt;br /&gt;
&lt;br /&gt;
== Rename a gconcat vol ==&lt;br /&gt;
&lt;br /&gt;
*stop jail, unmount everything&lt;br /&gt;
*gconcat stop vol&lt;br /&gt;
*gconcat clear /dev/gvinum/vN&lt;br /&gt;
*gconcat clear /dev/gvinum/vN+1&lt;br /&gt;
*gconcat label -v newname /dev/gvinum/vN /dev/gvinum/vN+1&lt;br /&gt;
*bsdlabel /dev/concat/newname&lt;br /&gt;
(make sure a partition is not fstype unused, if so change to: 4.2BSD)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Remaking a VPS (on same jail) ==&lt;br /&gt;
&lt;br /&gt;
# take him down (stopjail)&lt;br /&gt;
# make note of host, vn/gvin/md, ip&lt;br /&gt;
# remove from quad/safe&lt;br /&gt;
# move vnfile (rename)&lt;br /&gt;
# remove dir (jailmake will make a new one)&lt;br /&gt;
# remove ipfw counts&lt;br /&gt;
# jailmake&lt;br /&gt;
# update backup script&lt;br /&gt;
# update db with new dir/ip (use the “move” link, mark old as stopped today)&lt;br /&gt;
# update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (FreeBSD) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all jails back up as soon as possible. Note the time, you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there’s any kernel error messages output. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log. If there are no messages, the machine may just be really busy- wait a bit (5-10min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. Note, if you see messages about swap space exhausted, the server is obviously out of memory; however, it may recover briefly enough for you to get a jtop in to see who&#039;s launched a ton of procs (most likely) and then issue a quick jailkill to get it back under control.&lt;br /&gt;
&lt;br /&gt;
If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. if you can’t ssh into the DRAC card (as root, using the standard root pass) and issue&lt;br /&gt;
 racadm serveraction hardreset&lt;br /&gt;
), then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console:&lt;br /&gt;
 tip jailX&lt;br /&gt;
immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts- look out for the RAID card output- does everything look healthy? The output may be scrambled, look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple times during the boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly then you won’t get any console output, 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
IMPORTANT NOTE: on some older FreeBSD systems, there will be no output to the video (KVM) console as it boots up. The console output is redirected to the serial port ... so if a jail crashes, and you attach a kvm, the output during the bootup procedure will not be shown on the screen. However, when the bootup is done, you will get a login prompt on the screen and will be able to log in as normal.  &amp;lt;tt&amp;gt;/boot/loader.conf&amp;lt;/tt&amp;gt; is where serial console redirect output lives, so comment that if you want to catch output on kvm.&lt;br /&gt;
On newer systems it sends most output to both locations. &lt;br /&gt;
&lt;br /&gt;
=== Assess the health of the server ===&lt;br /&gt;
Once the server boots up fully, you should be able to ssh in. Look around- make sure all the mounts are there and reporting the correct size/usage (i.e. /mnt/data1 /mnt/data2 /mnt/data3 - look in /etc/fstab to determine which mount points should be there), check to see if RAID mirrors are healthy. See [[RAID_Cards#Common_CLI_commands_.28megacli.29|megacli]], [[#aaccheck|aaccheck]]&lt;br /&gt;
&lt;br /&gt;
Before you start the jails, you need to run [[#preboot|preboot]]. This will do some assurance checks to make sure things are prepped to start the jails. Any issues that come out of preboot need to be addressed before starting jails.&lt;br /&gt;
&lt;br /&gt;
=== Start jails ===&lt;br /&gt;
[[#Starting_jails:_Quad.2FSafe_Files|More on starting jails]]&lt;br /&gt;
Customer jails (the VPSs) do not start up automatically at boot time. When a FreeBSD machine boots up, it boots up and does nothing else. To start jails, we put the commands to start each jail into a shell script(s) and run the script(s). Jail startup is something that needs to be actively monitored, which is why we don’t just run the script automatically. &lt;br /&gt;
&lt;br /&gt;
In order to start jails, we run the quad files: quad1 quad2 quad3 and quad4 (on new systems there is only quad1). If the machine was cleanly rebooted- which wouldn&#039;t be the case if this was a crash, you may run the safe files (safe1 safe2 safe3 safe4) in lieu of quads. &lt;br /&gt;
&lt;br /&gt;
Open up 4 logins to the server (use the windows in [[Screen#Screen_Organization|a9]])&lt;br /&gt;
In each of the 4 windows you will:&lt;br /&gt;
&lt;br /&gt;
If there is a [[#startalljails|startalljails]] script (and only quad1), run that command in each of the 4 windows. It will parse through the quad1 file and start each jail. Follow the instructions [[#Problems_with_the_quad.2Fsafe_files|here]] for monitoring startup. Note that you can be a little more lenient with jails that take a while to start- startalljails will work around the slow jails and start the rest. As long as there aren&#039;t 4 jails which are &amp;quot;hung&amp;quot; during startup, the rest will get started eventually.&lt;br /&gt;
	-or-&lt;br /&gt;
If there is no startalljails script, there will be multiple quad files. In each of the 4 windows, start each of the quads. i.e. start quad1 in window1, quad2 in window2 and so on. DO NOT start any quad twice. It will crash the server. If you accidentally do this, just jailkill all the jails which are in the quad and run the quad again. Follow the instructions here for monitoring quad startup.&lt;br /&gt;
&lt;br /&gt;
Note the time the last jail boots- this is what you will enter in the crash log.&lt;br /&gt;
&lt;br /&gt;
Save the crash log.&lt;br /&gt;
&lt;br /&gt;
=== Check to make sure all jails have started ===&lt;br /&gt;
There&#039;s a simple script which will make sure all jails have started, and enter the ipfw counter rules: [[#postboot|postboot]] &lt;br /&gt;
Run postboot, which will do a jailps on each jail it finds (excluding commented out jails) in the quad file(s). We&#039;re looking for 2 things:&lt;br /&gt;
# systems spawning out of control or too many procs&lt;br /&gt;
# jails which haven&#039;t started&lt;br /&gt;
On 7.x and newer systems it will print out the problems (which jails haven&#039;t started) at the conclusion of postboot. &lt;br /&gt;
On older systems you will need to watch closely to see if/when there&#039;s a problem, namely:&lt;br /&gt;
 &lt;br /&gt;
 [hostname] doesnt exist on this server&lt;br /&gt;
&lt;br /&gt;
When you get this message, it means one of 2 things:&lt;br /&gt;
1. the jail really didn&#039;t start:&lt;br /&gt;
When a jail doesn&#039;t start it usually boils down to a problem in the quad file. Perhaps the path name is wrong (data1 vs data2) or the name of the vn/mdfile is wrong. Once this is corrected, you will need to run the commands from the quad file manually, or you may use &amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. the customer has changed their hostname (and not told us) so their jail &#039;&#039;is&#039;&#039; running, just under a different hostname:&lt;br /&gt;
On systems with jls, this is easy to rectify. First, get the customer info: &amp;lt;tt&amp;gt;g &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
Then look for the customer in jls: &amp;lt;tt&amp;gt;jls | grep &amp;lt;col0XXXX&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
From there you will see their new hostname- you should update that hostname in the quad file: don&#039;t forget to edit it on the &amp;lt;tt&amp;gt;## begin ##&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;## end ##&amp;lt;/tt&amp;gt; lines, and in mgmt. &lt;br /&gt;
On older systems without jls, this will be harder: you will need to look further to find their hostname; perhaps it&#039;s in their /etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once all jails are started, do some spot checks- try to ssh or browse to some customers, just to make sure things are really ok.&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 7.x/8.x jail ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (md) ==&lt;br /&gt;
&lt;br /&gt;
NOTE: this doesn’t apply to mx2, which uses gvinum; use the same procedure as for 6.x&lt;br /&gt;
NOTE: if you unmount before mdconfig, re-mdconfig (attach), then unmount, then mdconfig -u again &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
(parts to change/customize are &amp;lt;tt&amp;gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it; send it to them (it explains the downtime, etc.).&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from &amp;lt;tt&amp;gt;js&amp;lt;/tt&amp;gt;. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new md on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. Unmount it (including their devfs) but leave the md config’d (so if you used stopjail, you will have to re-mdconfig it)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mdfile and mount name, and to see the current md device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. When there&#039;s enough room to place the new system on an alternate drive, or the same drive:&lt;br /&gt;
USE CAUTION not to overwrite (touch, mdconfig) an existing md!!&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s 10g -f /mnt/data3/69.55.234.66-col01334 -u 97&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md97&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new space is on a different drive, move the mount point directory AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm the restored data size matches the original usage figure&lt;br /&gt;
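The "matches the original usage figure" check can be made mechanical. A minimal sketch with hypothetical kB values (substitute the numbers from the df you noted in step 2 and from df . after the restore); the ~2% tolerance is an assumption, not site policy:

```shell
# Compare the pre-move usage figure against the restored size (both in kB).
# Both numbers below are hypothetical placeholders.
orig_kb=5242880        # usage noted in step 2
restored_kb=5240100    # "Used" column from df . after the restore
diff=$((orig_kb - restored_kb))
[ "$diff" -lt 0 ] && diff=$((0 - diff))
limit=$((orig_kb / 50))   # allow ~2% drift (assumption: newfs/metadata differences)
if [ "$diff" -le "$limit" ]; then
  echo "match (diff ${diff}kB within ${limit}kB)"
else
  echo "MISMATCH (diff ${diff}kB exceeds ${limit}kB)"
fi
```

If the sizes don't roughly match, re-check the dump/restore output for errors before detaching the old md.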
&lt;br /&gt;
&lt;br /&gt;
md-unconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
archive old mdfile. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/old-col00241-mdfile-noarchive-20091211&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (vq1) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
(run &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, &amp;lt;tt&amp;gt;du&amp;lt;/tt&amp;gt; the dump file to confirm its size roughly matches the usage figure)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure and remove old mdfile&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
rm /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(there should now be enough space to recreate your bigger system. If not, run sync a couple times)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new system (ok to reuse old mdfile and md#):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;10&amp;lt;/span&amp;gt;g -f /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt; -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
umount nfs:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbu&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If md# changed (or mount point), edit quad (&amp;lt;tt&amp;gt;vq1&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. Optional: archive old mdfile&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;br&amp;gt;&lt;br /&gt;
gzip -c old-col01588-mdfile-noarchive-20120329 &amp;gt; /deprecated/old-col01588-mdfile-noarchive-20120329.gz&amp;lt;br&amp;gt;&lt;br /&gt;
mbu&amp;lt;br&amp;gt;&lt;br /&gt;
rm  old-col01588-mdfile-noarchive-20120329&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 6.x jail (gvinum/gconcat) ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (gvinum/gconcat) ==&lt;br /&gt;
&lt;br /&gt;
(parts to change are &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;highlighted&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new volume on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;[[#stopjail|stopjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;[[#g|g]] &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mount name and existing volume/device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5a. When there&#039;s enough room to place the new system on an alternate drive, or the same drive (using only UNUSED gvinum volumes; volumes in use by the system in question count as usable, since they will be freed):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure the new device:&amp;lt;br&amp;gt;&lt;br /&gt;
A. for a 2G system (single gvinum volume):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;bsdlabel -r -w /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
-or- &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
B. for a &amp;gt;2G system (create a gconcat volume):&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Other valid gconcat examples:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label v82-v84v109v112 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v109 /dev/gvinum/v112&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat label v82v83 /dev/gvinum/v82 /dev/gvinum/v83&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Note that long names will truncate: v144v145v148-v115 will truncate to v144v145v148-v1 (so you will refer to it as v144v145v148-v1 thereafter)&lt;br /&gt;
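The truncation can be predicted before you label. A sketch, assuming the cap is 15 characters (inferred from the v144v145v148-v115 example above; verify the limit on your release):

```shell
# Predict the name gconcat will actually use for a long label.
# The 15-character cap is an assumption inferred from the example in this page.
name="v144v145v148-v115"
short=$(printf '%.15s' "$name")
echo "$short"   # the name to use in /dev/concat/ paths thereafter
```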
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; OR &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
A. &amp;lt;tt&amp;gt;mount /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
-or-&amp;lt;br&amp;gt;&lt;br /&gt;
B. &amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;), run &amp;lt;tt&amp;gt;buildsafe&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space (i.e. if you want/need to reuse the existing gvinum volumes and add on more):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
(run df to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;concat/v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, du the dump file to confirm its size roughly matches the usage figure)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure the old gconcat volume&amp;lt;br&amp;gt;&lt;br /&gt;
list member gvinum volumes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat list &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Output will resemble:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;Geom name: v106v107&lt;br /&gt;
State: UP&lt;br /&gt;
Status: Total=2, Online=2&lt;br /&gt;
Type: AUTOMATIC&lt;br /&gt;
ID: 3530663882&lt;br /&gt;
Providers:&lt;br /&gt;
1. Name: concat/v106v107&lt;br /&gt;
   Mediasize: 4294966272 (4.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e2&lt;br /&gt;
Consumers:&lt;br /&gt;
1. Name: gvinum/sd/v106.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 0&lt;br /&gt;
   End: 2147483136&lt;br /&gt;
2. Name: gvinum/sd/v107.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 2147483136&lt;br /&gt;
   End: 4294966272&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
stop volume and clear members&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat stop &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat clear &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;gvinum/sd/v106.p0.s0 gvinum/sd/v107.p0.s0&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new device (it&#039;s ok to reuse old/former members)&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v106 /dev/gvinum/v107&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;), run buildsafe&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
TODO: clean up/clear old gvin/gconcat vol&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - steps to tack a new gvinum volume onto an existing gconcat; this leads to a corrupted fs:&lt;br /&gt;
bsdlabel -e /dev/concat/v82-v84&lt;br /&gt;
&lt;br /&gt;
To figure out new size of the c partition, multiply 4194304 by the # of 2G gvinum volumes and subtract the # of 2G volumes:&lt;br /&gt;
10G: 4194304 * 5 - 5 = 20971515&lt;br /&gt;
8G: 4194304 * 4 - 4 = 16777212&lt;br /&gt;
6G: 4194304 * 3 - 3 = 12582909&lt;br /&gt;
4G: 4194304 * 2 - 2 = 8388606&lt;br /&gt;
&lt;br /&gt;
To figure out the new size of the a partition, subtract 16 from the c partition:&lt;br /&gt;
10G: 20971515 - 16 = 20971499&lt;br /&gt;
8G: 16777212 - 16 = 16777196&lt;br /&gt;
6G: 12582909 - 16 = 12582893&lt;br /&gt;
4G: 8388606 - 16 = 8388590&lt;br /&gt;
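The two tables above follow one formula, so they can be recomputed for any volume count. A sketch (assumes 2G gvinum volumes of 4194304 sectors each, with the a partition offset by 16, as in the tables):

```shell
# Recompute the bsdlabel sizes for a gconcat of n 2G gvinum volumes.
for n in 2 3 4 5; do
  c=$((4194304 * n - n))   # "c" (raw) partition size in sectors
  a=$((c - 16))            # "a" partition size in sectors
  echo "${n} volumes: c=${c} a=${a}"
done
```

For n=3 this prints c=12582909 a=12582893, matching the 6G rows above.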
&lt;br /&gt;
Orig:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a:  8388590       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c:  8388606        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
New:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a: 12582893       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c: 12582909        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
sync; sync&lt;br /&gt;
&lt;br /&gt;
growfs /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
sync&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
(keep running fsck’s till NO errors)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 4.x jail ==&lt;br /&gt;
&lt;br /&gt;
NOTE: if you unmount before vnconfig, re-vnconfig (attach), then unmount, then vnconfig -u&lt;br /&gt;
 &lt;br /&gt;
If someone wants more disk space, there’s a paste for it; send it to them.&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to dump to a different partition and create the new vn on the current partition. If there&#039;s no space to make the dump, then move old vn’s to backup or, worse, dump to backup.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. make vnfile, newfs and mount. USE CAUTION not to overwrite existing vn. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;vnconfig -T -S 9g -s labels -c /dev/vn30 /mnt/data2/65.214.160.117-col00496&amp;lt;br&amp;gt;&lt;br /&gt;
disklabel -r -w vn30 auto&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/vn30c&amp;lt;br&amp;gt;&lt;br /&gt;
mkdir /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/vn30c /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;dump -0a -f - /dev/vn1 | restore -r -f -&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. &amp;lt;tt&amp;gt;rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. unmount and unvnconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;umount /dev/vnNNc &amp;lt;br&amp;gt;&lt;br /&gt;
vnconfig -u /dev/vnNN&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. move vnfile. Ex: &amp;lt;tt&amp;gt;mv /mnt/data1/69.55.237.26-col00241 /mnt/data1/old-col00241-vnfile-20110312-noarchive&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove old dir: &amp;lt;tt&amp;gt;rmdir /mnt/data1/69.55.237.26-col00241-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. edit quad/safe to point to new location, run &amp;lt;tt&amp;gt;[[#buildsafe|buildsafe]]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start jail: &amp;lt;tt&amp;gt;[[#startjail|startjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
13. mv backups if necessary&lt;br /&gt;
 &lt;br /&gt;
WARNING: if you restore to an unmounted vn then you’re actually restoring to /mnt/data1;&lt;br /&gt;
you can tar up the restored files and repeat the steps above to recover&lt;br /&gt;
 &lt;br /&gt;
NOTE: you can’t move a customer to a system that has a base sys that differs from the one they came from (BSD 4.8 to 4.5 won’t work)&lt;br /&gt;
 &lt;br /&gt;
NOTE: we don’t mount procfs on anyone’s system by default.&lt;br /&gt;
&lt;br /&gt;
NOTE: mount_nulls: &lt;br /&gt;
goofy mount_nulls are also seen in df -k on jail2&amp;lt;br&amp;gt;&lt;br /&gt;
these also need to be added in the safe and quad files&amp;lt;br&amp;gt;&lt;br /&gt;
if a customer wants another, use the safe/quad entry as an example and add it to the file; also exec the mount command at the prompt to add the mount (no reboot, remount or re-jail necessary)&lt;br /&gt;
&lt;br /&gt;
== Moving customer to another jail machine ==&lt;br /&gt;
&lt;br /&gt;
Systems should only be moved between similarly-versioned jails. If you cannot get a match, you should go to freebsd.org and see what the differences were between the version you&#039;re moving from and the new version. Many of the updates will be to drivers and kernels; these don&#039;t affect the customer. Perhaps they updated traceroute or a library; if so, copy the new/changed files into the VPS/jail on the new host. An alternate method for moving to a different version would be to buildworld to bring their full distribution up to date. Optional steps for this kind of upgrade are included below.&lt;br /&gt;
&lt;br /&gt;
1. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;[[#jailkill|jailkill]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. create new device on target system&amp;lt;br&amp;gt;&lt;br /&gt;
4.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig -T -S 4g -s labels -c /dev/vn1 /mnt/data1/69.55.22x.x-col00XXX&lt;br /&gt;
disklabel -r -w vn1 auto&lt;br /&gt;
newfs /dev/vn1c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;bsdlabel -r -w /dev/gvinum/v1&lt;br /&gt;
newfs /dev/gvinum/v1a&lt;br /&gt;
 or &lt;br /&gt;
gconcat label v1-v3 /dev/gvinum/v1 /dev/gvinum/v2 /dev/gvinum/v3&lt;br /&gt;
bsdlabel -r -w /dev/concat/v1-v3&lt;br /&gt;
newfs /dev/concat/v1-v3a&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7.x+:&amp;lt;br&amp;gt;&lt;br /&gt;
Run jailmakeempty, then skip steps 4 &amp;amp; 7 below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. make and mount the dir on new system. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;pre&amp;gt;mkdir /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
chmod 755 /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
mount [device] /mnt/data2/69.55.230.3-col00123-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. [[#stopjail|stopjail]] &amp;lt;hostname&amp;gt; 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. dump fs to new system. ex: &amp;lt;tt&amp;gt;[[#dumpremoterestore|dumpremoterestore]] /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&amp;lt;/tt&amp;gt; (make sure you can ssh as root on remote machine) &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. OPTIONAL BUILDWORLD:&amp;lt;br&amp;gt;&lt;br /&gt;
 cd /usr/src&lt;br /&gt;
 make world DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
 cd etc&lt;br /&gt;
 make distribution DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
(you may have to rm an openssh file and re-make dist)&lt;br /&gt;
&lt;br /&gt;
 rm -rf /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/periodic/daily/400.status-disks&lt;br /&gt;
 vi /etc/periodic/security/100.chksetuid&lt;br /&gt;
replace: &amp;lt;tt&amp;gt;MP=`mount -t ufs | grep -v &amp;quot; nosuid&amp;quot; | awk &#039;{ print $3 }&#039; | sort`&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
with: &amp;lt;tt&amp;gt;MP=&#039;/&#039; (use single quotes)&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ask user if they want ports overwritten with current, if yes:&lt;br /&gt;
 cp -r /usr/ports /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/usr&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d .&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d . &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cat /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target:&lt;br /&gt;
 cat &amp;gt; /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. Edit the quad on the source system and copy over the entries for the jail to the target system. &lt;br /&gt;
Take care that the vn/gvinum/gconcat/md devices on the target system aren't in use and that the /mnt/dataN path doesn't need changing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. run [[#buildsafe|buildsafe]] (if this is &amp;lt;=6.x) on target system (copies changes made to quad into safe file)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. Remove the IP from the source system. Ex: &amp;lt;tt&amp;gt;ipdel 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. Add the IP to the target system. Ex: &amp;lt;tt&amp;gt;ipadd 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. Start the new system with [[#startjail|startjail]], or manually by pasting the entries found from running &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; on the new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. run &amp;lt;tt&amp;gt;[[#canceljail|canceljail]] col0xxxx&amp;lt;/tt&amp;gt; on source system. This should prompt you to remove backups if any existed. Do not let it do this, rather:&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
13. if backups existed move them to the new host via the &amp;lt;tt&amp;gt;[[#mvbackups_.28freebsd.29|mvbackups]] script&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
14. edit quad on source system, edit comment to reflect a move rather than a cancel, ex: &amp;lt;tt&amp;gt;# moved to jail2 col00241&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
15. edit mgmt to reflect new host and dir for new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
16. Optionally add ipfw rules on the new system. Ex:&lt;br /&gt;
&amp;lt;tt&amp;gt;ipfw add 01231 count ip from 69.55.230.2 to any&lt;br /&gt;
ipfw add 01232 count ip from any to 69.55.230.2&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Increasing inodes for a VPS ==&lt;br /&gt;
&lt;br /&gt;
when doing the newfs: &lt;br /&gt;
 newfs -i  4096 ...&lt;br /&gt;
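&lt;br /&gt;
As a rough sketch of what that flag buys you (the 10 GB filesystem size below is a made-up example, not from this doc): newfs -i N allocates about one inode per N bytes of data space, so a lower N means more inodes:&lt;br /&gt;

```shell
# Rough sketch: estimate the inode count newfs will create.
# `newfs -i N` allocates roughly one inode per N bytes of data space,
# so lowering N raises the inode count for the same filesystem size.
fs_bytes=$((10 * 1024 * 1024 * 1024))   # example: a 10 GB filesystem
density=4096                            # from `newfs -i 4096` above
echo $((fs_bytes / density))            # prints 2621440 (~2.6M inodes)
```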
&lt;br /&gt;
&lt;br /&gt;
== Jail services unable to contact services within same jail ==&lt;br /&gt;
&lt;br /&gt;
This is due to a messed-up routing table (we've only seen it on jail2). &lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.228.65/32&lt;br /&gt;
 route add -net 69.55.228.65/32 -iface fxp0 -nostatic -cloning&lt;br /&gt;
 route add 69.55.228.65 -iface lo0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== /dev/null permission resets == &lt;br /&gt;
&lt;br /&gt;
Applies to 4.x jails only&lt;br /&gt;
&lt;br /&gt;
For some reason that I do not understand at all, the /dev/null node in customer jails, after the system is restarted, often reverts permissions to 0600 ... which is bad, because non-root processes cannot redirect to /dev/null anymore.  In fact, a fair number of server daemons will complain and/or fail if this is the case.&lt;br /&gt;
&lt;br /&gt;
So, after a reboot, once all systems have restarted, run postboot, which does something like:&lt;br /&gt;
 &lt;br /&gt;
 for f in `df -k | grep /dev/vn | awk &#039;{print $6}&#039;` ; do chmod 0666 $f/dev/null ; done&lt;br /&gt;
&lt;br /&gt;
Later, after the system is up, if anyone ever complains about their /dev/null permissions for any reason ... really if anyone mentions&lt;br /&gt;
/dev/null in any way, tell them to run:&lt;br /&gt;
&lt;br /&gt;
 chmod 0666 /dev/null&lt;br /&gt;
&lt;br /&gt;
We have ultimately fixed this by adding chmods to the quad/safe files&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== mknod in a jail == &lt;br /&gt;
&lt;br /&gt;
For a customer who wants to run a chroot name server (named) in a jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /mnt/data1/&amp;lt;user directory&amp;gt;/var/named&lt;br /&gt;
mknod -c 1 1 null&lt;br /&gt;
mknod -c 2 3 random&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Postfix Problems ==&lt;br /&gt;
&lt;br /&gt;
This hasn&#039;t happened in a long time, probably no longer valid with modern OS&#039;s&lt;br /&gt;
&lt;br /&gt;
Postfix is an alternate MTA - people replace sendmail with it in much the same way that many people also replace sendmail with qmail.  The problem is that if you install postfix inside of a jail, by default it will not work properly - the user will see error messages like this:&lt;br /&gt;
&lt;br /&gt;
incoming mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 mail_queue_enter: create file incoming/298266.55902: File too large&lt;br /&gt;
&lt;br /&gt;
and further, sending mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 postdrop: warning: mail_queue_enter: create file maildrop/627930.56676: File too large&lt;br /&gt;
&lt;br /&gt;
This is very easy to solve.  In fact, the FreeBSD welcome email that is sent out by `jailmake` now contains this block of text:&lt;br /&gt;
&lt;br /&gt;
- if you plan on installing postfix, email us first and ask for a necessary patch.&lt;br /&gt;
&lt;br /&gt;
So, anytime you see a support email that says anything about postfix not working, paste that line from the welcome email into your response to them, and right away, forward the postfix patch to them in an email.&lt;br /&gt;
&lt;br /&gt;
Here are the instructions:&lt;br /&gt;
&lt;br /&gt;
Simply copy the patch into /usr/ports/mail/postfix/files, rename the file to ‘patch-file_limit.c’ and then cd to /usr/ports/mail/postfix and run `make install`.&lt;br /&gt;
&lt;br /&gt;
The postfix patch is the very first email in the support email box, and we never delete or save it because we always want it there to forward to people.  The instructions in the email are very clear, and it solves their problem 100% of the time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems with `find` ==&lt;br /&gt;
&lt;br /&gt;
There was a bug in the FreeBSD vn-filesystem code.  The bug causes the system to crash or hang when someone runs the `find` command inside their vn-backed filesystem (and even though the bug may have been fixed, due to the i/o overhead, we still don&#039;t like find).&lt;br /&gt;
&lt;br /&gt;
Now, this does not always happen - however, any jail machine with more than 20 systems on it will crash every night if every system on it runs the daily periodic script out of their crontab, and updates their locate database with the find command.&lt;br /&gt;
&lt;br /&gt;
(the locate database is a small, FreeBSD specific database that is populated nightly from a cron job, and is then used to provide fast answers to the `locate` command)&lt;br /&gt;
&lt;br /&gt;
In normal systems (32-40 jails on them), all that has to be done is make sure nobody runs the daily periodic.  This means that the file /etc/crontab, instead of looking like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# do daily/weekly/monthly maintenance&lt;br /&gt;
1      3       *       *       *       root    periodic daily&lt;br /&gt;
15     4       *       *       6       root    periodic weekly&lt;br /&gt;
30     5       1       *       *       root    periodic monthly&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Needs to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# DO NOT UNCOMMENT THESE - contact support@johncompanies.com for details&lt;br /&gt;
#1      3       *       *       *       root    periodic daily&lt;br /&gt;
#15     4       *       *       6       root    periodic weekly&lt;br /&gt;
#30     5       1       *       *       root    periodic monthly&lt;br /&gt;
## DO NOT UNCOMMENT THESE ^^^^^^^^&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the problem will generally not occur at all.  New filesystem images are always altered in this way, so all systems are set like this.&lt;br /&gt;
&lt;br /&gt;
However, if you have enough systems running, even casual uses of find, outside of the periodic scripts, can cause the system to crash.  We are seeing this on jail13, which crashes about once per week.&lt;br /&gt;
&lt;br /&gt;
There is not yet any resolution to this problem.&lt;br /&gt;
&lt;br /&gt;
So, if you have a freebsd system crash, see if anyone has uncommented their daily periodic line - you can see them for everyone by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;for f in `df -k | grep vn | awk &#039;{print $6}&#039;` ; do echo `cat $f/etc/crontab | grep &amp;quot;periodic daily&amp;quot;` $f ; done | more&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(all one line, one command)&lt;br /&gt;
&lt;br /&gt;
And then edit those /etc/crontabs and comment them out again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems un-mounting - and with mount_null’s ==&lt;br /&gt;
&lt;br /&gt;
If you cannot unmount a filesystem because it says the filesystem is busy, it is because of one of the following:&lt;br /&gt;
&lt;br /&gt;
a) the jail is still running&lt;br /&gt;
&lt;br /&gt;
b) you are actually in that directory, even though the jail is stopped&lt;br /&gt;
&lt;br /&gt;
c) there are still dev, null_mount or linprocfs mount points mounted inside that directory.&lt;br /&gt;
&lt;br /&gt;
d) when trying to umount null_mounts that are really long and you get an error like “No such file or directory”, it’s an OS bug where the dir is truncated. No known fix&lt;br /&gt;
&lt;br /&gt;
e) there are still files open somewhere inside the dir. Use &amp;lt;tt&amp;gt;fstat | grep &amp;lt;cid&amp;gt;&amp;lt;/tt&amp;gt; to find the process that has files open&lt;br /&gt;
&lt;br /&gt;
f) Starting with 6.x, the jail mechanism does a poor job of keeping track of processes running in a jail and if it thinks there are still procs running, it will refuse to umount the disk. If this is happening you should see a low number in the #REF column when you run jls. In this case you &#039;&#039;can&#039;&#039; safely &amp;lt;tt&amp;gt;umount -f&amp;lt;/tt&amp;gt; the mount. &lt;br /&gt;
&lt;br /&gt;
Please note -if you forcibly unmount a (4.x) filesystem that has null_mounts&lt;br /&gt;
still mounted in it, the system &#039;&#039;&#039;will crash&#039;&#039;&#039; within 10-15 mins.&lt;br /&gt;
&lt;br /&gt;
== Misc jail Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space on jail2, jail8, jail9, a couple jails on jail17, jail4, jail12 and jail18.&lt;br /&gt;
Even though the vn file shows 4G size, it doesn’t actually occupy that amount of space on the disk. So be careful not to fill up drives where we’re overselling – use oversellcheck to confirm you’re not oversold by more than 10G.&lt;br /&gt;
There are other truncated jails; they are generally noted in the file /root/truncated on the root system&lt;br /&gt;
&lt;br /&gt;
The act of moving a truncated vn to another system undoes the truncation: the truncated vn is filled with 0's and occupies the full physical disk space for which it's configured. So, you should use dumpremote to preserve the truncation.&lt;br /&gt;
&lt;br /&gt;
* if you are getting disk full messages for a BSD customer, it&#039;s fairly safe to clear out their /usr/ports/distfiles dir&lt;br /&gt;
* 4.x: ps and top can only be run by root in these jails.  Done on purpose:&lt;br /&gt;
As for `ps` and `top` - non-root users can run them, just not successfully because we have locked the permissions on /dev/mem and /dev/kmem to be root-readable only.  That is why non-root users cannot successfully run `ps` and `top`.&lt;br /&gt;
* user quotas do not work on freebsd jails - you cannot set up quotas at all, and that&#039;s that.&lt;br /&gt;
* You cannot inject a process into a 4.x jail, only HUP running processes. From 6.x onward you can, with jexec&lt;br /&gt;
* jails see base machine’s uptime/load when running top/w&lt;br /&gt;
* if someone is unable to get in- can't ping, etc- see if they were blocked by castle (DoS), and see if their IP is on the system (post-reboot it can be lost if it wasn't in rc.conf). preboot should catch that.&lt;br /&gt;
* in FreeBSD you can't su to root unless you belong to the wheel group – so if you remove your acct and set up a new one, we have to add it to the wheel group (add to /etc/group)&lt;br /&gt;
* Dmesg from the underlying system is seen in a customer's dmesg in the jail&lt;br /&gt;
* A popper process going crazy @ 40% for 10min = someone who leaves mail on the server&lt;br /&gt;
* Don’t force umounts on 4.x jails – it crashes the machine, generally ok on newer machines (running md)&lt;br /&gt;
* Good book for admin http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?userid=t824VyRAYz&amp;amp;isbn=0596005164&amp;amp;itm=2&lt;br /&gt;
* Self-sign ssl cert http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#selfcert&lt;br /&gt;
* conversation with Glenn about semaphores and pgsql&lt;br /&gt;
&amp;lt;pre&amp;gt;SDBoody: hey, these are valid amounts/figures/increments right:&lt;br /&gt;
 kern.ipc.semmni=1280&lt;br /&gt;
 kern.ipc.semmns=1280&lt;br /&gt;
 &lt;br /&gt;
gr8feen: probably... I always forget exactly what those are, so I usually have to look them up&lt;br /&gt;
SDBoody: semaphores&lt;br /&gt;
SDBoody: i took the current 1024 and added 256 to them&lt;br /&gt;
SDBoody: need more for pgsql&lt;br /&gt;
gr8feen: I meant the mni and mns parts...&lt;br /&gt;
gr8feen: some of those are not ones you just want to add to...   hang on a sec and I&#039;ll look them up..&lt;br /&gt;
gr8feen: what&#039;s semmsl set to?&lt;br /&gt;
SDBoody: kern.ipc.semmsl: 1024&lt;br /&gt;
 &lt;br /&gt;
SDBoody: kern.ipc.msgseg: 2048 &lt;br /&gt;
 kern.ipc.msgssz: 8 &lt;br /&gt;
 kern.ipc.msgtql: 40 &lt;br /&gt;
 kern.ipc.msgmnb: 2048 &lt;br /&gt;
 kern.ipc.msgmni: 40 &lt;br /&gt;
 kern.ipc.msgmax: 16384 &lt;br /&gt;
 kern.ipc.semaem: 16384 &lt;br /&gt;
 kern.ipc.semvmx: 65534 &lt;br /&gt;
 kern.ipc.semusz: 152 &lt;br /&gt;
 kern.ipc.semume: 10 &lt;br /&gt;
 kern.ipc.semopm: 100 &lt;br /&gt;
 kern.ipc.semmsl: 1024 &lt;br /&gt;
 kern.ipc.semmnu: 512 &lt;br /&gt;
 kern.ipc.semmns: 1024 &lt;br /&gt;
 kern.ipc.semmni: 1024 &lt;br /&gt;
 kern.ipc.semmap: 768 &lt;br /&gt;
 kern.ipc.shm_allow_removed: 0 &lt;br /&gt;
 kern.ipc.shm_use_phys: 1 &lt;br /&gt;
 kern.ipc.shmall: 262144 &lt;br /&gt;
 kern.ipc.shmseg: 256 &lt;br /&gt;
 kern.ipc.shmmni: 784 &lt;br /&gt;
 kern.ipc.shmmin: 1 &lt;br /&gt;
 kern.ipc.shmmax: 536870912 &lt;br /&gt;
 kern.ipc.maxsockets: 25600 &lt;br /&gt;
 &lt;br /&gt;
gr8feen: ok...msl is max per id, mni is max ids, mns is max number of semaphores... so you probably want something like mns = mni * msl&lt;br /&gt;
gr8feen: which one did you run out of?&lt;br /&gt;
SDBoody: not sure how to tell- ipcs shows the sems in use add up to 1024&lt;br /&gt;
SDBoody: there are 59 entries&lt;br /&gt;
gr8feen: I&#039;m assuming you tried to start postgres and it failed?&lt;br /&gt;
SDBoody: yes&lt;br /&gt;
gr8feen: it should have logged why, somewhere..&lt;br /&gt;
gr8feen: if I recall, it&#039;ll tell you which one it ran out of&lt;br /&gt;
SDBoody: &amp;gt; DETAIL:  Failed system call was semget(1, 17, 03600).&lt;br /&gt;
 &lt;br /&gt;
gr8feen: so it wanted an id with 17 semaphores...I&#039;d start by making mns = 17*mni and leave mni and mnl set to what they are now and see what it does&lt;br /&gt;
SDBoody: i think mni is plenty high&lt;br /&gt;
SDBoody: ok, more reasonable 17408&lt;br /&gt;
gr8feen: yeah...just change that one and see how it goes..&lt;br /&gt;
SDBoody: and leave mni alone at 1024?&lt;br /&gt;
gr8feen: yeah...mni id the max number of ids...but if it&#039;s trying to get something like 17 per id, your going to hit mns before you hit anything else&lt;br /&gt;
SDBoody: right, but doesn&#039;t hurt to have it that high (assuming)&lt;br /&gt;
&lt;br /&gt;
gr8feen: not really...  I think those get allocated out of ram that you cant page out, but it&#039;s still such a small amount that it really doesn&#039;t matter&lt;br /&gt;
SDBoody: looks like that worked, thx!&lt;br /&gt;
gr8feen: cool&lt;br /&gt;
&lt;br /&gt;
see semaphores: ipcs -a -s&amp;lt;/pre&amp;gt;&lt;br /&gt;
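&lt;br /&gt;
The sizing rule from the conversation above, written out as a sketch (the 17-per-id figure comes from the failed semget call quoted in the log; treat it as workload-specific, not a universal constant):&lt;br /&gt;

```shell
# Sketch of the rule of thumb above: if each semget() asks for roughly
# 17 semaphores per id, size semmns (total semaphores) as 17 * semmni
# (max semaphore ids). Values are the ones quoted in the conversation.
semmni=1024        # kern.ipc.semmni, left alone per the conversation
sems_per_id=17     # from the failed call: semget(1, 17, 03600)
echo "kern.ipc.semmns=$((sems_per_id * semmni))"   # prints kern.ipc.semmns=17408
```

Then apply the resulting value with sysctl, or via /boot/loader.conf on FreeBSD versions where the sem limits are boot-time tunables only.&lt;br /&gt;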
&lt;br /&gt;
= FreeBSD VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
These files are located in /usr/local/jail/rc.d and /usr/local/jail/bin&lt;br /&gt;
&lt;br /&gt;
== jailmake ==&lt;br /&gt;
&lt;br /&gt;
Applies to 7.x+ &lt;br /&gt;
On older systems syntax differs, run jailmake once to see.&lt;br /&gt;
&lt;br /&gt;
Note: this procedure differs on mx2 which is 7.x but still uses gvinum&lt;br /&gt;
&lt;br /&gt;
#	run js to figure out which md’s are in use, which disk has enough space, IP to put it on&lt;br /&gt;
#	use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
#	copy over dir, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;Usage: jailmake IP[,IP] CID disk[1|2|3] md# hostname shorthost ipfw# email [size in GB]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex: &lt;br /&gt;
&lt;br /&gt;
 Jail2# jailmake 69.55.234.66 col01334 3 97 vps.bsd.it vps 1334 fb@bsd.it&lt;br /&gt;
&lt;br /&gt;
== jailps ==&lt;br /&gt;
 jailps [hostname]&lt;br /&gt;
DEPRECATED FOR jps: displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname of the jail you wish to query. If you don’t &lt;br /&gt;
supply an argument, all processes on the machine are listed and grouped by jail. &lt;br /&gt;
&lt;br /&gt;
== jps ==&lt;br /&gt;
 jps [hostname]&lt;br /&gt;
displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname or ID of the jail you wish to query. &lt;br /&gt;
&lt;br /&gt;
== jailkill ==&lt;br /&gt;
 jailkill &amp;lt;hostname&amp;gt;&lt;br /&gt;
stops all processes running in a jail.&lt;br /&gt;
&lt;br /&gt;
You can also run:&lt;br /&gt;
 jailkill &amp;lt;JID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== problems ===&lt;br /&gt;
Occasionally you will hit an issue where jail will not kill off:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailkill www.domain.com&lt;br /&gt;
www.domain.com .. killed: none&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
Because no processes are running under that hostname.  You cannot use jailps.pl either:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailps www.domain.com&lt;br /&gt;
www.domain.com doesn’t exist on this server&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reasons for this are usually:&lt;br /&gt;
* the jail is no longer running&lt;br /&gt;
&lt;br /&gt;
* the jail&#039;s hostname has changed&lt;br /&gt;
In this case, &lt;br /&gt;
&lt;br /&gt;
&amp;gt;=6.x: run a &amp;lt;tt&amp;gt;jls|grep &amp;lt;jail&#039;s IP&amp;gt;&amp;lt;/tt&amp;gt; to find the correct hostname, then update the quad file, then kill the jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;6.x: the first step is to cat their /etc/rc.conf file to see if you can tell what they set the new hostname to.  This very often works.  For example:&lt;br /&gt;
&lt;br /&gt;
 cat /mnt/data2/198.78.65.136-col00261-DIR/etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
But maybe they set the hostname with the hostname command, and the original hostname is still in /etc/rc.conf.&lt;br /&gt;
&lt;br /&gt;
The welcome email clearly states that they should tell us if they change their hostname, so there is no problem in just emailing them and asking them what they set the new hostname to.&lt;br /&gt;
&lt;br /&gt;
Once you know the new hostname OR if a customer simply emails to inform you that they have set the hostname to something different, you need to edit the quad and safe files that their system is in to input the new hostname.&lt;br /&gt;
&lt;br /&gt;
However, if push comes to shove and you cannot find out the hostname from them or from their system, then you need to start doing some detective work.&lt;br /&gt;
&lt;br /&gt;
The easiest thing to do is run jailps looking for a hostname similar to their original hostname. Or you could get into the /bin/sh shell by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then looking at every hostname of every process:&lt;br /&gt;
&lt;br /&gt;
 for f in `ls /proc` ; do cat /proc/$f/status ; done&lt;br /&gt;
&lt;br /&gt;
and scanning for a hostname that is either similar to their original hostname, or that you don&#039;t see in any of the quad safe files.&lt;br /&gt;
&lt;br /&gt;
This is very brute force though, and it is possible that catting every file in /proc is dangerous - I don&#039;t recommend it.  A better thing would be to identify any processes that you know belong to this system – perhaps the reason you are trying to find this system is because they are running something bad - and just catting the status from only that PID.&lt;br /&gt;
&lt;br /&gt;
Somewhere there's a jail where there may be 2 systems named www. Look at each /etc/rc.conf and make sure they're both really www. If they are, jailkill www, then jailps www to make sure it's not running. Then immediately restart the other one using its FQDN (as found from a reverse nslookup) as the hostname.&lt;br /&gt;
&lt;br /&gt;
* on &amp;gt;=6.x the hostname may not yet be hashed:&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9 /# jls&lt;br /&gt;
 JID Hostname                    Path                                  IP Address(es)&lt;br /&gt;
   1 bitnet.dgate.org            /mnt/data1/69.55.232.50-col02094-DIR  69.55.232.50&lt;br /&gt;
   2 ns3.hctc.net                /mnt/data1/69.55.234.52-col01925-DIR  69.55.234.52&lt;br /&gt;
   3 bsd1                        /mnt/data1/69.55.232.44-col00155-DIR  69.55.232.44&lt;br /&gt;
   4 let2.bbag.org               /mnt/data1/69.55.230.92-col00202-DIR  69.55.230.92&lt;br /&gt;
   5 post.org                    /mnt/data2/69.55.232.51-col02095-DIR  69.55.232.51 ...&lt;br /&gt;
   6 ns2                         /mnt/data1/69.55.232.47-col01506-DIR  69.55.232.47 ...&lt;br /&gt;
   7 arlen.server.net            /mnt/data1/69.55.232.52-col01171-DIR  69.55.232.52&lt;br /&gt;
   8 deskfood.com                /mnt/data1/69.55.232.71-col00419-DIR  69.55.232.71&lt;br /&gt;
   9 mirage.confluentforms.com   /mnt/data1/69.55.232.54-col02105-DIR  69.55.232.54 ...&lt;br /&gt;
  10 beachmember.com             /mnt/data1/69.55.232.59-col02107-DIR  69.55.232.59&lt;br /&gt;
  11 www.agottem.com             /mnt/data1/69.55.232.60-col02109-DIR  69.55.232.60&lt;br /&gt;
  12 sdhobbit.myglance.org       /mnt/data1/69.55.236.82-col01708-DIR  69.55.236.82&lt;br /&gt;
  13 ns1.jnielsen.net            /mnt/data1/69.55.234.48-col00204-DIR  69.55.234.48 ...&lt;br /&gt;
  14 ymt.rollingegg.net          /mnt/data2/69.55.236.71-col01678-DIR  69.55.236.71&lt;br /&gt;
  15 verse.unixlore.net          /mnt/data1/69.55.232.58-col02131-DIR  69.55.232.58&lt;br /&gt;
  16 smcc-mail.org               /mnt/data2/69.55.232.68-col02144-DIR  69.55.232.68&lt;br /&gt;
  17 kasoutsuki.w4jdh.net        /mnt/data2/69.55.232.46-col02147-DIR  69.55.232.46&lt;br /&gt;
  18 dili.thium.net              /mnt/data2/69.55.232.80-col01901-DIR  69.55.232.80&lt;br /&gt;
  20 www.tekmarsis.com           /mnt/data2/69.55.232.66-col02155-DIR  69.55.232.66&lt;br /&gt;
  21 vps.yoxel.net               /mnt/data2/69.55.236.67-col01673-DIR  69.55.236.67&lt;br /&gt;
  22 smitty.twitalertz.com       /mnt/data2/69.55.232.84-col02153-DIR  69.55.232.84&lt;br /&gt;
  23 deliver4.klatha.com         /mnt/data2/69.55.232.67-col02160-DIR  69.55.232.67&lt;br /&gt;
  24 nideffer.com                /mnt/data2/69.55.232.65-col00412-DIR  69.55.232.65&lt;br /&gt;
  25 usa.hanyuan.com             /mnt/data2/69.55.232.57-col02163-DIR  69.55.232.57&lt;br /&gt;
  26 daifuku.ppbh.com            /mnt/data2/69.55.236.91-col01720-DIR  69.55.236.91&lt;br /&gt;
  27 collins.greencape.net       /mnt/data2/69.55.232.83-col01294-DIR  69.55.232.83&lt;br /&gt;
  28 ragebox.com                 /mnt/data2/69.55.230.104-col01278-DIR 69.55.230.104&lt;br /&gt;
  29 outside.mt.net              /mnt/data2/69.55.232.72-col02166-DIR  69.55.232.72&lt;br /&gt;
  30 vps.payneful.ca             /mnt/data2/69.55.234.98-col01999-DIR  69.55.234.98&lt;br /&gt;
  31 higgins                     /mnt/data2/69.55.232.87-col02165-DIR  69.55.232.87 ...&lt;br /&gt;
  32 ozymandius                  /mnt/data2/69.55.228.96-col01233-DIR  69.55.228.96&lt;br /&gt;
  33 trusted.realtors.org        /mnt/data2/69.55.238.72-col02170-DIR  69.55.238.72&lt;br /&gt;
  34 jc1.flanderous.com          /mnt/data2/69.55.239.22-col01504-DIR  69.55.239.22&lt;br /&gt;
  36 guppylog.com                /mnt/data2/69.55.238.73-col00036-DIR  69.55.238.73&lt;br /&gt;
  40 haliohost.com               /mnt/data2/69.55.234.41-col01916-DIR  69.55.234.41 ...&lt;br /&gt;
  41 satyr.jorge.cc              /mnt/data1/69.55.232.70-col01963-DIR  69.55.232.70&lt;br /&gt;
jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
ERROR: jail_: jail &amp;quot;satyr,jorge,cc&amp;quot; not found&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note how it&#039;s saying &amp;lt;tt&amp;gt;satyr,jorge,cc&amp;lt;/tt&amp;gt; is not found, and not &amp;lt;tt&amp;gt;satyr.jorge.cc&amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The jail subsystem tracks jails using comma-delimited hostnames. That mapping is rebuilt every few hours from cron:&lt;br /&gt;
&lt;br /&gt;
 jail9 /# crontab -l&lt;br /&gt;
 0 0,6,12,18 * * * /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
So if we run this manually:&lt;br /&gt;
 jail9 /# /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
Then kill the jail:&lt;br /&gt;
 jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
 successfully killed: satyr,jorge,cc&lt;br /&gt;
&lt;br /&gt;
It worked.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you ever see this when trying to kill a jail:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# jailkill e-scribe.com&lt;br /&gt;
killing JID: 6 hostname: e-scribe.com&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailkill|jailkill]]&amp;lt;/tt&amp;gt; probably got lost trying to kill off the jail. Just ctrl-c the jailkill process, then run a jailps on the hostname, and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; any process which is still running. Keep running jailps and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; till all processes are gone.&lt;br /&gt;
&lt;br /&gt;
== jailpsall ==&lt;br /&gt;
 jailpsall&lt;br /&gt;
will run a jailps on all jails configured in the quad files (this is different from&lt;br /&gt;
jailps with no arguments as it won’t help you find a “hidden” system)&lt;br /&gt;
&lt;br /&gt;
== jailpsw ==&lt;br /&gt;
 jailpsw&lt;br /&gt;
will run a jailps with an extra -w to provide wider output&lt;br /&gt;
&lt;br /&gt;
== jt (&amp;gt;=7.x) ==&lt;br /&gt;
 jt&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;gt;=7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
a wrapper for top displaying processes on the server and which jail owns them. Constantly updates, like top. &lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;lt;7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== stopjail ==&lt;br /&gt;
 stopjail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will jailkill, umount and vnconfig -u a jail. If passed an optional 2nd&lt;br /&gt;
argument, it will not exit before umounting and un-vnconfig'ing in the event&lt;br /&gt;
jailkill returns no processes killed. This is useful if you just want to umount&lt;br /&gt;
and vnconfig -u a jail you've already killed. It is intelligent in that it won't &lt;br /&gt;
try to umount or vnconfig -u if it's not necessary.&lt;br /&gt;
&lt;br /&gt;
== startjail ==&lt;br /&gt;
 startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
this will start vnconfig, mount (including linprocfs and null-mounts), and start a jail.&lt;br /&gt;
Essentially, it reads the jail’s relevant block from the right quad file and executes it.&lt;br /&gt;
It is intelligent in that it won’t try to mount or vnconfig if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== jpid ==&lt;br /&gt;
 jpid &amp;lt;pid&amp;gt;&lt;br /&gt;
displays information about a process – including which jail owns it.&lt;br /&gt;
It’s the equivalent of running cat /proc/&amp;lt;pid&amp;gt;/status&lt;br /&gt;
&lt;br /&gt;
== canceljail ==&lt;br /&gt;
 canceljail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will stop a jail (the equivalent of stopjail), check for backups (offer to remove them &lt;br /&gt;
from the backup server and the backup.config), rename the vnfile, remove the dir, and &lt;br /&gt;
edit quad/safe. If passed an optional 2nd argument, it will not exit upon failing to kill&lt;br /&gt;
and processes owned by the jail. This is useful if you just want to cancel a jail which &lt;br /&gt;
is already stopped.&lt;br /&gt;
&lt;br /&gt;
== jls ==&lt;br /&gt;
 jls [-v]&lt;br /&gt;
Lists all jails running:&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF IP Address      Hostname                     Path&lt;br /&gt;
 101  135 69.55.224.148   mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#REF is the number of references or procs(?) running&lt;br /&gt;
&lt;br /&gt;
Running with -v will give you all IPs assigned to each jail (7.2 up)&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF Hostname                     Path                                  IP Address(es)&lt;br /&gt;
 101  139 mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR 69.55.224.148 69.55.234.85&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== startalljails ==&lt;br /&gt;
 startalljails&lt;br /&gt;
7.2+ only. This will parse through quad1 and start all jails. It utilizes lockfiles so it won’t try to start a jail more than once- therefore multiple instances can be running in parallel without fear of starting a jail twice. If a jail startup gets stuck, you can ^C without fear of killing the script. IMPORTANT- before running startalljails you should make sure you ran preboot once as it will clear out all the lockfiles and enable startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== aaccheck.sh ==&lt;br /&gt;
 aaccheck.sh&lt;br /&gt;
displays the output of container list and task list from aaccli&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update jail scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== buildsafe ==&lt;br /&gt;
 buildsafe&lt;br /&gt;
creates safe files based on quads (automatically removing the fsck’s). This will destructively overwrite safe files&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be setup as a cronjob to watch processes on a jail when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== checkprio.pl ==&lt;br /&gt;
 checkprio.pl&lt;br /&gt;
will look for any process (other than the current shell’s csh, sh, sshd procs) with a non-normal priority and normalize it&lt;br /&gt;
&lt;br /&gt;
== diskusagemon == &lt;br /&gt;
 diskusagemon &amp;lt;mount point&amp;gt; &amp;lt;1k blocks&amp;gt;&lt;br /&gt;
watches a mount point’s disk use, when it reaches the level specified in the 2nd argument,&lt;br /&gt;
it exits. This is useful when doing a restore and you want to be paged as it’s nearing completion.&lt;br /&gt;
Best used as: &amp;lt;tt&amp;gt;diskusagemon /asd/asd 1234; pagexxx&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dumprestore ==&lt;br /&gt;
 dumprestore &amp;lt;dumpfile&amp;gt;&lt;br /&gt;
this is a perl expect script which automatically enters ‘1’ and ‘y’. Note: on large restores it seems to cause restore to fail to set owner permissions.&lt;br /&gt;
&lt;br /&gt;
== g ==&lt;br /&gt;
 g &amp;lt;search&amp;gt;&lt;br /&gt;
greps the quad/safe files for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gather.pl ==&lt;br /&gt;
 gather.pl&lt;br /&gt;
gathers up data about jails configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== ipfwbackup ==&lt;br /&gt;
 ipfwbackup&lt;br /&gt;
writes ipfw traffic count data to a logfile&lt;br /&gt;
&lt;br /&gt;
== ipfwreset ==&lt;br /&gt;
 ipfwreset&lt;br /&gt;
writes ipfw traffic count data to a logfile and resets counters to 0&lt;br /&gt;
&lt;br /&gt;
== js ==&lt;br /&gt;
 js&lt;br /&gt;
output varies by OS version, but generally provides information about the base jail:&lt;br /&gt;
- which vn’s are in use&lt;br /&gt;
- disk usage&lt;br /&gt;
- info about the contents of quads&lt;br /&gt;
- the # of inodes represented by the jails contained in the group (133.2 in the example below), and how many jails per data mount, as well as subtotals&lt;br /&gt;
- ips bound to the base machine but not in use by a jail&lt;br /&gt;
- free gvinum volumes, or unused vn’s or used md’s&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/local/jail/rc.d/quad1:&lt;br /&gt;
        /mnt/data1 133.2 (1)&lt;br /&gt;
        /mnt/data2 1040.5 (7)&lt;br /&gt;
        total 1173.7 (8)&lt;br /&gt;
/usr/local/jail/rc.d/quad2:&lt;br /&gt;
        /mnt/data1 983.4 (6)&lt;br /&gt;
        total 983.4 (6)&lt;br /&gt;
/usr/local/jail/rc.d/quad3:&lt;br /&gt;
        /mnt/data1 693.4 (4)&lt;br /&gt;
        /mnt/data2 371.6 (3)&lt;br /&gt;
        total 1065 (7)&lt;br /&gt;
/usr/local/jail/rc.d/quad4:&lt;br /&gt;
        /mnt/data1 466.6 (3)&lt;br /&gt;
        /mnt/data2 882.2 (5)&lt;br /&gt;
        total 1348.8 (8)&lt;br /&gt;
/mnt/data1: 2276.6 (14)&lt;br /&gt;
/mnt/data2: 2294.3 (15)&lt;br /&gt;
&lt;br /&gt;
Available IPs:&lt;br /&gt;
69.55.230.11 69.55.230.13 69.55.228.200&lt;br /&gt;
&lt;br /&gt;
Available volumes:&lt;br /&gt;
v78 /mnt/data2 2G&lt;br /&gt;
v79 /mnt/data2 2G&lt;br /&gt;
v80 /mnt/data2 2G&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg  - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== makevirginjail ==&lt;br /&gt;
 makevirginjail&lt;br /&gt;
Only on some systems, makes an empty jail (doesn&#039;t do restore step)&lt;br /&gt;
&lt;br /&gt;
== mb == &lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== notify.sh ==&lt;br /&gt;
 notify.sh&lt;br /&gt;
emails reboot@johncompanies.com – intended to be called at boot time to alert us to a machine which panics and reboots and isn’t caught by bb or castle.&lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatch ==&lt;br /&gt;
 orphanedbackupwatch&lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to delete them&lt;br /&gt;
&lt;br /&gt;
== postboot ==&lt;br /&gt;
 postboot&lt;br /&gt;
to be run after a machine reboot and quad/safe’s are done executing. It will:&lt;br /&gt;
* do chmod 666 on each jail’s /dev/null&lt;br /&gt;
* add ipfw counts&lt;br /&gt;
* run jailpsall (so you can see if a configured jail isn’t running)&lt;br /&gt;
&lt;br /&gt;
== preboot ==&lt;br /&gt;
 preboot&lt;br /&gt;
to be run before running quad/safe – checks for misconfigurations: &lt;br /&gt;
* a jail configured in a quad but not a safe&lt;br /&gt;
* a jail is listed more than once in a quad&lt;br /&gt;
* the ip assigned to a jail isn’t configured on the machine&lt;br /&gt;
* alias numbering skips in the rc.conf (resulting in the above)&lt;br /&gt;
* orphaned vnfile&#039;s that aren&#039;t mentioned in a quad/safe&lt;br /&gt;
* ip mismatches between dir/vnfile name and the jail’s ip&lt;br /&gt;
* dir/vnfiles&#039;s in quad/safe that don’t exist &lt;br /&gt;
&lt;br /&gt;
== quadanalyze.pl ==&lt;br /&gt;
 quadanalyze.pl&lt;br /&gt;
called by js, produces the info (seen above with js explanation) about the contents of quad (inode count, # of jails, etc.)&lt;br /&gt;
&lt;br /&gt;
== rsync.backup ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== taskdone ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was executed as the subject&lt;br /&gt;
&lt;br /&gt;
== topten ==&lt;br /&gt;
 topten&lt;br /&gt;
summarizes the top 10 traffic users (called by ipfwreset)&lt;br /&gt;
&lt;br /&gt;
== trafficgather.pl ==&lt;br /&gt;
 trafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by jail to support@johncompanies.com and payments@johncompanies.com. The optional argument is a year and month (must be in the past); if not passed, it assumes last month. Relies on traffic logs created by ipfwreset and ipfwbackup&lt;br /&gt;
&lt;br /&gt;
== trafficwatch.pl ==&lt;br /&gt;
 trafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a jail reaches the warning level (35G) and the limit (40G). We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== trafstats ==&lt;br /&gt;
 trafstats&lt;br /&gt;
writes ipfw traffic usage info by jail to a file called jc_traffic_dump in each jail’s / dir&lt;br /&gt;
&lt;br /&gt;
== truncate_jailmake ==&lt;br /&gt;
 truncate_jailmake&lt;br /&gt;
a version of jailmake which creates truncated vnfiles.&lt;br /&gt;
&lt;br /&gt;
== vb ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/bin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs (freebsd) ==&lt;br /&gt;
 vs&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/safe&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
vq&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: vi /usr/local/jail/rc.d/quad&amp;lt;n&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dumpremote ==&lt;br /&gt;
 dumpremote &amp;lt;user@machine&amp;gt; &amp;lt;/remote/location/file-dump&amp;gt; &amp;lt;vnX&amp;gt;&lt;br /&gt;
ex: dumpremote user@10.1.4.117 /mnt/data3/remote.echoditto.com-dump 7&lt;br /&gt;
this will dump a vn filesystem to a remote machine and location&lt;br /&gt;
&lt;br /&gt;
== oversellcheck ==&lt;br /&gt;
 oversellcheck&lt;br /&gt;
displays how much a disk is oversold or undersold taking into account truncated vn files. Only for use on 4.x systems&lt;br /&gt;
&lt;br /&gt;
== mvbackups (freebsd) ==&lt;br /&gt;
 mvbackups &amp;lt;dir&amp;gt; (1.1.1.1-col00001-DIR) &amp;lt;target_machine&amp;gt; (jail1) &amp;lt;target_dir&amp;gt; (data1)&lt;br /&gt;
moves backups from one location to another on the backup server, and provides you with option to remove entries from current backup.config, and simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== jailnice ==&lt;br /&gt;
 jailnice &amp;lt;hostname&amp;gt;&lt;br /&gt;
applies &amp;lt;tt&amp;gt;renice 19 [PID]&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;rtprio 31 -[PID]&amp;lt;/tt&amp;gt; to each process in the given jail&lt;br /&gt;
&lt;br /&gt;
== dumpremoterestore ==&lt;br /&gt;
 dumpremoterestore &amp;lt;device&amp;gt; &amp;lt;ip of target machine&amp;gt; &amp;lt;dir on target machine&amp;gt;&lt;br /&gt;
ex: dumpremoterestore /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&lt;br /&gt;
dumps a device and restores it to a directory on a remote machine. Requires that you enable root ssh on the &lt;br /&gt;
remote machine.&lt;br /&gt;
&lt;br /&gt;
== psj ==&lt;br /&gt;
 psj&lt;br /&gt;
shows just the procs running on the base system – a ps auxw but without jail’d procs present&lt;br /&gt;
&lt;br /&gt;
== perc5iraidchk ==&lt;br /&gt;
 perc5iraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2950 systems with Perc5/6 controllers&lt;br /&gt;
&lt;br /&gt;
== perc4eraidchk ==&lt;br /&gt;
 perc4eraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2850 systems with Perc4e/Di controllers&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS =&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vm) ==&lt;br /&gt;
&lt;br /&gt;
This applies only to new virts &amp;gt;= 4.x&lt;br /&gt;
&lt;br /&gt;
grab ip from ipmap (if opened from the pending cust screen it should take you to the right block). You can also run vzlist -a to see what block is in use, generally. Try to find an IP that&#039;s in the same block of class C IP&#039;s already on the box.&lt;br /&gt;
&lt;br /&gt;
1. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
  also, ping the IP address.&lt;br /&gt;
&lt;br /&gt;
2. put CT on whichever partition has more space&lt;br /&gt;
  df -h&lt;br /&gt;
&lt;br /&gt;
3.  vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; ) &lt;br /&gt;
 vm col00009 69.55.230.238 centos.testdave.com /vz1 dsmith@johncompanies.com centos-6-x86_64 LM&lt;br /&gt;
&lt;br /&gt;
4. copy veid, dir, ip and password to pending customer screen. activate customer&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vemakexxx) ==&lt;br /&gt;
&lt;br /&gt;
This applies to older virts with old templates. This should probably not be used at all anymore.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. look thru hist for ip&lt;br /&gt;
&lt;br /&gt;
2. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
&lt;br /&gt;
3. put ve on whichever partition has more space&lt;br /&gt;
 vemakerh9 &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; &amp;lt;hostname&amp;gt; &amp;lt;mount&amp;gt; &amp;lt;email&amp;gt; [gb disk]; &amp;lt;256|384|512&amp;gt; &amp;lt;veid&amp;gt;&lt;br /&gt;
 vemakerh9 866 69.55.226.109 ngentu.com /vz1 ayo@ngantu.com,asd@asd.com 5; 256 866&lt;br /&gt;
&lt;br /&gt;
4. copy (veid), dir, and ip to pending customer screen (pass set to p455agfa)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: We use VEID (Virtual Environment ID) and CTID (Container ID) interchangeably. Similarly, VE and CT. They mean the same thing.&lt;br /&gt;
VZPP = VirtuoZzo Power Panel (the control panel for each CT)&lt;br /&gt;
&lt;br /&gt;
All linux systems exist in /vz, /vz1 or /vz2 - since each linux machine holds roughly 60-90 customers, there will be roughly 30-45 in each partition.&lt;br /&gt;
&lt;br /&gt;
The actual filesystem of the system in question is in:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/private/(VEID)&lt;br /&gt;
&lt;br /&gt;
Where VEID is the identifier for that system - an all-numeric string larger than 100.&lt;br /&gt;
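Since the directory names under private are themselves the numeric VEIDs, the VEIDs present on a partition can be enumerated with something like the helper below. This is an illustrative convenience, not a shipped tool; the function name is made up.

```shell
# List the numeric (VEID) directories under a private path, sorted numerically.
list_veids() {
    ls "$1" | grep -E '^[0-9]+$' | sort -n
}

# e.g. list_veids /vz1/private
```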
&lt;br /&gt;
The actual mounted and running systems are in the corresponding:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/root/(VEID)&lt;br /&gt;
&lt;br /&gt;
But we rarely interact with any system from this mount point.&lt;br /&gt;
&lt;br /&gt;
You should never need to touch the root portion of their system – however you can traverse their filesystem by going to &amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/root&amp;lt;/tt&amp;gt; (&amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/fs/root&amp;lt;/tt&amp;gt; on 4.x systems). The root of their filesystem is in that directory, and their entire system is underneath it.&lt;br /&gt;
&lt;br /&gt;
Every VE has a startup script in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts&amp;lt;/tt&amp;gt;  (which is symlinked as &amp;lt;tt&amp;gt;/vzconf&amp;lt;/tt&amp;gt; on all systems) - the VE startup script is simply named &amp;lt;tt&amp;gt;(VEID).conf&amp;lt;/tt&amp;gt; - it contains all the system parameters for that VE:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Configuration file generated by vzsplit for 60 VE&lt;br /&gt;
# on HN with total amount of physical mem 2011 Mb&lt;br /&gt;
&lt;br /&gt;
VERSION=&amp;quot;2&amp;quot;&lt;br /&gt;
CLASSID=&amp;quot;2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ONBOOT=&amp;quot;yes&amp;quot;&lt;br /&gt;
&lt;br /&gt;
KMEMSIZE=&amp;quot;8100000:8200000&amp;quot;&lt;br /&gt;
LOCKEDPAGES=&amp;quot;322:322&amp;quot;&lt;br /&gt;
PRIVVMPAGES=&amp;quot;610000:615000&amp;quot;&lt;br /&gt;
SHMPAGES=&amp;quot;33000:34500&amp;quot;&lt;br /&gt;
NUMPROC=&amp;quot;410:415&amp;quot;&lt;br /&gt;
PHYSPAGES=&amp;quot;0:2147483647&amp;quot;&lt;br /&gt;
VMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
OOMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
NUMTCPSOCK=&amp;quot;1210:1215&amp;quot;&lt;br /&gt;
NUMFLOCK=&amp;quot;107:117&amp;quot;&lt;br /&gt;
NUMPTY=&amp;quot;19:19&amp;quot;&lt;br /&gt;
NUMSIGINFO=&amp;quot;274:274&amp;quot;&lt;br /&gt;
TCPSNDBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
TCPRCVBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
OTHERSOCKBUF=&amp;quot;900000:950000&amp;quot;&lt;br /&gt;
DGRAMRCVBUF=&amp;quot;200000:200000&amp;quot;&lt;br /&gt;
NUMOTHERSOCK=&amp;quot;650:660&amp;quot;&lt;br /&gt;
DCACHE=&amp;quot;786432:818029&amp;quot;&lt;br /&gt;
NUMFILE=&amp;quot;7500:7600&amp;quot;&lt;br /&gt;
AVNUMPROC=&amp;quot;51:51&amp;quot;&lt;br /&gt;
IPTENTRIES=&amp;quot;155:155&amp;quot;&lt;br /&gt;
DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
DISKINODES=&amp;quot;400000:420000&amp;quot;&lt;br /&gt;
CPUUNITS=&amp;quot;1412&amp;quot;&lt;br /&gt;
QUOTAUGIDLIMIT=&amp;quot;2000&amp;quot;&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/636&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/636&amp;quot;&lt;br /&gt;
NAMESERVER=&amp;quot;69.55.225.225 69.55.230.3&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;vzredhat-7.3/20030305&amp;quot;&lt;br /&gt;
VE_TYPE=&amp;quot;regular&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.225.229&amp;quot;&lt;br /&gt;
HOSTNAME=&amp;quot;textengine.net&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the hostname is set here, the disk space is set here, the number of inodes, the number of files that can be open, the number of tcp sockets, etc. - all are set here.&lt;br /&gt;
&lt;br /&gt;
In fact, everything that can be set on this customer system is set in this conf file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All interaction with the customer system is done with the VEID.  You start the system by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl start 999&lt;br /&gt;
&lt;br /&gt;
You stop it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl stop 999&lt;br /&gt;
&lt;br /&gt;
You execute commands in it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl exec 999 df -k&lt;br /&gt;
&lt;br /&gt;
You enter into it, via a root-shell backdoor with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and you set parameters for the system, while it is still running, with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; is the most commonly used command - we have aliased &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; since we use it so often. We’ll continue to use &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; in our examples, but feel free to use just &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say the user wants more diskspace.  You can cat their conf file and see:&lt;br /&gt;
&lt;br /&gt;
 DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So right now they have 4gigs of space.  You can then change it to 6 with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
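As a sanity check on those numbers: DISKSPACE is counted in 1K blocks, so 4 GB is exactly 4194304 blocks, and the 6100000:6200000 pair above is a round approximation just under 6 GB (exactly 6291456 blocks). A quick converter, for illustration only:

```shell
# Convert whole gigabytes to the 1K-block units DISKSPACE uses.
gb_to_blocks() {
    echo $(( $1 * 1024 * 1024 ))
}

gb_to_blocks 4   # -> 4194304
gb_to_blocks 6   # -> 6291456
```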
&lt;br /&gt;
IMPORTANT:  all issuances of the vzctl set command need to end with &amp;lt;tt&amp;gt;--save&amp;lt;/tt&amp;gt; - if they don&#039;t, the setting will be set, but it will not be saved to the conf file, and they will not have those settings next time they boot.&lt;br /&gt;
&lt;br /&gt;
All of the tunables in the conf file can be set with the vzctl set command.  Note that in the conf file, and on the vzctl set command line, we always issue two numbers separated by a colon - that is because we are setting the soft and hard limits.  Always set the hard limit slightly above the soft limit, as you see it is in the conf file for all those settings.&lt;br /&gt;
&lt;br /&gt;
There are also things you can set with `&amp;lt;tt&amp;gt;vzctl set&amp;lt;/tt&amp;gt;` that are not in the conf file as settings, per se.  For instance, you can add IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --save&lt;br /&gt;
&lt;br /&gt;
or multiple IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --ipadd 10.10.20.30 --save&lt;br /&gt;
&lt;br /&gt;
or change the hostname:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --hostname www.example.com --save&lt;br /&gt;
&lt;br /&gt;
You can even set the nameservers:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --nameserver 198.78.66.4 --nameserver 198.78.70.180 --save&lt;br /&gt;
&lt;br /&gt;
Although you probably will never do that.&lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --disabled yes --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (&amp;lt;=3.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --onboot=no --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from using his control panel:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --offline_management=no --save &lt;br /&gt;
&lt;br /&gt;
You can suspend a VPS, so it can be resumed in the same state it was in when it was stopped (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl suspend 999&lt;br /&gt;
&lt;br /&gt;
and to resume it:&lt;br /&gt;
&lt;br /&gt;
 vzctl resume 999&lt;br /&gt;
&lt;br /&gt;
to see who owns process:&lt;br /&gt;
 vzpid &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to mount up an unmounted ve:&lt;br /&gt;
 vzctl mount 827&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see network stats for CT&#039;s:&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One thing that sometimes comes up on older systems that we created with smaller defaults is that the system would run out of inodes.  The user will email and say they cannot create any more files or grow any files larger, but they will also say that they are not out of diskspace ... they are running:&lt;br /&gt;
&lt;br /&gt;
 df -k&lt;br /&gt;
&lt;br /&gt;
and seeing how much space is free - and they are not out of space.  They are most likely out of inodes - which they would see by running:&lt;br /&gt;
&lt;br /&gt;
 df -i&lt;br /&gt;
&lt;br /&gt;
So, the first thing you should do is enter their system with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and run:  &amp;lt;tt&amp;gt;df -i&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to confirm your theory.  Then exit their system.  Then simply cat their conf file and see what their inodes are set to (probably 200000:200000, since that was the old default on the older systems) and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskinodes 400000:400000 --save&lt;br /&gt;
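When eyeballing the &lt;tt&gt;df -i&lt;/tt&gt; output inside the CT, the IUse% column is the quick tell. A small helper to pull it out of a df data line (a hypothetical convenience for scripting the check, not part of the toolset):

```shell
# Print the inode-use percentage from a single `df -i` data line.
# df -i columns: Filesystem Inodes IUsed IFree IUse% Mounted-on
inode_pct() {
    echo "$1" | awk '{ gsub(/%/, "", $5); print $5 }'
}

inode_pct "/dev/vzfs 200000 200000 0 100% /"   # -> 100
```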
&lt;br /&gt;
If they are not out of inodes, then a good possibility is that they have maxed out their numfile configuration variable, which controls how many files they can have in their system.  The current default is 7500 (which nobody has ever hit), but the old default was as low as 2000, so you would run something like:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --numfile 7500:7500 --save&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You cannot start or stop a VE if your pwd is its private (/vz/private/999) or root (/vz/root/999) directories, or anywhere below them.&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (linux) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all ve’s back up as soon as possible. Note the time, you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there’s any kernel error messages output. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log – these will also likely need to be sent to virtuozzo for interpretation. If the messages are spewing too fast, hit ^O + H to start a screen log dump which you can observe after the machine is rebooted. Additionally, if the machine is responsive, you can get a trace to send to virtuozzo by hooking up a kvm and entering these 3 sequences:&lt;br /&gt;
&amp;lt;pre&amp;gt;alt+print screen+m&lt;br /&gt;
alt+print screen+p&lt;br /&gt;
alt+print screen+t&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If there are no messages, the machine may just be really busy- wait a bit (5-10min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. if you can’t ssh into the DRAC card and issue racadm serveraction hardreset), then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console (&amp;lt;tt&amp;gt;tip virtxx&amp;lt;/tt&amp;gt;) immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts, look out for the RAID card output- does everything look healthy? The output may be scrambled; look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple of times during the boot up. The reason you want to quickly re-attach is two-fold: 1. if you don’t reattach quickly you won’t get any console output; 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts an (extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
=== Start containers/VE&#039;s/VPSs ===&lt;br /&gt;
When the machine begins to start VE’s, it’s safe to leave the console and login via ssh. All virts should be set to auto-start all the VEs after a crash. Further, most (newer) virts are set to “fastboot” their VE’s (to find out, do:&lt;br /&gt;
 grep -i fast /etc/sysconfig/vz &lt;br /&gt;
and look for &amp;lt;tt&amp;gt;VZFASTBOOT=yes&amp;lt;/tt&amp;gt;). If this was set prior to the machine’s crash (setting it after the machine boots will have no effect until the vz service is restarted), it will start each VE as fast as possible, in serial, then go through each VE (serially), shutting it down, running a vzquota (disk usage) check, and bringing it back up. The benefit is that all VE’s are brought up quickly (within 15min or so, depending on the number); the downside is that a customer watching closely will notice 2 outages: 1st the machine crash, 2nd their quota check (which will be a much shorter downtime, on the order of a few minutes). &lt;br /&gt;
&lt;br /&gt;
Where “fastboot” is not set to yes (i.e. on quar1), vz will start them consecutively, checking the quotas one at a time, and the 60th VE may not start until an hour or two later - this is not acceptable.&lt;br /&gt;
&lt;br /&gt;
The good news is, if you run vzctl start for a VE that is already started, you will simply get an error: &amp;lt;tt&amp;gt;VE is already started&amp;lt;/tt&amp;gt;.  Further, if you attempt to vzctl start a VE that is in the process of being started, you will simply get an error: &amp;lt;tt&amp;gt;unable to lock VE&amp;lt;/tt&amp;gt;.  So, there is no danger in simply running scripts to start smaller sets of VEs.  If the system is not autostarting, then there is no issue; and even if it is, when it conflicts, one process (yours or the autostart) will lose and just move on to the next one.&lt;br /&gt;
&lt;br /&gt;
A script has been written to assist with ve starts: [[#startvirt.pl|startvirt.pl]], which will start 6 ve’s at once until there are none left.  If startvirt.pl is used on a system where “fastboot” was on, it will circumvent the fastboot for ve’s started by startvirt.pl – they will go through the complete quota check before starting – therefore this is not advisable when a system has crashed. When a system is booted cleanly and there&#039;s no need for vzquota checks, startvirt.pl is safe and advisable to run.&lt;br /&gt;
&lt;br /&gt;
=== Make sure all containers are running ===&lt;br /&gt;
You can quickly get a feel for how many ve’s are started by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt4 log]# vs&lt;br /&gt;
VEID 16066 exist mounted running&lt;br /&gt;
VEID 16067 exist mounted running&lt;br /&gt;
VEID 4102 exist mounted running&lt;br /&gt;
VEID 4112 exist mounted running&lt;br /&gt;
VEID 4116 exist mounted running&lt;br /&gt;
VEID 4122 exist mounted running&lt;br /&gt;
VEID 4123 exist mounted running&lt;br /&gt;
VEID 4124 exist mounted running&lt;br /&gt;
VEID 4132 exist mounted running&lt;br /&gt;
VEID 4148 exist mounted running&lt;br /&gt;
VEID 4151 exist mounted running&lt;br /&gt;
VEID 4155 exist mounted running&lt;br /&gt;
VEID 42 exist mounted running&lt;br /&gt;
VEID 432 exist mounted running&lt;br /&gt;
VEID 434 exist mounted running&lt;br /&gt;
VEID 442 exist mounted running&lt;br /&gt;
VEID 450 exist mounted running&lt;br /&gt;
VEID 452 exist mounted running&lt;br /&gt;
VEID 453 exist mounted running&lt;br /&gt;
VEID 454 exist mounted running&lt;br /&gt;
VEID 462 exist mounted running&lt;br /&gt;
VEID 463 exist mounted running&lt;br /&gt;
VEID 464 exist mounted running&lt;br /&gt;
VEID 465 exist mounted running&lt;br /&gt;
VEID 477 exist mounted running&lt;br /&gt;
VEID 484 exist mounted running&lt;br /&gt;
VEID 486 exist mounted running&lt;br /&gt;
VEID 490 exist mounted running&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So to see how many ve’s have started:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep running | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And to see how many haven’t:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep down | wc -l&lt;br /&gt;
     0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And how many we should have running:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another tool you can use to see which ve’s have started, among other things, is [[#vzstat|vzstat]]. It will give you CPU, memory, and other stats on each ve and the overall system. It’s a good thing to watch as ve’s are starting (note the VENum parameter, which tells you how many have started):&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;4:37pm, up 3 days,  5:31,  1 user, load average: 1.57, 1.68, 1.79&lt;br /&gt;
VENum 40, procs 1705: running 2, sleeping 1694, unint 0, zombie 9, stopped 0&lt;br /&gt;
CPU [ OK ]: VEs  57%, VE0   0%, user   8%, sys   7%, idle  85%, lat(ms) 412/2&lt;br /&gt;
Mem [ OK ]: total 6057MB, free 9MB/54MB (low/high), lat(ms) 0/0&lt;br /&gt;
Swap [ OK ]: tot 6142MB, free 4953MB, in 0.000MB/s, out 0.000MB/s&lt;br /&gt;
Net [ OK ]: tot: in  0.043MB/s  402pkt/s, out  0.382MB/s 4116pkt/s&lt;br /&gt;
Disks [ OK ]: in 0.002MB/s, out 0.000MB/s&lt;br /&gt;
&lt;br /&gt;
  VEID ST    %VM     %KM         PROC    CPU     SOCK FCNT MLAT IP&lt;br /&gt;
     1 OK 1.0/17  0.0/0.4    0/32/256 0.0/0.5 39/1256    0    9 69.55.227.152&lt;br /&gt;
    21 OK 1.3/39  0.1/0.2    0/46/410 0.2/2.8 23/1860    0    6 69.55.239.60&lt;br /&gt;
   133 OK 3.1/39  0.1/0.3    1/34/410 6.3/2.8 98/1860    0    0 69.55.227.147&lt;br /&gt;
   263 OK 2.3/39  0.1/0.2    0/56/410 0.3/2.8 34/1860    0    1 69.55.237.74&lt;br /&gt;
   456 OK  17/39  0.1/0.2   0/100/410 0.1/2.8 48/1860    0   11 69.55.236.65&lt;br /&gt;
   476 OK 0.6/39  0.0/0.2    0/33/410 0.1/2.8 96/1860    0   10 69.55.227.151&lt;br /&gt;
   524 OK 1.8/39  0.1/0.2    0/33/410 0.0/2.8 28/1860    0    0 69.55.227.153&lt;br /&gt;
   594 OK 3.1/39  0.1/0.2    0/45/410 0.0/2.8 87/1860    0    1 69.55.239.40&lt;br /&gt;
   670 OK 7.7/39  0.2/0.3    0/98/410 0.0/2.8 64/1860    0  216 69.55.225.136&lt;br /&gt;
   691 OK 2.0/39  0.1/0.2    0/31/410 0.0/0.7 25/1860    0    1 69.55.234.96&lt;br /&gt;
   744 OK 0.1/17  0.0/0.5    0/10/410 0.0/0.7  7/1860    0    6 69.55.224.253&lt;br /&gt;
   755 OK 1.1/39  0.0/0.2    0/27/410 0.0/2.8 33/1860    0    0 192.168.1.4&lt;br /&gt;
   835 OK 1.1/39  0.0/0.2    0/19/410 0.0/2.8  5/1860    0    0 69.55.227.134&lt;br /&gt;
   856 OK 0.3/39  0.0/0.2    0/13/410 0.0/2.8 16/1860    0    0 69.55.227.137&lt;br /&gt;
   936 OK 3.2/52  0.2/0.4    0/75/410 0.2/0.7 69/1910    0    8 69.55.224.181&lt;br /&gt;
  1020 OK 3.9/39  0.1/0.2    0/60/410 0.1/0.7 55/1860    0    8 69.55.227.52&lt;br /&gt;
  1027 OK 0.3/39  0.0/0.2    0/14/410 0.0/2.8 17/1860    0    0 69.55.227.83&lt;br /&gt;
  1029 OK 1.9/39  0.1/0.2    0/48/410 0.2/2.8 25/1860    0    5 69.55.227.85&lt;br /&gt;
  1032 OK  12/39  0.1/0.4    0/80/410 0.0/2.8 41/1860    0    8 69.55.227.90&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you are all done, you will want to make sure that all the VEs really did get started, run vs one more time.&lt;br /&gt;
&lt;br /&gt;
Note the time all ve’s are back up and enter that into and save the crash log entry.&lt;br /&gt;
&lt;br /&gt;
Occasionally, a ve will not start automatically. The most common reason for a ve not to come up normally is that the ve was at its disk limit before the crash, and will not start since it’s over the limit. To overcome this, set the disk space to the current usage level (the system will give this to you when it fails to start), start the ve, then re-set the disk space back to the prior level. Lastly, contact the customer to let them know they’re out of disk (or allocate more disk if they&#039;re entitled to more).&lt;br /&gt;
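That recovery sequence can be sketched as a small wrapper. This is only an illustration of the steps described above, not a shipped script: the function name is invented, and the vzctl binary is indirected through VZCTL so the flow can be dry-run with a stub.

```shell
VZCTL=${VZCTL:-vzctl}

# Temporarily raise a CT's disk limit to its current usage so it can start,
# then restore the original soft:hard pair from the conf file.
recover_overquota() {
    veid=$1
    current=$2        # usage reported when the start failed
    orig_limits=$3    # original soft:hard pair, e.g. 4194304:4613734
    $VZCTL set "$veid" --diskspace "$current:$current" --save
    $VZCTL start "$veid"
    $VZCTL set "$veid" --diskspace "$orig_limits" --save
}
```

Running it with `VZCTL=echo` prints the three vzctl commands it would issue, which is a handy way to double-check the arguments before touching a live CT.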
&lt;br /&gt;
== Hitting performance barriers and fixing them ==&lt;br /&gt;
&lt;br /&gt;
There are multiple modes virtuozzo offers to allocate resources to a ve. We utilize 2: SLM and UBC parameters.&lt;br /&gt;
On our 4.x systems, we use all SLM – it’s simpler to manage and understand. There are a few systems on virt19/18 that may also use SLM. Everything else uses UBC. &lt;br /&gt;
You can tell a SLM ve by:&lt;br /&gt;
&lt;br /&gt;
 SLMMODE=&amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in their conf file. &lt;br /&gt;
&lt;br /&gt;
TODO: detail SLM modes and parameters.&lt;br /&gt;
&lt;br /&gt;
If someone is in SLM mode and they hit memory resource limits, they simply need to upgrade to more memory.&lt;br /&gt;
&lt;br /&gt;
The following applies to everyone else (UBC).&lt;br /&gt;
&lt;br /&gt;
Customers will often email and say that they are getting out of memory errors - a common one is &amp;quot;cannot fork&amp;quot; ... basically, anytime you see something odd like this, it means they are hitting one of their limits that is in place in their conf file.&lt;br /&gt;
&lt;br /&gt;
The conf file, however, simply shows their limits - how do we know what they are currently at?&lt;br /&gt;
&lt;br /&gt;
The answer is a file called /proc/user_beancounters - this file contains the current status (and peaks) of their performance settings, and also counts how many times they have hit the barrier. The output of the file looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;764: kmemsize         384113     898185    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          0        322        322          0&lt;br /&gt;
     privvmpages        1292       7108     610000     615000          0&lt;br /&gt;
     shmpages            270        528      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc               8         23        410        415          0&lt;br /&gt;
     physpages            48       5624          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages        641       6389      13019 2147483647          0&lt;br /&gt;
     numtcpsock            3         21       1210       1215          0&lt;br /&gt;
     numflock              1          3        107        117          0&lt;br /&gt;
     numpty                0          2         19         19          0&lt;br /&gt;
     numsiginfo            0          4        274        274          0&lt;br /&gt;
     tcpsndbuf             0      80928    1800000    1900000          0 &lt;br /&gt;
     tcprcvbuf             0     108976    1800000    1900000          0&lt;br /&gt;
     othersockbuf       2224      37568     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4272     200000     200000          0&lt;br /&gt;
     numothersock          3          9        650        660          0&lt;br /&gt;
     dcachesize        53922     100320     786432     818029          0&lt;br /&gt;
     numfile             161        382       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first column is the name of the counter in question - the same names we saw in the system’s conf file. The second column is the _current_ value of that counter, the third column is the maximum that counter has ever reached, the fourth column is the soft limit, and the fifth column is the hard limit (the same as the numbers in the system’s conf file).&lt;br /&gt;
&lt;br /&gt;
The sixth number is the failcount - how many times the current usage has risen to hit the barrier.  It will increase as soon as the current usage hits the soft limit.&lt;br /&gt;
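&lt;br /&gt;
Failing counters are easy to spot with a small filter - a sketch (the failing helper name is made up) that prints any row whose failcnt is non-zero:&lt;br /&gt;
&lt;br /&gt;
```shell
# failing: print beancounter rows whose failcnt (the last column) is non-zero.
# Feed it one ve's beancounter lines (e.g. the output of vzs) on stdin.
failing() {
    awk 'NF >= 6 && $NF + 0 > 0 { print $(NF-5), "failcnt=" $NF }'
}
```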
&lt;br /&gt;
The problem with /proc/user_beancounters is that it contains that set of data for every running VE - so you can&#039;t just cat /proc/user_beancounters - the output is too long and includes info for every other running system.&lt;br /&gt;
&lt;br /&gt;
You can vzctl enter the system and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 9999&lt;br /&gt;
 cat /proc/user_beancounters&lt;br /&gt;
&lt;br /&gt;
inside their system, and you will just see the stats for their particular system, but entering their system every time you want to see it is cumbersome.&lt;br /&gt;
&lt;br /&gt;
So, I wrote a simple script called &amp;quot;vzs&amp;quot; which simply greps for the VEID and prints the next 20 or so lines (however many lines the output has) after it.  For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzs 765:&lt;br /&gt;
765: kmemsize        2007936    2562780    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          8        322        322          0&lt;br /&gt;
     privvmpages       26925      71126     610000     615000          0&lt;br /&gt;
     shmpages          16654      16750      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc              41         57        410        415          0&lt;br /&gt;
     physpages          1794      49160          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages       4780      51270      13019 2147483647          0&lt;br /&gt;
     numtcpsock           23         37       1210       1215          0&lt;br /&gt;
     numflock             17         39        107        117          0&lt;br /&gt;
     numpty                1          3         19         19          0&lt;br /&gt;
     numsiginfo            0          6        274        274          0&lt;br /&gt;
     tcpsndbuf         22240     333600    1800000    1900000          0&lt;br /&gt;
     tcprcvbuf             0     222656    1800000    1900000          0&lt;br /&gt;
     othersockbuf     104528     414944     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4448     200000     200000          0&lt;br /&gt;
     numothersock         73        105        650        660          0&lt;br /&gt;
     dcachesize       247038     309111     786432     818029          0&lt;br /&gt;
     numfile             904       1231       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That showed us just the portion of /proc/user_beancounters for system 765.&lt;br /&gt;
&lt;br /&gt;
When you run the vzs command, always add a : after the VEID.&lt;br /&gt;
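&lt;br /&gt;
The script itself amounts to little more than a grep with trailing context. A rough sketch (24 lines of context is approximate, and the optional file argument is only there for illustration; the real script reads /proc/user_beancounters):&lt;br /&gt;
&lt;br /&gt;
```shell
# vzs (sketch): show just one ve's slice of the beancounters file.
# $1 is the VEID with its trailing colon (e.g. 765:).
vzs() {
    grep -A 24 "^ *${1} " "${2:-/proc/user_beancounters}"
}
```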
&lt;br /&gt;
So, if a customer complains about some out of memory errors, or no more files, or no more ptys, or just has a nonspecific complaint about processes dying, etc., the very first thing you need to do is check their beancounters with vzs.  Usually you will spot an item that has a high failcount and needs to be raised.&lt;br /&gt;
&lt;br /&gt;
At that point you could simply up the counter with `vzctl set`.  Generally pick a number 10-20% higher than the old one, and make the hard limit slightly larger than the soft limit. However, our systems now come in several levels, and those levels have different memory allocations. If someone is complaining about something other than a memory limit (pty, numiptent, numflock), it’s generally safe to increase it, at least to the same level as what’s in the /vzconf/4unlimited file on the newest virt. If someone is hitting a memory limit, first make sure they are given what they deserve:&lt;br /&gt;
&lt;br /&gt;
(refer to mgmt -&amp;gt; payments -&amp;gt; packages)&lt;br /&gt;
&lt;br /&gt;
To set those levels, you use the [[#setmem|setmem]] command. &lt;br /&gt;
&lt;br /&gt;
The alternate (DEPRECATED) method would be to use one of these commands:&lt;br /&gt;
256 &amp;lt;veid&amp;gt;&lt;br /&gt;
300 &amp;lt;veid&amp;gt;&lt;br /&gt;
384 &amp;lt;veid&amp;gt;&lt;br /&gt;
512 &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the levels were not right (you’d run vzs &amp;lt;veid&amp;gt; before and after to see the effect), tell the customer they’ve been adjusted and be done with it. If the levels were right, tell the customer they must upgrade to a higher package, tell them how to see their level (control panel), and that they can reboot their system to escape this lockup condition.&lt;br /&gt;
&lt;br /&gt;
Customers can also complain that their site is totally unreachable, or complain that it is down ... if the underlying machine is up, and all seems well, you may notice in the beancounters that network-specific counters are failing - such as numtcpsock, tcpsndbuf or tcprcvbuf.  This will keep them from talking on the network and make it seem like their system is down.  Again, just up the limits and things should be fine.&lt;br /&gt;
&lt;br /&gt;
On virts 1-4, you should first look at the default settings for that item on a later virt, such as virt8 - we have increased the defaults a lot since the early machines.  So, if you are going to up a counter on virt2, raise it to the new default you see on virt8 rather than by 10-20%.&lt;br /&gt;
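&lt;br /&gt;
The 10-20% rule of thumb above can be sketched as a tiny helper (bump is a made-up name; here the hard limit is placed 2% above the new soft limit):&lt;br /&gt;
&lt;br /&gt;
```shell
# bump: given a counter's current soft limit, print a new soft:hard pair -
# soft raised 20%, hard slightly (2%) above the new soft (integer math).
bump() {
    soft=$1
    newsoft=$(( soft + soft / 5 ))
    newhard=$(( newsoft + newsoft / 50 ))
    echo "${newsoft}:${newhard}"
}
```
&lt;br /&gt;
For numflock at 107, for example, bump 107 prints 128:130, which you would then apply with vzctl set &amp;lt;veid&amp;gt; --numflock 128:130 --save.&lt;br /&gt;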
&lt;br /&gt;
== Moving a VE to another virt (migrate/migrateonline) ==&lt;br /&gt;
&lt;br /&gt;
This will take a while to complete - and it is best to do this at night when the load is light on both machines.&lt;br /&gt;
&lt;br /&gt;
There are different methods for this, depending on which version of virtuozzo is installed on the src. and dst. virt. &lt;br /&gt;
To check which version is running: &lt;br /&gt;
 [root@virt12 private]# cat /etc/virtuozzo-release&lt;br /&gt;
 Virtuozzo release 2.6.0&lt;br /&gt;
&lt;br /&gt;
Ok, let&#039;s say that the VE is 1212, and vital stats are:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1212&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/1212&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/1212&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;fedora-core-2/20040903&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.229.84&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;devel-fc2/20040903 php-fc2/20040813 mysql-fc2/20040812 postgresql-fc2/20040813 mod_perl-fc2/20040812 mod_ssl-fc2/20040811 jre-fc2/20040823 jdk-fc2/20040823 mailman-fc2/20040823 analog-fc2/20040824 proftpd-fc2/20040818 tomcat-fc2/20040823 usermin-fc2/20040909 webmin-fc2/20040909 uw-imap-fc2/20040830 phpBB-fc2/20040831 spamassassin-fc2/20040910 PostNuke-fc2/20040824 sl-webalizer-fc2/20040&lt;br /&gt;
818&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# vzctl exec 1212 df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  405M  3.7G  10% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From this you can see that he’s using (and will minimally need free on the dst server) ~400MB, and he’s running on a Fedora 2 template, version 20040903. He’s also got a bunch of other templates installed. It is &#039;&#039;&#039;vital&#039;&#039;&#039; that &#039;&#039;&#039;all&#039;&#039;&#039; these templates exist on the dst system. To confirm that, on the dst system run:&lt;br /&gt;
&lt;br /&gt;
For &amp;lt; 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt14 private]# vzpkgls | grep fc2&lt;br /&gt;
devel-fc2 20040903&lt;br /&gt;
PostNuke-fc2 20040824&lt;br /&gt;
analog-fc2 20040824&lt;br /&gt;
awstats-fc2 20040824&lt;br /&gt;
bbClone-fc2 20040824&lt;br /&gt;
jdk-fc2 20040823&lt;br /&gt;
jre-fc2 20040823&lt;br /&gt;
mailman-fc2 20040823&lt;br /&gt;
mod_frontpage-fc2 20040816&lt;br /&gt;
mod_perl-fc2 20040812&lt;br /&gt;
mod_ssl-fc2 20040811&lt;br /&gt;
mysql-fc2 20040812&lt;br /&gt;
openwebmail-fc2 20040817&lt;br /&gt;
php-fc2 20040813&lt;br /&gt;
phpBB-fc2 20040831&lt;br /&gt;
postgresql-fc2 20040813&lt;br /&gt;
proftpd-fc2 20040818&lt;br /&gt;
sl-webalizer-fc2 20040818&lt;br /&gt;
spamassassin-fc2 20040910&lt;br /&gt;
tomcat-fc2 20040823&lt;br /&gt;
usermin-fc2 20040909&lt;br /&gt;
uw-imap-fc2 20040830&lt;br /&gt;
webmin-fc2 20040909&lt;br /&gt;
[root@virt14 private]# vzpkgls | grep fedora&lt;br /&gt;
fedora-core-1 20040121 20040818&lt;br /&gt;
fedora-core-devel-1 20040121 20040818&lt;br /&gt;
fedora-core-2 20040903&lt;br /&gt;
[root@virt14 private]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For these older systems, you can simply match up the date on the template. &lt;br /&gt;
&lt;br /&gt;
For &amp;gt;= 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt19 /vz2/private]# vzpkg list&lt;br /&gt;
centos-5-x86                    2008-01-07 22:05:57&lt;br /&gt;
centos-5-x86    devel&lt;br /&gt;
centos-5-x86    jre&lt;br /&gt;
centos-5-x86    jsdk&lt;br /&gt;
centos-5-x86    mod_perl&lt;br /&gt;
centos-5-x86    mod_ssl&lt;br /&gt;
centos-5-x86    mysql&lt;br /&gt;
centos-5-x86    php&lt;br /&gt;
centos-5-x86    plesk9&lt;br /&gt;
centos-5-x86    plesk9-antivirus&lt;br /&gt;
centos-5-x86    plesk9-api&lt;br /&gt;
centos-5-x86    plesk9-atmail&lt;br /&gt;
centos-5-x86    plesk9-backup&lt;br /&gt;
centos-5-x86    plesk9-horde&lt;br /&gt;
centos-5-x86    plesk9-mailman&lt;br /&gt;
centos-5-x86    plesk9-mod-bw&lt;br /&gt;
centos-5-x86    plesk9-postfix&lt;br /&gt;
centos-5-x86    plesk9-ppwse&lt;br /&gt;
centos-5-x86    plesk9-psa-firewall&lt;br /&gt;
centos-5-x86    plesk9-psa-vpn&lt;br /&gt;
centos-5-x86    plesk9-psa-fileserver&lt;br /&gt;
centos-5-x86    plesk9-qmail&lt;br /&gt;
centos-5-x86    plesk9-sb-publish&lt;br /&gt;
centos-5-x86    plesk9-vault&lt;br /&gt;
centos-5-x86    plesk9-vault-most-popular&lt;br /&gt;
centos-5-x86    plesk9-watchdog&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On these newer systems, it&#039;s difficult to tell whether the template on the dst matches the src exactly. Just because a centos-5-x86 is listed on both servers doesn&#039;t mean all the same packages are on the dst. To truly know, you must perform a sample rsync:&lt;br /&gt;
&lt;br /&gt;
 rsync -avn /vz/template/centos/5/x86/ root@10.1.4.61:/vz/template/centos/5/x86/&lt;br /&gt;
&lt;br /&gt;
If you see a ton of output from the dry run command, then clearly there are some differences. You may opt to let the rsync complete (without dry run mode); the only downside is that you&#039;ve now used more space on the dst, and the centos template will be a mess of old and new data - difficult if not impossible to undo (if someday we wanted to reclaim the space).&lt;br /&gt;
&lt;br /&gt;
If you choose to merge templates, you should closely inspect the dry run output. You should also take care to exclude anything in the /config directory. For example:&lt;br /&gt;
&lt;br /&gt;
 rsync -av -e ssh --stats --exclude=x86/config  /vz/template/ubuntu/10.04/ root@10.1.4.62:/vz/template/ubuntu/10.04/&lt;br /&gt;
&lt;br /&gt;
Which will avoid this directory and contents:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 /vz2/private]# ls /vz/template/ubuntu/10.04/x86/config*&lt;br /&gt;
app  os&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is important to exclude since the config may differ on the destination, and we are really only interested in making sure the packages are there, not in overwriting a newer config with an older one.&lt;br /&gt;
&lt;br /&gt;
If the dst system was missing a template, you have 2 choices: &lt;br /&gt;
# put the missing template on the dst system. 2 choices here: &lt;br /&gt;
## Install the template from rpm (found under backup2: /mnt/data4/vzrpms/distro/) or &lt;br /&gt;
## rsync over the template (found under /vz/template) - see above&lt;br /&gt;
# put the ve on a system which has all the proper templates&lt;br /&gt;
&lt;br /&gt;
=== pre-seeding a migration ===&lt;br /&gt;
&lt;br /&gt;
When migrating a customer (or several), depending on how much data there is to transfer, it can take some time. Further, it can be difficult to gauge when a migration will complete or how long it will take. To speed up the process and get a better idea of the duration, you can pre-transfer a customer&#039;s data to the destination server. If done correctly, vzmigrate will see the pre-transferred data and pick up where you left off, having much less to transfer (just changed/new files). &lt;br /&gt;
&lt;br /&gt;
We believe vzmigrate uses rsync to do its transfer. Therefore not only can you use rsync to do a pre-seed, you can also run rsync to see what is causing a repeatedly-failing vzmigrate to fail. &lt;br /&gt;
&lt;br /&gt;
There&#039;s no magic to a pre-seed; you just need to make sure it&#039;s named correctly.&lt;br /&gt;
&lt;br /&gt;
Given:&lt;br /&gt;
&lt;br /&gt;
source: /vz1/private/1234&lt;br /&gt;
&lt;br /&gt;
and you want to migrate to /vz2 on the target system, your rsync would look like:&lt;br /&gt;
&lt;br /&gt;
 rsync -av /vz1/private/1234/ root@x.x.x.x:/vz2/private/1234.migrated/&lt;br /&gt;
&lt;br /&gt;
After that rsync succeeds, the ensuing migrateonline (or migrate) will take much less time to complete, depending on the # of files to be analyzed and the # of changed files. In any case, it&#039;ll be much faster than starting the migration from scratch.&lt;br /&gt;
&lt;br /&gt;
Further, as we discuss elsewhere in this topic, a failed migration can be moved from &amp;lt;tt&amp;gt;/vz/private/1234&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;/vz/private/1234.migrated&amp;lt;/tt&amp;gt; on the destination if you want to restart a failed migration. This should &#039;&#039;&#039;only&#039;&#039;&#039; be done if the migration failed and the CT is not running on the destination HN.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline instructions: src &amp;gt;=3.x -&amp;gt; dst&amp;gt;=3.x ===&lt;br /&gt;
&lt;br /&gt;
A script called [[#migrateonline|migrateonline]] was written to handle this kind of move. It is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; – vzmigrate is a util to move a ve from one host to another seamlessly, with no reboot of the ve necessary. The wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from arp/route tables, causing problems when the ve was started up on the dst system. [[#migrate|migrate]] mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrateonline emails ve owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as it was on the src virt. Note: &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; is equivalent to &amp;lt;tt&amp;gt;migrateonline&amp;lt;/tt&amp;gt;, but will restart the ve in the process.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrateonline&lt;br /&gt;
usage: /usr/local/sbin/migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrateonline 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine.&lt;br /&gt;
&lt;br /&gt;
If they had backups, use the mvbackups command to move their backups to the new server:&lt;br /&gt;
&lt;br /&gt;
 mvbackups 1212 virt14 vz&lt;br /&gt;
&lt;br /&gt;
Rename the ve&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/migrated-1212&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/old-1212-migrated-20120404-noarchive&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems in mgmt to reflect the new path and server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If migrateonline does not work, you can try again using plain migrate - this will result in a brief reboot of the ve.&lt;br /&gt;
Before you try again, make sure of a few things:&lt;br /&gt;
&lt;br /&gt;
Depending on where the migration died, there may be partial data on the dst system in 1 of 2 places:&lt;br /&gt;
(given the example above)&lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
Before you run migrate again, you&#039;ll want to rename the directory so that all data is in &lt;br /&gt;
1212.migrated:&lt;br /&gt;
&lt;br /&gt;
 mv /vz/private/1212 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
This way, it will pick up where it left off and transfer only new files.&lt;br /&gt;
&lt;br /&gt;
Likewise, if you want to speed up a migration, you can pre-seed the dst as follows:&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# rsync -avSH /vz/private/1212/ root@10.1.4.64:/vz/private/1212.migrated/&lt;br /&gt;
&lt;br /&gt;
Then when you run migrate or migrateonline, it will only need to move the changed files - the migration will complete quickly.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline/migrate failures (migrate manually) ===&lt;br /&gt;
&lt;br /&gt;
Let’s say the migration fails for whatever reason. If it fails with [[#migrateonline|migrateonline]], you should try [[#migrate|migrate]] (which will reboot the customer’s ve, so notify them ahead of time).&lt;br /&gt;
&lt;br /&gt;
You may want to run a [[#pre-seeding_a_migration|pre-seed]] rsync to see if you can find the problem. On older virts, we&#039;ve seen this problem caused by a large logfile (which you can find and encourage the customer to remove/compress):&lt;br /&gt;
 find / -size +1048576k -exec ls -lh {} \;&lt;br /&gt;
&lt;br /&gt;
You may also see migration failing due to quota issues.&lt;br /&gt;
&lt;br /&gt;
You can try to resolve by copying any quota file into the file you need:&lt;br /&gt;
&lt;br /&gt;
 cp /var/vzquota/quota.1 /var/vzquota/quota.xxx&lt;br /&gt;
&lt;br /&gt;
If it complains about quota running, you should then be able to stop it:&lt;br /&gt;
&lt;br /&gt;
 vzquota off xxxx&lt;br /&gt;
&lt;br /&gt;
If all else fails, migrate to a new VEID&lt;br /&gt;
i.e. 1234 becomes 12341&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the rsync or [[#migrate|migrate]] fails, you can always move someone manually:&lt;br /&gt;
&lt;br /&gt;
1. stop ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v stop 1234&lt;br /&gt;
&lt;br /&gt;
2. copy over data&amp;lt;br&amp;gt;&lt;br /&gt;
 rsync -avSH /vz/private/1234/ root@1.1.1.1:/vzX/private/1234/&lt;br /&gt;
&lt;br /&gt;
NOTE: if you&#039;ve previously seeded the data (ran rsync while the VE was up/running) and this is a subsequent rsync, make sure the last rsync you do (while the VE is not running) has the --delete option.&lt;br /&gt;
&lt;br /&gt;
3. copy over conf&amp;lt;br&amp;gt;&lt;br /&gt;
 scp /vzconf/1234.conf root@1.1.1.1:/vzconf&lt;br /&gt;
&lt;br /&gt;
4. on dst, edit the conf to reflect the right vzX dir&amp;lt;br&amp;gt;&lt;br /&gt;
 vi /vzconf/1234.conf&lt;br /&gt;
&lt;br /&gt;
5. on src remove the IPs&amp;lt;br&amp;gt;&lt;br /&gt;
 ipdel 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
6. on dst add IPs &amp;lt;br&amp;gt;&lt;br /&gt;
 ipadd 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
7. on dst, start ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v start 1234&lt;br /&gt;
&lt;br /&gt;
8. cancel, then archive the ve on the src per the instructions above.&lt;br /&gt;
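&lt;br /&gt;
If you do this often, steps 1-7 can be collected into a helper that prints the commands for review before you paste them - a sketch (manual_move is a made-up name; IPs, paths and veid are placeholders):&lt;br /&gt;
&lt;br /&gt;
```shell
# manual_move: print the manual migration commands (steps 1-7) for review.
# $1 veid, $2 dst ip, $3 target vz dir on the dst (vz, vz1, ...), rest = ve IPs
manual_move() {
    veid=$1; dst=$2; vzdir=$3; shift 3
    echo "v stop $veid"
    echo "rsync -avSH --delete /vz/private/$veid/ root@$dst:/$vzdir/private/$veid/"
    echo "scp /vzconf/$veid.conf root@$dst:/vzconf"
    echo "ipdel $veid $*"
    echo "# on dst: edit /vzconf/$veid.conf, then: ipadd $veid $*; v start $veid"
}
```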
&lt;br /&gt;
=== migrate src=2.6.0 -&amp;gt; dst&amp;gt;=2.6.0, or mass-migration with customer notify ===&lt;br /&gt;
&lt;br /&gt;
A script called &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; was written to handle this kind of move. It is basically a wrapper for vzmigrate – vzmigrate is a util to seamlessly move a ve from one host to another. The wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from arp/route tables, causing problems when the ve was started up on the dst system. migrate mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrate emails ve owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as it was on the src virt. Note: migrateonline is equivalent to migrate, but will migrate a ve from one 2.6 &#039;&#039;&#039;kernel&#039;&#039;&#039; machine to another 2.6 kernel machine without restarting the ve.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrate&lt;br /&gt;
usage: /usr/local/sbin/migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrate 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which migrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
&lt;br /&gt;
On 2.6.1 you’ll also have to move the private area:&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, [[#cancelve|cancelve]] would offer to remove them. You want to say &#039;&#039;&#039;no&#039;&#039;&#039; to this option – removing them would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across a filesystem as in the example above will create actual files and consume much more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, update the system, template (should be pre-selected as the same), and the shut down date.&lt;br /&gt;
&lt;br /&gt;
=== vzmigrate: src=2.6.1 -&amp;gt; dst&amp;gt;=2.6.0 ===&lt;br /&gt;
&lt;br /&gt;
This version of vzmigrate works properly with regard to handling IPs. It will not notify ve owners of moves as in the above example. Other than that, it’s essentially the same.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]#  vzmigrate 10.1.4.64 -r no 1212:1212:/vz/private/1212:/vz/root/1212&lt;br /&gt;
migrating on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which vzmigrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
[root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, &amp;lt;tt&amp;gt;cancelve&amp;lt;/tt&amp;gt; would offer to remove them. Say no to this option: removing them would mean the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (making note of the path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don't bother with the other dirs since there's no harm in leaving them, and eventually they'll drop out. Besides, moving hardlinked files across filesystems, as in the example above, will create actual files and consume much more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
Update the customer's systems by clicking the "move" link on the moved system, then update the system, the template (it should be pre-selected as the same), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
=== src=2.5.x ===&lt;br /&gt;
&lt;br /&gt;
First, go to the private dir:&lt;br /&gt;
&lt;br /&gt;
 cd /vz1/private/&lt;br /&gt;
&lt;br /&gt;
Stop the VE - make sure it stops totally cleanly.&lt;br /&gt;
 &lt;br /&gt;
 vzctl stop 1212&lt;br /&gt;
&lt;br /&gt;
Then you'd use vemove, a script written to copy over the config, create tarballs of the VE's data on the destination virt, and cancel the VE on the source system (in this example we're going to put a VE that was in /vz1/private on the src virt into /vz/private on the dst virt):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vemove&lt;br /&gt;
ERROR: Usage: vemove veid target_ip target_path_dir&lt;br /&gt;
[root@virt12 sbin]# vemove 1212 10.1.4.64 /vz/private/1212&lt;br /&gt;
tar cfpP - 1212 --ignore-failed-read | (ssh -2 -c arcfour 10.1.4.64 &amp;quot;split - -b 1024m /vz/private/1212.tar&amp;quot; )&lt;br /&gt;
scp /vzconf/1212.conf 10.1.4.64:/vzconf&lt;br /&gt;
cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, cancelve would offer to remove them. Say no to this option: removing them would mean the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (making note of the path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don't bother with the other dirs since there's no harm in leaving them, and eventually they'll drop out. Besides, moving hardlinked files across filesystems, as in the example above, will create actual files and consume much more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
When you are done, go to /vz/private on the dst virt; you will have files like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;1212.taraa&lt;br /&gt;
1212.tarab&lt;br /&gt;
1212.tarac&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each one 1024m (or less, for the last one) in size.&lt;br /&gt;
&lt;br /&gt;
Still in /vz/private on the dst server, run:&lt;br /&gt;
&lt;br /&gt;
 cat 1212.tar?? | tar xpPBf -&lt;br /&gt;
&lt;br /&gt;
and after 20 mins or so it will be totally untarred.  Now since the conf&lt;br /&gt;
file is already there, you can go ahead and start the system.&lt;br /&gt;
&lt;br /&gt;
 vzctl start 1212&lt;br /&gt;
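The split/cat round-trip used above can be sanity-checked generically. This sketch uses tiny chunks, plain tar, and throwaway paths (all illustrative; it is not the Virtuozzo tar or a real VE) to confirm that reassembled pieces come out byte-identical:&lt;br /&gt;

```shell
# Pack a fake private area, split it into chunks, reassemble, and verify.
set -e
work=$(mktemp -d)
cd "$work"

# Stand-in for a VE private directory
mkdir -p 1212/etc
head -c 100000 /dev/urandom > 1212/etc/blob

# Pack and split into 32k pieces (vemove uses "split - -b 1024m" over ssh)
tar cf - 1212 | split -b 32k - 1212.tar

# Reassemble the pieces and unpack into a second directory
mkdir restore
cat 1212.tar?? | tar xf - -C restore

# The restored payload must be byte-identical to the original
a=$(cksum < 1212/etc/blob)
b=$(cksum < restore/1212/etc/blob)
[ "$a" = "$b" ] && echo "round-trip OK"
```

The real transfer differs only in chunk size (-b 1024m), in going over ssh to the dst virt, and in needing the Virtuozzo tar options noted below.&lt;br /&gt;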
&lt;br /&gt;
Update the customer's systems by clicking the "move" link on the moved system, then update the system, the template (it should be pre-selected as the same), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
NOTE: you MUST tar the system up using the virtuozzo version of tar that&lt;br /&gt;
is on all the virt systems, and further you MUST untar the tarball with&lt;br /&gt;
the virtuozzo tar, using these options:  `&amp;lt;tt&amp;gt;tar xpPBf -&amp;lt;/tt&amp;gt;`&lt;br /&gt;
&lt;br /&gt;
If you tar up an entire VE and move it to a non-virtuozzo machine, that is&lt;br /&gt;
ok, and you can untar it there with normal tar commands, but do not untar&lt;br /&gt;
it and then repack it with a normal tar and expect it to work - you need&lt;br /&gt;
to use virtuozzo tar commands on virtuozzo tarballs to make it work.&lt;br /&gt;
&lt;br /&gt;
The backups are sort of an exception, since we are just (usually)&lt;br /&gt;
restoring user data that was created after we gave them the system, and&lt;br /&gt;
therefore has nothing to do with magic symlinks or vz-rpms, etc.&lt;br /&gt;
&lt;br /&gt;
== Moving a VE on the same virt ==&lt;br /&gt;
&lt;br /&gt;
Easy way:&amp;lt;br&amp;gt;&lt;br /&gt;
Scenario 1: ve 123 is to be renamed 1231 and moved from vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:1231:/vz/private/1231:/vz/root/1231&lt;br /&gt;
&lt;br /&gt;
Scenario 2: ve 123 is to be moved vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:123:/vz/private/123:/vz/root/123&lt;br /&gt;
&lt;br /&gt;
vzmlocal will reboot the VE at the end of the move.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; don&#039;t forget to fix /usr/local/sbin/backup.config for new disk location&lt;br /&gt;
&lt;br /&gt;
Manual/old way:&lt;br /&gt;
&lt;br /&gt;
1) &amp;lt;tt&amp;gt;vzctl stop 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2) &amp;lt;tt&amp;gt;mv /vz1/private/123 /vz/private/.&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(or cp -a if you want to copy)&lt;br /&gt;
3) in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts/123.conf&amp;lt;/tt&amp;gt; change value&amp;lt;br&amp;gt;&lt;br /&gt;
of &#039;&amp;lt;tt&amp;gt;VE_PRIVATE&amp;lt;/tt&amp;gt;&#039; variable to point to a new private area location&lt;br /&gt;
4) &amp;lt;tt&amp;gt;vzctl start 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
5) update backups if needed: &amp;lt;tt&amp;gt;mvbackups 123 virtX virt1 vz&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
6) update management screens&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notes: a) the absolute path to the private area is stored in the quota file &amp;lt;tt&amp;gt;/var/vzquota/quota.123&amp;lt;/tt&amp;gt; - so during the first startup the quota will be recalculated.&amp;lt;br&amp;gt;&lt;br /&gt;
b) if you're going to write a script to do the job, you MUST be sure that $VEID won't be expanded to '' in the VE config file - i.e. you need to escape '$'. Otherwise you might end up with:&lt;br /&gt;
&lt;br /&gt;
 VE_PRIVATE=&amp;quot;/vz/private/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in config, and &#039;vzctl destroy&#039; for this VE ID &#039;&#039;&#039;will remove everything under /vz/private/ directory&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Adding a veth device to a VE ==&lt;br /&gt;
&lt;br /&gt;
Not totally sure what this is, but a customer asked for it and here&#039;s what we did (as instructed by vz support):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;v set 99 --netif_add eth99  --save&lt;br /&gt;
ipdel 99 69.55.230.58&lt;br /&gt;
v set 99 --ifname eth99 --ipadd 69.55.230.58 --save&lt;br /&gt;
v set 99 --ifname eth99 --gateway 69.55.230.1 --save&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active      eth0              veth77.77,veth99.99&lt;br /&gt;
veth_net          active&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
br99             bridge     net99&lt;br /&gt;
veth99.99        veth       net99&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        net99            69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br0 eth0&lt;br /&gt;
&lt;br /&gt;
(will remove 99 from orig net and move to veth_net)&lt;br /&gt;
vznetcfg net addif veth_net veth99.99&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
(delete the old crap)&lt;br /&gt;
vznetcfg net del net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then, to add another device in&lt;br /&gt;
&lt;br /&gt;
v set 77 --netif_add eth77  --save&lt;br /&gt;
ipdel 77 69.55.230.78&lt;br /&gt;
v set 77 --ifname eth77 --ipadd 69.55.230.78 --save&lt;br /&gt;
v set 77 --ifname eth77 --gateway 69.55.230.1 --save&lt;br /&gt;
v set 77 --save --ifname eth77 --network veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif veth_net veth77.77&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth       veth_net&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
another example&lt;br /&gt;
&lt;br /&gt;
v set 1182 --netif_add eth1182  --save&lt;br /&gt;
ipdel 1182 69.55.236.217&lt;br /&gt;
v set 1182 --ifname eth1182 --ipadd 69.55.236.217 --save&lt;br /&gt;
v set 1182 --ifname eth1182 --gateway 69.55.236.1 --save&lt;br /&gt;
vznetcfg net addif veth_net veth1182.1182&lt;br /&gt;
v set 1182 --save --ifname eth1182 --network veth_net&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unused/not working commands:&lt;br /&gt;
ifconfig veth99.0 0&lt;br /&gt;
vznetcfg net list&lt;br /&gt;
vznetcfg br new br99 net99&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg br show&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
&lt;br /&gt;
vznetcfg br new br1182 net1182&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif eth0 net1182&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&lt;br /&gt;
--- 8&amp;lt; ---&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new net&lt;br /&gt;
# vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vzctl set 99 --save --netif_add eth0 (at this stage veth99.0 interface have to appear&lt;br /&gt;
on node)&lt;br /&gt;
# vzctl set 99 --save --ifname eth0 --ipadd 69.55.230.58 (and probably few more arguments&lt;br /&gt;
here - see &#039;man vzctl&#039;)&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Assigning/remove ip from a VE ==&lt;br /&gt;
&lt;br /&gt;
1. Add or remove ips:&lt;br /&gt;
 ipdel 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
 ipadd 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
&lt;br /&gt;
2. update Mgmt screens&lt;br /&gt;
&lt;br /&gt;
3. offer to update any DNS we do for them&lt;br /&gt;
&lt;br /&gt;
4. check to see if we had rules for the old IP in the firewall&lt;br /&gt;
&lt;br /&gt;
== Enabling tun device for a ve ==&lt;br /&gt;
Note: there's a command for this: [[#addtun|addtun]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
Make sure the tun.o module is already loaded before Virtuozzo is started: &lt;br /&gt;
 lsmod &lt;br /&gt;
Allow the VPS to use the TUN/TAP device: &lt;br /&gt;
 vzctl set 101 --devices c:10:200:rw --save &lt;br /&gt;
Create the corresponding device inside the VPS and set the proper permissions: &lt;br /&gt;
 vzctl exec 101 mkdir -p /dev/net &lt;br /&gt;
 vzctl exec 101 mknod /dev/net/tun c 10 200 &lt;br /&gt;
 vzctl exec 101 chmod 600 /dev/net/tun&lt;br /&gt;
&lt;br /&gt;
== Remaking a system (on same virt) ==&lt;br /&gt;
&lt;br /&gt;
1. [[#cancelve|cancelve]] (or v destroy x - ONLY if you&#039;re POSITIVE no data needs to be saved)&lt;br /&gt;
&lt;br /&gt;
2. [[#vemake|vemake]] using same veid&lt;br /&gt;
&lt;br /&gt;
3. [[#mvbackups|mvbackups]] or [[#vb|vb]] (if new mount point)&lt;br /&gt;
&lt;br /&gt;
4. update mgmt with new dir/ip &lt;br /&gt;
&lt;br /&gt;
5. update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Re-initialize quota for a VE ==&lt;br /&gt;
&lt;br /&gt;
There's a command for this now: [[#clearquota|clearquota]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
&lt;br /&gt;
vzctl stop 1&lt;br /&gt;
vzquota drop 1&lt;br /&gt;
vzctl start 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Traffic accounting on linux ==&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - all tracking is done via bwdb now. This is how we used to track traffic.&lt;br /&gt;
&lt;br /&gt;
TODO: update for diff versions of vz&lt;br /&gt;
&lt;br /&gt;
Unlike FreeBSD, where we have to add firewall count rules to the system to count the traffic, on Linux Virtuozzo counts the traffic for us. You can see the current traffic stats by running `vznetstat`:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, each VEID is on a line with its output and input bytes. So we simply run a cron job:&lt;br /&gt;
&lt;br /&gt;
 4,9,14,19,24,29,34,39,44,49,55,59 * * * * /root/vztrafdump.sh&lt;br /&gt;
&lt;br /&gt;
Just like we do on FreeBSD - this one goes through all the VEs in /vz/private and greps the line from vznetstat that matches them and dumps it in /jc_traffic_dump on their system.  Then it does it again for all the VEs in /vz1/private.  It is important to note that vznetstat runs only once, and the grepping is done from a temporary file that contains that output - we do this because running vznetstat once for each VE that we read out of /vz/private and /vz1/private would take way too long and be too intensive.&lt;br /&gt;
&lt;br /&gt;
You do not need to do anything to facilitate this other than make sure that that cron job is running - the vznetstat counters are always running, and any new VEs that are added to the system will be accounted for automatically.&lt;br /&gt;
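A minimal sketch of that run-once-then-grep approach, using a canned sample of vznetstat output in place of the real command (the VEIDs and numbers here are illustrative, and the real /root/vztrafdump.sh lists /vz/private and /vz1/private rather than a hard-coded list):&lt;br /&gt;

```shell
# Run "vznetstat" once into a temp file, then look each VE up in it.
# A canned sample stands in for the real vznetstat output here.
stats=$(mktemp)
cat > "$stats" <<'EOF'
24218     1            484M             39M
24245     1            463M            143M
2451      1           2224M            265M
EOF

# For each VE, pull that VE's line from the cached stats instead of
# re-running vznetstat once per VE (which would be far too slow).
for veid in 24218 2451; do
    line=$(awk -v v="$veid" '$1 == v' "$stats")
    echo "$veid: $line"
done
```

Matching on the first field avoids picking up another VEID that merely contains the same digits.&lt;br /&gt;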
&lt;br /&gt;
Traffic resetting no longer works with vz 2.6, so we disable the vztrafdump.sh on those virts.&lt;br /&gt;
&lt;br /&gt;
== Watchdog script ==&lt;br /&gt;
&lt;br /&gt;
On some of the older virts, we have a watchdog running that kills procs that are deemed bad per the following:&lt;br /&gt;
&lt;br /&gt;
/root/watchdog from quar1&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [5-9]...&lt;br /&gt;
  then&lt;br /&gt;
# 50-90%&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;...:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;....m&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 1000min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
  if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [1-9]...&lt;br /&gt;
  then&lt;br /&gt;
# running for 10-90 percent&lt;br /&gt;
    if echo $line | awk &#039;{print $NF}&#039; | egrep &#039;cfusion|counter|vchkpw&#039;&lt;br /&gt;
    then&lt;br /&gt;
&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[2-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# between 2-9min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      elif echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[0-9][0-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# up to 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
    fi&lt;br /&gt;
  fi&amp;lt;/pre&amp;gt;&lt;br /&gt;
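The grep patterns in that script are terse; this stand-alone demo (sample field values only, not real ps output) shows what the CPU and TIME patterns actually match:&lt;br /&gt;

```shell
# CPU field: "[5-9]..." matches a digit 5-9 followed by any three chars,
# i.e. percentages like 55.3 or 90.1 (50% and up).
echo "55.3" | grep -q "[5-9]..." && echo "cpu high"
echo "12.0" | grep -q "[5-9]..." || echo "cpu ok"

# TIME field: "...:.." needs three chars, a colon, then two chars,
# i.e. runtimes of 100+ minutes like 123:45; 12:45 does not match.
echo "123:45" | grep -q "...:.." && echo "runtime long"
echo "12:45"  | grep -q "...:.." || echo "runtime ok"
```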
&lt;br /&gt;
&lt;br /&gt;
== Misc Linux Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space ... when you configure a linux system with a certain amount of disk space (the default is 4gigs) you do not actually use up 4gigs of space on the system.  The diskspace setting for a user is simply a cap, and they only use up as much space on the actual disk drive as they are actually using.&lt;br /&gt;
&lt;br /&gt;
When you create a new linux system, even though there are some 300 RPMs or so installed, if you run `df -k` you will see that the entire 4gig partition is empty - no space is being used.  This is because the files in their system are &amp;quot;magic symlinks&amp;quot; to the template for their OS that is in /vz/template - however, any changes to any of those files will &amp;quot;disconnect&amp;quot; them and they will immediately begin using space in their system.  Further, any new files uploaded (even if those new files overwrite existing files) will take up space on the partition.&lt;br /&gt;
&lt;br /&gt;
=== Can&#039;t change file /etc/sysconfig/network ===&lt;br /&gt;
If you see this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt8 root]# vzctl stop 160 ; vzctl start 160&lt;br /&gt;
VE is not running&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is unmounted&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
bash ERROR: Can&#039;t change file /etc/sysconfig/network&lt;br /&gt;
Deleting IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
VE is unmounted&lt;br /&gt;
[root@virt8 root]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
it probably means they no longer have /bin/bash - copy one in for them&lt;br /&gt;
 &lt;br /&gt;
ALSO: another possibility is that they have removed the `ed` RPM from their system - it needs to be reinstalled into their system.  But since their system is down, this is tricky ...&lt;br /&gt;
&lt;br /&gt;
The VE startup scripts used by &#039;vzctl&#039; expect the &#039;ed&#039; package to be available inside the VE. So if the &#039;ed&#039; package is enabled in the OS template config and in the OS template that VE #827 is based on, this error should be fixed.&lt;br /&gt;
&lt;br /&gt;
Yes, it is possible to add an RPM to a VE while it is not running.&lt;br /&gt;
Try the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# cd /vz/template/&amp;lt;OS_template_with_ed_package&amp;gt;/&lt;br /&gt;
# vzctl mount 827&lt;br /&gt;
# rpm -Uvh --root /vz/root/827 --veid 827 ed-0.2-25.i386.vz.rpm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usually there&#039;s an error, but it&#039;s OK.&lt;br /&gt;
&lt;br /&gt;
Note: replace &#039;ed-0.2-25.i386.vz.rpm&#039; in the last command with the actual&lt;br /&gt;
version of the &#039;ed&#039; package you have.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== What template is user using ===&lt;br /&gt;
So how do I know what template the user has? cat their conf file and it is listed in there. For example, if the conf file has:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1103&lt;br /&gt;
…snip…&lt;br /&gt;
OSTEMPLATE=&amp;quot;debian-3.0/20030822&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;mod_perl-deb30/20030707 mod_ssl-deb30/20030703 mysql-deb30/20030707 proftpd-deb30/20030703 webmin-deb30/20030823 &amp;quot;&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then they are on Debian 3.0, all of their system packages are in /vz/template/debian-3.0, and they are using version 20030822 of that Debian 3.0 template. They&#039;ve also got additional packages installed (mod_perl, mod_ssl, etc.); those are also found under /vz/template.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Edits needed to run java ===&lt;br /&gt;
&lt;br /&gt;
When we first created the VEs, the default setting for privvmpages was 93000:94000, which was high enough that most people never had problems. However, you can&#039;t run java or the JDK or Tomcat or anything Java-related with that setting. We have found that with privvmpages set to 610000:615000, Java runs just fine, and that is now the default setting. It is exceedingly rare that anyone needs it higher than that, although we have seen it once or twice.&lt;br /&gt;
&lt;br /&gt;
Any problems with Java at all: the first thing you need to do is see whether the failcnt has risen for privvmpages.&lt;br /&gt;
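A sketch of that check: inside the VE, privvmpages is a row in /proc/user_beancounters with failcnt as the last column. A sample row with made-up numbers stands in for the real file here:&lt;br /&gt;

```shell
# Extract the privvmpages failcnt (last column) from a beancounters row.
# Sample row with made-up numbers; on a real VE read /proc/user_beancounters.
sample="privvmpages   120000   130000   610000   615000   7"
failcnt=$(echo "$sample" | awk '$1 == "privvmpages" {print $NF}')
echo "privvmpages failcnt: $failcnt"
```

On a real VE, the same awk can be pointed at the live file, e.g. vzctl exec &amp;lt;veid&amp;gt; awk &#039;$1==&quot;privvmpages&quot;{print $NF}&#039; /proc/user_beancounters. A nonzero failcnt means the limit has been hit.&lt;br /&gt;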
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzctl start 160&lt;br /&gt;
Starting VE ...&lt;br /&gt;
vzquota : (error) Quota on syscall for 160: Device or resource busy&lt;br /&gt;
Running vzquota on failed for VE 160 [3]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is because your pwd is _in_ their private directory - you can&#039;t start the VE until you move out.&lt;br /&gt;
&lt;br /&gt;
People seem to have trouble with php if they are clueless newbies.  Here are two common problems/solutions:&lt;br /&gt;
&lt;br /&gt;
no... but i figured it out myself. problem was the php.ini file that came&lt;br /&gt;
vanilla with the account was not configured to work with apache (the&lt;br /&gt;
ENGINE directive was set to off).&lt;br /&gt;
&lt;br /&gt;
everything else seems fine now.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
the problem was in the php.ini file.  I noticed that it wasn&#039;t showing&lt;br /&gt;
the code when it was in an html file so I looked at the php.ini file&lt;br /&gt;
and had to change it so it recognized &amp;lt;? tags as well as &amp;lt;?php tags.&lt;br /&gt;
&lt;br /&gt;
Also, make sure this is added to httpd.conf:&lt;br /&gt;
    AddType application/x-httpd-php .php&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Set time zone ===&lt;br /&gt;
&lt;br /&gt;
You can change the time zone by doing this:&lt;br /&gt;
&lt;br /&gt;
 ln -sf /usr/share/zoneinfo/&amp;lt;zone&amp;gt; /etc/localtime&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;zone&amp;gt; is the zone you want in the /usr/share/zoneinfo/ directory.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Failing shm_open calls ===&lt;br /&gt;
&lt;br /&gt;
If shm_open calls are failing:&lt;br /&gt;
&lt;br /&gt;
First, check whether /dev/shm is mounted inside the VE.&lt;br /&gt;
The &#039;cat /proc/mounts&#039; command should show something like this:&lt;br /&gt;
 tmpfs /dev/shm tmpfs rw 0 0&lt;br /&gt;
&lt;br /&gt;
If /dev/shm is not mounted, there are two ways to solve the issue:&lt;br /&gt;
1. execute the following command inside the VE (doesn&#039;t require a VE reboot):&lt;br /&gt;
 mount -t tmpfs none /dev/shm&lt;br /&gt;
2. add the following line to /etc/fstab inside the VE and reboot it:&lt;br /&gt;
 tmpfs         /dev/shm        tmpfs           defaults        0 0&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You can have a VE that is mounted but not running.&lt;br /&gt;
Just:&lt;br /&gt;
 vzctl mount &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Debian can&#039;t get on the network ===&lt;br /&gt;
When a Debian system can&#039;t get on the network, and you try:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set 1046 --ipadd 69.55.227.117&lt;br /&gt;
Adding IP address(es): 69.55.227.117&lt;br /&gt;
Failed to bring up lo.&lt;br /&gt;
Failed to bring up venet0.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They probably removed the iproute package, which must be the one from SWsoft. To restore:&lt;br /&gt;
&amp;lt;pre&amp;gt;# dpkg -i --veid=1046 --admindir=/vz1/private/1046/root/var/lib/dpkg --instdir=/vz1/private/1046/root/ /vz/template/debian-3.0/iproute_20010824-8_i386.vz.deb&lt;br /&gt;
(Reading database ... 16007 files and directories currently installed.)&lt;br /&gt;
Preparing to replace iproute 20010824-8 (using .../iproute_20010824-8_i386.vz.deb) ...&lt;br /&gt;
Unpacking replacement iproute ...&lt;br /&gt;
Setting up iproute (20010824-8) ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then restart their VE.&lt;br /&gt;
&lt;br /&gt;
=== CentOS can&#039;t get on the network ===&lt;br /&gt;
&lt;br /&gt;
When a CentOS system was running but can&#039;t get on the network, they probably updated iproute via yum.&lt;br /&gt;
&lt;br /&gt;
Copy an older version of the iproute rpm to their VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cp -p /root/iproute/iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm /vz2/private/&amp;lt;1527&amp;gt;/fs/root&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enter their VE, install the RPM, restart the network, and prevent yum from updating iproute.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ve &amp;lt;1527&amp;gt;&lt;br /&gt;
rpm -i --force iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm&lt;br /&gt;
rpm -q iproute&lt;br /&gt;
rpm -e &amp;lt;new iproute rpm&amp;gt;&lt;br /&gt;
service network restart&lt;br /&gt;
echo &amp;quot;exclude=iproute&amp;quot; &amp;gt;&amp;gt;/etc/yum/ip-exclude&lt;br /&gt;
echo &amp;quot;include=/etc/yum/ip-exclude&amp;quot; &amp;gt;&amp;gt;/etc/yum.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
In a VE I do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /&lt;br /&gt;
du -h .&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and get: 483M    .&lt;br /&gt;
&lt;br /&gt;
I do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;bash-2.05a# df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  2.3G  1.7G  56% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
how can this be?&lt;br /&gt;
&lt;br /&gt;
Is it possible that the quota file was corrupted somehow? Please try:&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl stop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota drop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota init &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzctl start &amp;lt;VEID&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
How to stop vz from starting after reboot:&lt;br /&gt;
&lt;br /&gt;
 VIRTUOZZO=no &lt;br /&gt;
in &lt;br /&gt;
 /etc/sysconfig/vz&lt;br /&gt;
&lt;br /&gt;
To start: &lt;br /&gt;
 service vz start&lt;br /&gt;
(after setting VIRTUOZZO=yes in /etc/sysconfig/vz)&lt;br /&gt;
&lt;br /&gt;
service vz restart will do some kind of &#039;soft reboot&#039; -- restart all&lt;br /&gt;
VPSes and reload modules without rebooting the node&lt;br /&gt;
&lt;br /&gt;
if you need to shut down all VPSes really really fast, run killall -9 init&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Postfix tip:&lt;br /&gt;
&lt;br /&gt;
You may want to tweak settings: default_process_limit=10&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* uptime/load that vz users see isn&#039;t what we see - it&#039;s their own load&lt;br /&gt;
* Vz overwrites resolv.conf on every boot to what&#039;s in the VE config file. It also overwrites the hostname&lt;br /&gt;
* Quotas in vz: set Gid_limit in the VE config file and restart the VE. The errors leading up to this will look like running out of files, but with no inode or space issues. Set it to 2000 in the conf, and restart the VE&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
== vm ==&lt;br /&gt;
&lt;br /&gt;
To create a new Linux VPS&lt;br /&gt;
&lt;br /&gt;
* get an IP address from mgmt screen&lt;br /&gt;
* run df -h to figure out which disk has enough space&lt;br /&gt;
* run vm with no arguments to get syntax and exact template name&lt;br /&gt;
* use col0xxxx for hostname if they don’t give you a hostname&lt;br /&gt;
* copy over dir, veid, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
 vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; )&lt;br /&gt;
&lt;br /&gt;
== disable ==&lt;br /&gt;
To disable a customer&#039;s Virtuozzo VPS server.&lt;br /&gt;
&lt;br /&gt;
* stop the server&lt;br /&gt;
  vzctl stop &amp;lt;veid&amp;gt;&lt;br /&gt;
* prevent restart on server reboot and user restart via Control Panel&lt;br /&gt;
  vzctl set &amp;lt;veid&amp;gt; --disabled=yes --offline_management=no --save&lt;br /&gt;
&lt;br /&gt;
== cancelve ==&lt;br /&gt;
When a customer cancels all service.&lt;br /&gt;
&lt;br /&gt;
 cancelve &amp;lt;veid&amp;gt;&lt;br /&gt;
this will:&lt;br /&gt;
* stop a ve&lt;br /&gt;
* check for backups (offer to remove them from the backup server &lt;br /&gt;
and the backup.config)&lt;br /&gt;
* rename the private dir&lt;br /&gt;
* check for PTR, provide the commands to reset to default&lt;br /&gt;
* and rename the ve’s config&lt;br /&gt;
* remind you to remove firewall rules&lt;br /&gt;
* remind you to remove DNS entries&lt;br /&gt;
&lt;br /&gt;
== ipadd ==&lt;br /&gt;
 ipadd  &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
adds IP(s) to a VE&lt;br /&gt;
&lt;br /&gt;
== ipdel ==&lt;br /&gt;
 ipdel &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
removes IP(s) from a VE&lt;br /&gt;
&lt;br /&gt;
== vc ==&lt;br /&gt;
 vc &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
display the contents of the vz config file for the &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;cat /vzconf/&amp;lt;veid&amp;gt;.conf&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vl ==&lt;br /&gt;
 vl&lt;br /&gt;
&lt;br /&gt;
List all virtual environments running on this machine.&lt;br /&gt;
&lt;br /&gt;
Displays a list of VE numbers, one per line (ostensibly for use in a for loop).&lt;br /&gt;
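For example, iterating over every VE (with a stand-in vl function so the loop is self-contained; on a virt the real vl supplies the IDs):&lt;br /&gt;

```shell
# vl prints one VE number per line; loop over them.
# Stand-in vl for illustration only - on a virt the real command is used.
vl() { printf '101\n102\n1212\n'; }

for veid in $(vl); do
    echo "would run: vzctl status $veid"
done
```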
&lt;br /&gt;
== vp ==&lt;br /&gt;
 vp &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
show processes running under this &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzps auxww -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vpe ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vpe &amp;lt;veid&amp;gt; &lt;br /&gt;
this will allow you to do a vp when a ve is running out of control, the equivalent of (deprecated since vp operates outside the VPS): &lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set &amp;lt;veid&amp;gt; --kmemsize 2100000:2200000&lt;br /&gt;
vzctl exec &amp;lt;veid&amp;gt; ps auxw&lt;br /&gt;
vzctl set &amp;lt;veid&amp;gt; --kmemsize (ve’s orig lvalue):(ve’s orig hvalue)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vt ==&lt;br /&gt;
 vt &amp;lt;veid&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vztop -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vr ==&lt;br /&gt;
 vr &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart a &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl stop &amp;lt;veid&amp;gt;; vzctl start &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
You can run this even if the ve is down - the stop command will just fail.&lt;br /&gt;
&lt;br /&gt;
== vs ==&lt;br /&gt;
 vs [veid]&lt;br /&gt;
displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve configured on the system (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;)&lt;br /&gt;
If passed an argument, gives the status for just that ve. &lt;br /&gt;
A running system looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 16066 exist mounted running&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running (but does exist) looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 9990 exist unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running and doesn’t exist looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 421 deleted unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs2 ==&lt;br /&gt;
 vs2 [veid]&lt;br /&gt;
this is similar to vs in that it displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve,&lt;br /&gt;
but its list comes from doing an ls on the data dirs. This was meant to catch &lt;br /&gt;
the rare case where a ve exists on disk but isn’t configured. &lt;br /&gt;
&lt;br /&gt;
== vw ==&lt;br /&gt;
 vw [veid]&lt;br /&gt;
displays the output of ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;’ (the equivalent of &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; w&amp;lt;/tt&amp;gt;) for each configured ve (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;). Useful for determining which ve is contributing to a heavily-loaded system.&lt;br /&gt;
If passed an argument, gives ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;‘ output for just that ve. &lt;br /&gt;
Ex:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt2 etc]# vw&lt;br /&gt;
134&lt;br /&gt;
 10:52pm  up 79 days,  6:14,  0 users,  load average: 0.02, 0.02, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16027&lt;br /&gt;
  2:52pm  up 7 days, 19:54,  0 users,  load average: 0.00, 0.00, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16055&lt;br /&gt;
  2:52pm  up 79 days,  6:38,  0 users,  load average: 0.00, 0.04, 0.07&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vwe ==&lt;br /&gt;
 vwe [constraint]&lt;br /&gt;
just like &amp;lt;tt&amp;gt;vw&amp;lt;/tt&amp;gt;, but takes a constraint as an argument and only shows ve’s with loads &amp;gt;= the constraint provided. If no constraint is provided, 1 is used by default.&lt;br /&gt;
&lt;br /&gt;
== vzs ==&lt;br /&gt;
 vzs [veid]&lt;br /&gt;
displays the beancounter status for all ve’s, or a particular ve if an argument is passed&lt;br /&gt;
&lt;br /&gt;
== ve ==&lt;br /&gt;
 ve &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
enter a &amp;lt;veid&amp;gt; as root.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl enter &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vx ==&lt;br /&gt;
 vx &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&lt;br /&gt;
&lt;br /&gt;
execute a single command in the &amp;lt;veid&amp;gt; environment.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;/usr/sbin/vzctl exec &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs [count]&lt;br /&gt;
a script which outputs a continuous report (or a certain number of reports if an option is passed) of processes stuck in the D state and which VPS’s those procs belong to.&lt;br /&gt;
&lt;br /&gt;
== setmem ==&lt;br /&gt;
 setmem VEID &amp;lt;256|512|768|1024|1280|1536|2048&amp;gt; [burst GB]&lt;br /&gt;
adjusts the memory resources for the VE. If this is on VZ ver &amp;gt;4, you will also be prompted to provide a burst ram amount. Sample usage:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# setmem 21671 1024 4&lt;br /&gt;
/usr/sbin/vzctl set 21671 --ram 1024m --swap 3072m --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== afacheck.sh ==&lt;br /&gt;
 afacheck.sh&lt;br /&gt;
displays the health/status of containers and mirrors on an adaptec card (currently quar1, tempvirt1-2, virt9, virt10) - all others are LSI&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update virt scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be setup as a cronjob to watch processes on a virt when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== findbackuppigs.pl ==&lt;br /&gt;
 findbackuppigs.pl&lt;br /&gt;
looks for files larger than 50MB which customers have asked us to backup. Emails matches&lt;br /&gt;
to linux@johncompanies.com&lt;br /&gt;
&lt;br /&gt;
== gatherlinux.pl ==&lt;br /&gt;
 gatherlinux.pl&lt;br /&gt;
gathers up data about ve’s configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficgather.pl ==&lt;br /&gt;
 linuxtrafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by ve to support@johncompanies.com and payments@johncompanies.com.&lt;br /&gt;
Optional arguments are year and month (must be in the past). If not passed, assumes last month. Relies on &lt;br /&gt;
traffic logs created by netstatreset and netstatbackup&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch.pl ==&lt;br /&gt;
 linuxtrafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo versions &amp;lt;= 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch2.pl ==&lt;br /&gt;
 linuxtrafficwatch2.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo version 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg  - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== mb (linux) ==&lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== migrate ==&lt;br /&gt;
 migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; - vzmigrate is a util to seamlessly move a ve from one host to another. This wrapper was written because virtuozzo version 2.6 had a bug where the ve’s ip(s) on the src system were not properly removed from arp/route tables. This script mitigates that. Since it makes multiple ssh connections to the target host, it’s a good idea to put the pub key for the src system in the authorized_keys file on the target host. In addition, it emails ve owners when their migration starts and stops (if they place email addresses in a file on their system: /migrate_notify). To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
== migrateonline ==&lt;br /&gt;
 migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is the same as migrate but will migrate a ve in &amp;lt;tt&amp;gt;--online&amp;lt;/tt&amp;gt; mode which means it won’t be shut down at the end of the migration. This only works when migrating ve’s between 2 machines running a 2.6 kernel (currently tempvirt1-2, virt16-19, virt12). If you get an error that the machine you’re trying to migrate to has a different CPU or features, etc, then you have to edit the file and add the -f switch to the vzmigrate line - you can basically ignore this kind of warning (but never ignore a warning about missing templates on the destination node). NOTE: This edit (if made to migrateonline) will be overwritten by the base script during each night’s backup.&lt;br /&gt;
&lt;br /&gt;
== netstatbackup ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup &lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== netstatbackup2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== netstatreset ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset&lt;br /&gt;
writes traffic count data to a logfile and resets counters to 0. Works on virtuozzo versions 2.5.x &lt;br /&gt;
&lt;br /&gt;
== netstatreset2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatchlinux ==&lt;br /&gt;
 orphanedbackupwatchlinux &lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to &lt;br /&gt;
delete them&lt;br /&gt;
&lt;br /&gt;
== rsync.backup (linux) ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== startvirt.pl ==&lt;br /&gt;
 startvirt.pl&lt;br /&gt;
forks off start ve commands - keeps 6 running at a time. This is not to be used on systems where fastboot is enabled, as it circumvents the benefit of the fastboot. The script will occasionally not exit gracefully and will continue to use up CPU, so it should be watched; on some systems you have to ^C out of it. Be careful though - doing so can leave some VE’s in an odd bootup state, and you may need to ‘vr’ them manually. Don’t exit from the script till you’re sure all ve’s are started - if you do, you need to start them manually and may have to free up locks. Check which ve’s aren’t running and/or confirm all have started when ^C’ing out of startvirt.&lt;br /&gt;
&lt;br /&gt;
== taskdone (linux) ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was &lt;br /&gt;
executed as the subject&lt;br /&gt;
&lt;br /&gt;
== vb (linux) ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/sbin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vemakeXX ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemakerh9 &lt;br /&gt;
ve create script for RH9 (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian30 &lt;br /&gt;
ve create script for debian 3.0 (Woody) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian31 &lt;br /&gt;
ve create script for debian 3.1 (Sarge) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian40 &lt;br /&gt;
ve create script for debian 4.0 (Etch) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakefedora, vemakefedora2, vemakefedora4, vemakefedora5, vemakefedora6, vemakefedora7&lt;br /&gt;
ve create script for fedora core 1, 2, 4, 5, 6, 7 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakecentos3, vemakecentos4&lt;br /&gt;
ve create script for centos 3, 4 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakesuse, vemakesuse93, vemakesuse100&lt;br /&gt;
ve create script for suse 9.2, 9.3, 10.0 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakeubuntu5, vemakeubuntu606, vemakeubuntu610, vemakeubuntu704&lt;br /&gt;
ve create script for ubuntu 5.10, 6.06, 6.10, 7.04 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
== vemove ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemove &amp;lt;veid&amp;gt; &amp;lt;target_ip&amp;gt; &amp;lt;/vz/private/123&amp;gt;&lt;br /&gt;
this script simplifies the old way of moving ve’s from one system to another - in short moving a ve to or from a virt running virtuozzo &amp;lt; 2.6.x&lt;br /&gt;
It’s the equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;tar cfpP - &amp;lt;veid&amp;gt; --ignore-failed-read | (ssh -2 -c arcfour &amp;lt;target_ip&amp;gt; &amp;quot;split - -b 1024m &amp;lt;/vz/private/123&amp;gt;.tar&amp;quot; )&amp;lt;/tt&amp;gt;&lt;br /&gt;
This should only be used if migrate/vzmigrate can’t be used.&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog ==&lt;br /&gt;
 vim.watchdog &lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu. Works on virtuozzo versions 2.5.x&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog2 ==&lt;br /&gt;
 vim.watchdog2&lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu.&lt;br /&gt;
Works on virtuozzo versions 2.6.x.&lt;br /&gt;
&lt;br /&gt;
== vzmigrate ==&lt;br /&gt;
 vzmigrate &amp;lt;target_ip&amp;gt; -r no &amp;lt;veid&amp;gt;:[dst veid]:[dst /vzX/private/veid]:[dst /vzX/root/veid]&lt;br /&gt;
(this is the raw command “wrapped” by migrate/migrateonline) this will seamlessly move a ve from one host to another. The ve will run for the duration of the migration till the very end when it’s shut down, ip moved and started up on the target system. The filesystem on the src will remain. This should be watched – occasionally the move will timeout and leave the system shut down. If target private and root aren’t specified it just puts it in /vz. Only works when both systems are running virtuozzo 2.6.x&lt;br /&gt;
&lt;br /&gt;
== vztrafdump.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions &amp;lt;= 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== vztrafdump2.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump2.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== addtun ==&lt;br /&gt;
 addtun &amp;lt;veid&amp;gt;&lt;br /&gt;
Adds a tun device to a ve.&lt;br /&gt;
&lt;br /&gt;
== bwcap ==&lt;br /&gt;
 bwcap &amp;lt;veid&amp;gt; &amp;lt;kbps&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;bwcap 1234 512&amp;lt;/tt&amp;gt;&lt;br /&gt;
Caps a VE’s bandwidth to the amount given&lt;br /&gt;
&lt;br /&gt;
== setdisk ==&lt;br /&gt;
 setdisk &amp;lt;veid&amp;gt; &amp;lt;diskspace in GB&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;setdisk 1234 5&amp;lt;/tt&amp;gt;&lt;br /&gt;
Gives a VE a given amount of disk space&lt;br /&gt;
&lt;br /&gt;
== vdf ==&lt;br /&gt;
 vdf &amp;lt;veid&amp;gt; &lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; df -h&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vdff ==&lt;br /&gt;
 vdff&lt;br /&gt;
runs a (condensed) vdf for all ve’s in your pwd (must be run from /vz/privateN)&lt;br /&gt;
&lt;br /&gt;
== mvbackups ==&lt;br /&gt;
 mvbackups &amp;lt;veid&amp;gt; &amp;lt;target_machine&amp;gt; (virt1) &amp;lt;target_dir&amp;gt; (vz1)&lt;br /&gt;
moves backups from one location to another on the backup server, provides you with the option to remove entries from the current backup.config, and gives a simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== checkquota ==&lt;br /&gt;
 checkquota&lt;br /&gt;
for all the ve’s in the cwd (run from /vz/private, /vz1/private, etc) reports what vz quota says they’re using and what the actual usage is (as reported by du)&lt;br /&gt;
&lt;br /&gt;
== clearquota ==&lt;br /&gt;
 clearquota &amp;lt;veid&amp;gt;&lt;br /&gt;
Recalculates a ve’s quota, prints out the usage before and after. The equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;vdf &amp;lt;veid&amp;gt;; v stop &amp;lt;veid&amp;gt;; vzquota drop &amp;lt;veid&amp;gt;; v start &amp;lt;veid&amp;gt;; vdf &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs&lt;br /&gt;
Sometimes the servers have a large number of processes get stuck in the D state - this script shows (every 3 secs) which VE’s have D procs, which procs&lt;br /&gt;
are stuck, and a running average of the top “offenders”&lt;br /&gt;
&lt;br /&gt;
== vzstat ==&lt;br /&gt;
 vzstat&lt;br /&gt;
sort of like top for VZ. Sort VEs by CPU usage by pressing the &#039;o&#039; and then &#039;c&#039; keys&lt;br /&gt;
&lt;br /&gt;
== stopvirt ==&lt;br /&gt;
 stopvirt&lt;br /&gt;
will stop VEs as fast as it can, 6 at a time. May not exit when complete so you should watch [[#vzstat|vzstat]] in another window.&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2236</id>
		<title>VPS Management</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2236"/>
		<updated>2020-06-18T07:14:34Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Ganeti Re-Intsall a VPS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Common Problems =&lt;br /&gt;
== Login to any machine without a password ==&lt;br /&gt;
&lt;br /&gt;
This is possible via the use of ssh keys. The process is thus:&lt;br /&gt;
&lt;br /&gt;
1. place the public key for your user (root@mail) in the /root/.ssh/authorized_keys file on the server you wish to login to&lt;br /&gt;
 cat /root/.ssh/id_dsa.pub&lt;br /&gt;
(paste that into authorized_keys on the target server). If the file doesn&#039;t exist, create it.&lt;br /&gt;
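&lt;br /&gt;
If root password login already works on the target, one way to do the copy/paste in a single step (a sketch, not required):&lt;br /&gt;
 cat /root/.ssh/id_dsa.pub | ssh root@&amp;lt;target&amp;gt; &#039;cat &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039;&lt;br /&gt;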
&lt;br /&gt;
2. enable root login (usually only applies to FreeBSD). Edit the /etc/ssh/sshd_config on the target server and change:&lt;br /&gt;
&amp;lt;tt&amp;gt;#PermitRootLogin no&amp;lt;/tt&amp;gt;&lt;br /&gt;
to&lt;br /&gt;
&amp;lt;tt&amp;gt;PermitRootLogin yes&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Restart the sshd on the target machine. First, find the sshd process: &lt;br /&gt;
 jailps &amp;lt;hostname&amp;gt; | grep sshd &lt;br /&gt;
or &lt;br /&gt;
 vp &amp;lt;VEID&amp;gt; | grep sshd&lt;br /&gt;
&lt;br /&gt;
Look for the process resembling:&lt;br /&gt;
 root     17296  0.0  0.0  5280 1036 ?        Ss    2011   4:27 /usr/sbin/sshd &lt;br /&gt;
(this is the sshd)&lt;br /&gt;
&lt;br /&gt;
Not:&lt;br /&gt;
 root      6270  0.5  0.0  6808 2536 ?        Ss   14:33   0:00 sshd: root [priv]&lt;br /&gt;
(this is an sshd child- someone already ssh&#039;d in as root)&lt;br /&gt;
&lt;br /&gt;
Restart the sshd: &lt;br /&gt;
 kill -1 &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex:&lt;br /&gt;
 kill -1 17296&lt;br /&gt;
&lt;br /&gt;
You may now ssh in.&lt;br /&gt;
&lt;br /&gt;
Once you&#039;re done, IF you enabled root login, you should repeat steps 2 and 3 to disable root logins.&lt;br /&gt;
&lt;br /&gt;
== Letting someone in who has locked themselves out (killed sshd, lost pwd) ==&lt;br /&gt;
&lt;br /&gt;
There are two ways people frequently lock themselves out - either they forget a password, or they kill off sshd somehow.&lt;br /&gt;
&lt;br /&gt;
These are actually both fairly easy to solve.  First, let&#039;s say someone kills off their sshd, or somehow mangles /etc/ssh/sshd_config such that it no longer lets them in.&lt;br /&gt;
&lt;br /&gt;
Their email may be very short, or it may have all sorts of details about how you should fix sshd_config to let them in ... just ignore all of this. They can fix their own mangled sshd.  Fixing this is very simple.  First, edit the /etc/inetd.conf on their system and uncomment the telnet line:&lt;br /&gt;
&lt;br /&gt;
 telnet stream  tcp     nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
 #telnet stream  tcp6    nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
&lt;br /&gt;
(just leave the tcp6 version of telnet commented)&lt;br /&gt;
&lt;br /&gt;
Then, use jailps to list the processes on their system, and find their inetd process.  Then simply:&lt;br /&gt;
&lt;br /&gt;
 kill -HUP (pid)&lt;br /&gt;
&lt;br /&gt;
where (pid) is the PID of their inetd process.  Now they have telnet running on their system and they can log in and do whatever they need to do.&lt;br /&gt;
&lt;br /&gt;
The only complications that could occur are:&lt;br /&gt;
&lt;br /&gt;
a) their firewall config on our firewall has port 23 blocked, in which case you will need to open that - will be covered in a different lesson.&lt;br /&gt;
&lt;br /&gt;
b) they are not running inetd, so you can&#039;t HUP it.  If this happens, edit their /etc/rc.conf, add the inetd_enable=&amp;quot;YES&amp;quot; line, and then kill&lt;br /&gt;
their jail with /tmp/jailkill.pl - then restart their jail with the jail line from their quad/safe file.  Easy.&lt;br /&gt;
&lt;br /&gt;
If they have forgotten a password,&lt;br /&gt;
&lt;br /&gt;
On 6.x+ you can reset their password with:&lt;br /&gt;
 jexec &amp;lt;jailID from jls&amp;gt; passwd root&lt;br /&gt;
&lt;br /&gt;
Note: the default password for 6.x jails is 8ico2987, for 4.x it is p455agfa&lt;br /&gt;
&lt;br /&gt;
On 4.x, you need to cd to their etc directory&lt;br /&gt;
... for instance:&lt;br /&gt;
&lt;br /&gt;
 cd /mnt/data2/198.78.65.136-col00261-DIR/etc&lt;br /&gt;
&lt;br /&gt;
and run:&lt;br /&gt;
&lt;br /&gt;
 vipw -d .&lt;br /&gt;
&lt;br /&gt;
Then paste in these two lines (there&#039;s a paste with these):&lt;br /&gt;
&lt;br /&gt;
 root:$1$krszPxhk$xkCepSnz3mIikT3vCtJCt0:0:0::0:0:Charlie &amp;amp;:/root:/bin/csh&lt;br /&gt;
 user:$1$Mx9p5Npk$QdMU6c8YQqp2FW2M3irEh/:1001:1001::0:0:User &amp;amp;:/home/user:/bin/sh&lt;br /&gt;
&lt;br /&gt;
overwriting the lines they already have for &amp;quot;user&amp;quot; and &amp;quot;root&amp;quot; - then just tell them that both user and root have been reset to the default password of p455agfa.&lt;br /&gt;
&lt;br /&gt;
For linux, just passwd inside shell or &lt;br /&gt;
 vzctl set &amp;lt;veid&amp;gt; --userpasswd root:p455agfa --save&lt;br /&gt;
&lt;br /&gt;
Starting in 2009 we began giving out randomized passwords for FreeBSD and Linux as the default password. That is stored with each system in Mgmt. You should look for and reset the password to that password in the event of a reset and refer the customer to use their original password from their welcome email- this way we don’t have to send the password again via email (in clear text).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== sendmail can’t be contacted from ext ip (only locally) ==&lt;br /&gt;
&lt;br /&gt;
By default redhat puts this line in sendmail.mc:&lt;br /&gt;
&lt;br /&gt;
 DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
which makes it only answer on localhost.  Comment it out like:&lt;br /&gt;
&lt;br /&gt;
 dnl DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
and then rebuild sendmail.cf with:&lt;br /&gt;
&lt;br /&gt;
 m4 /etc/mail/sendmail.mc &amp;gt; /etc/sendmail.cf&lt;br /&gt;
&lt;br /&gt;
== virt doesn’t properly let go of ve’s ip(s) when moved to another system ==&lt;br /&gt;
&lt;br /&gt;
On virtuozzo 2.6 systems, it&#039;s been observed that when moving ips from one virt to another that sometimes the routing table will not get updated to reflect the removal of the ip addresses.&lt;br /&gt;
&lt;br /&gt;
A recent example was a customer that was moving to a new ve on a new virt and the ip addresses were traded between the two ve&#039;s.  After the trade the two systems were not able to talk to each other.  When looking at the routing table for the old system all the ip addresses were still in the routing table as being local, like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;netstat -rn | grep 69.55.225.149&lt;br /&gt;
69.55.225.149   0.0.0.0         255.255.255.255 UH       40 0          0 venet0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was preventing traffic to the other system from being routed properly.&lt;br /&gt;
The solution is to manually delete the route:&lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.225.149 gw 0.0.0.0&lt;br /&gt;
&lt;br /&gt;
Supposedly, this was fixed in 2.6.1&lt;br /&gt;
&lt;br /&gt;
== sshd on FreeBSD 6.2 segfaults ==&lt;br /&gt;
&lt;br /&gt;
First try to reinstall ssh&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/secure&lt;br /&gt;
cd lib/libssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.sbin/sshd&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.bin/ssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Failing that, find the library that’s messed up:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;ldd /usr/sbin/sshd&lt;br /&gt;
         libssh.so.3 =&amp;gt; /usr/lib/libssh.so.3 (0x280a3000) &lt;br /&gt;
         libutil.so.5 =&amp;gt; /lib/libutil.so.5 (0x280d8000) &lt;br /&gt;
         libz.so.3 =&amp;gt; /lib/libz.so.3 (0x280e4000) &lt;br /&gt;
         libwrap.so.4 =&amp;gt; /usr/lib/libwrap.so.4 (0x280f5000) &lt;br /&gt;
         libpam.so.3 =&amp;gt; /usr/lib/libpam.so.3 (0x280fc000) &lt;br /&gt;
         libbsm.so.1 =&amp;gt; /usr/lib/libbsm.so.1 (0x28103000) &lt;br /&gt;
         libgssapi.so.8 =&amp;gt; /usr/lib/libgssapi.so.8 (0x28112000) &lt;br /&gt;
         libkrb5.so.8 =&amp;gt; /usr/lib/libkrb5.so.8 (0x28120000) &lt;br /&gt;
         libasn1.so.8 =&amp;gt; /usr/lib/libasn1.so.8 (0x28154000) &lt;br /&gt;
         libcom_err.so.3 =&amp;gt; /usr/lib/libcom_err.so.3 (0x28175000) &lt;br /&gt;
         libroken.so.8 =&amp;gt; /usr/lib/libroken.so.8 (0x28177000) &lt;br /&gt;
         libcrypto.so.4 =&amp;gt; /lib/libcrypto.so.4 (0x28183000) &lt;br /&gt;
         libcrypt.so.3 =&amp;gt; /lib/libcrypt.so.3 (0x28276000) &lt;br /&gt;
         libc.so.6 =&amp;gt; /lib/libc.so.6 (0x2828e000) &lt;br /&gt;
         libmd.so.3 =&amp;gt; /lib/libmd.so.3 (0x28373000)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
md5 them and compare to other jail hosts or jails running on host&lt;br /&gt;
&lt;br /&gt;
for libcrypto reinstall:&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/src/crypto&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Moving a server to a different account in the management system ==&lt;br /&gt;
&lt;br /&gt;
This is a tough one. You have to touch a bunch of tables in the mysql database.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  systems&lt;br /&gt;
  subs&lt;br /&gt;
  ipmap&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
basically, get the sysid and go off that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
update systems set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update subs set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update ipmap set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You then need to rename the server disk space to change the old ID to the new one.&lt;br /&gt;
&lt;br /&gt;
For a jail -&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stopjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;     69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;-DIR 69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;-DIR&lt;br /&gt;
vi /usr/local/jail/bin/quad1       [Update the ID in the quad file]&lt;br /&gt;
startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a virt -&lt;br /&gt;
You will have to &lt;br /&gt;
* stop the virt &lt;br /&gt;
* rename the directories &lt;br /&gt;
* modify the config file /vzconf/&amp;lt;veid&amp;gt;.conf &lt;br /&gt;
* rename the virt conf file &lt;br /&gt;
* restart the new virt.&lt;br /&gt;
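&lt;br /&gt;
Roughly, the steps above look like this (a sketch - adjust /vz to /vz1 or /vz2 as needed, and verify against the actual layout before running):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl stop &amp;lt;veid&amp;gt;&lt;br /&gt;
mv /vz/private/&amp;lt;veid&amp;gt; /vz/private/&amp;lt;newveid&amp;gt;&lt;br /&gt;
vi /vzconf/&amp;lt;veid&amp;gt;.conf       [update the VE_PRIVATE/VE_ROOT paths]&lt;br /&gt;
mv /vzconf/&amp;lt;veid&amp;gt;.conf /vzconf/&amp;lt;newveid&amp;gt;.conf&lt;br /&gt;
vzctl start &amp;lt;newveid&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;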
&lt;br /&gt;
That ought to do it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Re-enable a customer account in the management system ==&lt;br /&gt;
&lt;br /&gt;
For example to re-instate col03438, In mysql:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
show tables ;&lt;br /&gt;
select * from customers where cid=&#039;col03438&#039; ;&lt;br /&gt;
update customers set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from  systems where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set cancel_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from subs where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set status=1 where cid=&#039;col03438&#039; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ganeti VPS =&lt;br /&gt;
&lt;br /&gt;
This is how to create a VPS on the Lamphost ganeti cluster.&lt;br /&gt;
I have created scripts for creating a FreeBSD VPS and one for&lt;br /&gt;
creating a Linux VPS.&lt;br /&gt;
&lt;br /&gt;
== Ganeti Install a Linux VPS ==&lt;br /&gt;
&lt;br /&gt;
To create a linux VPS use&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/new &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
( for example /root/new vps.example.com 30 2 69.55.238.10 centos-8 )&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to customize the install by adding a &amp;quot;jci&amp;quot; account and a customer&lt;br /&gt;
account (col0XXXX) as well.  Make sure the customer acct (col0XXXX) is&lt;br /&gt;
in a group to allow sudo.&lt;br /&gt;
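&lt;br /&gt;
A sketch of that customization (the sudo group name varies by distro - wheel on CentOS, sudo on Debian/Ubuntu):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
useradd -m jci&lt;br /&gt;
useradd -m col0XXXX&lt;br /&gt;
passwd col0XXXX&lt;br /&gt;
usermod -aG wheel col0XXXX     # or: usermod -aG sudo col0XXXX&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;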
&lt;br /&gt;
== Ganeti Install a FreeBSD VPS ==&lt;br /&gt;
&lt;br /&gt;
To create a FreeBSD VPS, we have to do a full install.  This will &lt;br /&gt;
create a new VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/newfreebsd12 &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will then have to do the install via VNC.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You need to customize the install by adding a &amp;quot;jci&amp;quot; account and a customer &lt;br /&gt;
account (col0XXXX) as well. Make sure the customer acct (col0XXXX) is in a &lt;br /&gt;
group to allow sudo. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
You will need to install sudo.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
pkg install sudo    # install sudo package&lt;br /&gt;
visudo              # to add wheel account to sudoers&lt;br /&gt;
vi /etc/group       # add jci and user acct to wheel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
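&lt;br /&gt;
In visudo, the usual change is to uncomment the wheel line so members of wheel can sudo (the exact line depends on the sudoers file shipped with the package):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%wheel ALL=(ALL) ALL&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;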
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You will also have to set up a console in /etc/ttys.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vi /etc/ttys&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
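&lt;br /&gt;
The goal is a getty on the serial console so &amp;lt;tt&amp;gt;gnt-instance console&amp;lt;/tt&amp;gt; gets a login prompt. A typical entry looks something like this (the terminal name and flags vary by FreeBSD version, so check the existing file):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ttyu0 &amp;quot;/usr/libexec/getty 3wire&amp;quot; vt100 on secure&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;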
&lt;br /&gt;
== Ganeti restart a VPS ==&lt;br /&gt;
&lt;br /&gt;
To reboot a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance reboot &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Ganeti shutdown a VPS ==&lt;br /&gt;
&lt;br /&gt;
To shutdown a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance shutdown &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To permanently delete a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance remove &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Ganeti Re-Install a VPS ==&lt;br /&gt;
&lt;br /&gt;
Re-installing the OS will destroy all data on the VPS.&lt;br /&gt;
Login to the Ganeti master node (currently gn1).&lt;br /&gt;
&lt;br /&gt;
To reinstall the OS on a VPS the VPS must be shutdown.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance shutdown &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Do the reinstall.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance reinstall &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance startup &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to do the customization for the VPS.  This is&lt;br /&gt;
basically the same as the Dedicated server customization.&lt;br /&gt;
&lt;br /&gt;
 https://69.55.229.8/index.php/New_Signups#Dedicated_Server_Install_Checklist&lt;br /&gt;
&lt;br /&gt;
== Ganeti login to a VPS ==&lt;br /&gt;
&lt;br /&gt;
To login to a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance console &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;carriage return&amp;gt;&lt;br /&gt;
&amp;lt;carriage return&amp;gt;&lt;br /&gt;
&amp;lt;carriage return&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Use jci and appropriate Ganeti jci password.&lt;br /&gt;
&lt;br /&gt;
Or you can ssh to the VPS name or IP address.&lt;br /&gt;
&lt;br /&gt;
= FreeBSD 10.1 VPS (jail5 and jail6) =&lt;br /&gt;
&lt;br /&gt;
This is information on how to manage the FreeBSD 10.1 VPS servers on jail5 and jail6.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 virtmake IP[,IP] &amp;lt;CID&amp;gt; hostname ipfw# email {FSP|FB|FS|FM|FP} [size in GB]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
For now, the CID must be unique in its last 2 digits.&lt;br /&gt;
&lt;br /&gt;
Must do a &amp;quot;vm start &amp;lt;CID&amp;gt;&amp;quot; after creating a VPS.&lt;br /&gt;
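&lt;br /&gt;
A hypothetical end-to-end example (all values here are made up, following the usage line above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
virtmake 69.55.230.42 col01234 vps.example.com 1234 user@example.com FB 40&lt;br /&gt;
vm start col01234&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;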
&lt;br /&gt;
== Starting VPS ==&lt;br /&gt;
To start all vms on the system&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To start a single VPS &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== List running VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm status&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Attach to a user VPS console ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm attach &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To detach from a console, type tilde control-d (~ ctl-d).&lt;br /&gt;
&lt;br /&gt;
== Stop a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Restart a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm restart &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Modify a VPS ==&lt;br /&gt;
&lt;br /&gt;
To modify a VPS config, stop the VPS&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Mount the VPS data file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm mount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
cd /data/vm/private/&amp;lt;CID&amp;gt;/mount&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Modify the user files necessary.&lt;br /&gt;
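&lt;br /&gt;
For example, to fix a customer&#039;s network config by hand (paths per the mount above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /data/vm/private/&amp;lt;CID&amp;gt;/mount&lt;br /&gt;
vi etc/rc.conf       # hostname=, ifconfig_ and defaultrouter lines&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;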
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm umount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Notes on a new FBSD 10.x VPS (jail6) ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#! /bin/sh&lt;br /&gt;
#&lt;br /&gt;
#  new  &amp;lt;colo#&amp;gt;  &amp;lt;disk&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
memory=1024&lt;br /&gt;
&lt;br /&gt;
x=&amp;quot;$1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
disk=&amp;quot;$2&amp;quot;&lt;br /&gt;
if [ &amp;quot;y&amp;quot; = &amp;quot;y$disk&amp;quot; ] ; then   # default the disk size if none given&lt;br /&gt;
    disk=60&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
disk2=`expr $disk - 4`&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vm image provision 04dcdf96-d843-11e6-b131-001e67fcc64f col0$x&lt;br /&gt;
# vm configure col0$x&lt;br /&gt;
sed -e &amp;quot;s/memory=256M/memory=${memory}M/&amp;quot; /vms/col01114/col01114.conf &amp;gt;/vms/col0$x/col0$x.conf&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
# vm start col0$x&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
exit&lt;br /&gt;
 &lt;br /&gt;
vm console col0$x&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
# to increase the customer disk&lt;br /&gt;
#&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
&lt;br /&gt;
##  on the customer VPS&lt;br /&gt;
&lt;br /&gt;
gpart recover vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapoff /dev/vtbd0p3&lt;br /&gt;
gpart delete -i 3 vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
sysctl kern.geom.debugflags=16&lt;br /&gt;
gpart resize -i 2 -a 4k -s $disk2\G vtbd0&lt;br /&gt;
&lt;br /&gt;
growfs /dev/vtbd0p2&lt;br /&gt;
&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
gpart add -t freebsd-swap -a 4k vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapon /dev/vtbd0p3&lt;br /&gt;
&lt;br /&gt;
df -h&lt;br /&gt;
reboot&lt;br /&gt;
&lt;br /&gt;
vi /etc/rc.conf&lt;br /&gt;
  hostname&lt;br /&gt;
  IP Addr&lt;br /&gt;
passwd root&lt;br /&gt;
passwd user&lt;br /&gt;
clear history&lt;br /&gt;
  cp /dev/null /var/log/utx.log&lt;br /&gt;
  cp /dev/null ~root/.history&lt;br /&gt;
  history -c&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
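&lt;br /&gt;
To make the disk arithmetic in the script concrete, with the default disk=60:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
disk2 = 60 - 4 = 56   # partition 2 (the filesystem) is resized to 56G&lt;br /&gt;
                      # the remaining 4G is re-added as freebsd-swap&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;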
&lt;br /&gt;
= FreeBSD Jails =&lt;br /&gt;
&lt;br /&gt;
== Starting jails: Quad/Safe Files ==&lt;br /&gt;
&lt;br /&gt;
FreeBSD customer systems do not start up automatically at boot time.  When one of our FreeBSD machines boots up, it brings up the base system and nothing else. To start jails, we put the commands to start each jail into shell scripts and run those scripts. Jail startup is something that needs to be actively monitored, which is why we don’t just run the scripts automatically. More on monitoring later.&lt;br /&gt;
&lt;br /&gt;
NOTE: &amp;gt;=7.x we have moved to 1 quad file: &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. Startups are not done by running each quad, but rather [[#startalljails|startalljails]] which relies on the contents of &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. The specifics of this are lower in this article. What follows here applies for pre 7.x systems.&lt;br /&gt;
&lt;br /&gt;
There are eight files in &amp;lt;tt&amp;gt;/usr/local/jail/rc.d&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail3# ls /usr/local/jail/rc.d/&lt;br /&gt;
quad1   quad2   quad3   quad4   safe1   safe2   safe3   safe4&lt;br /&gt;
jail3#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
four quad files and four safe files.&lt;br /&gt;
&lt;br /&gt;
Each file contains a roughly equal number of system startup blocks (the total number of jails divided by 4).&lt;br /&gt;
 &lt;br /&gt;
The reason for this is, if we make one large script to startup all the systems at boot time, it will take too long - the first system in the script will start up right after system boot, which is great, but the last system may not start for another 20 minutes.&lt;br /&gt;
&lt;br /&gt;
Since there is no way to parallelize this during the startup procedure, we simply open four terminals (in screen window 9) and run each script, one in each terminal. This way they all run simultaneously, and the very last system in each startup script gets started in a quarter of the time it would take with one large file.&lt;br /&gt;
&lt;br /&gt;
The files are generally organized so that quad/safe 1&amp;amp;2 have only jails from disk 1, and quad/safe 3&amp;amp;4 have jails from disk 2. This helps ensure that only 2 fscks on any disk are going on at once. Further, they are balanced so that all quad/safe’s finish executing around the same time. We do this by making sure each quad/safe has a similar number of jails  and represents a similar number of inodes (see js).&lt;br /&gt;
&lt;br /&gt;
The other, very important reason we do it this way, and this is the reason there are quad files and safe files, is that in the event of a system crash, every single vn-backed filesystem that was mounted at the time of system crash needs to be fsck&#039;d.  However, fsck&#039;ing takes time, so if we shut the system down gracefully, we don&#039;t want to fsck.&lt;br /&gt;
&lt;br /&gt;
Therefore, we have two sets of scripts - the four quad scripts are identical to the four safe scripts except for the fact that the quad scripts contain fsck commands for each filesystem.&lt;br /&gt;
&lt;br /&gt;
So, if you shut a system down gracefully, start four terminals and run safe1 in window one, and safe2 in window 2, and so on.&lt;br /&gt;
 &lt;br /&gt;
If you crash, start four terminals (or go to screen window 9) and run quad1 in window one, and quad2 in window 2, and so on.&lt;br /&gt;
&lt;br /&gt;
Here is a snip of (a 4.x version) quad2 from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
fsck -y /dev/vn16&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#fsck -y /dev/vn28&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo ‘### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted’&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
fsck -y /dev/vn22&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#fsck -y /dev/vn15&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, two of the systems specified are commented out - presumably those customers cancelled, or were moved to new servers.&lt;br /&gt;
&lt;br /&gt;
Note that the vnconfig line is the simpler command line, not the longer one that was used when the filesystem was first configured.  All that is done is: vnconfig the filesystem, then fsck it, then mount it. The fourth command is the `jail` command used to start the system – but that will be covered later.&lt;br /&gt;
&lt;br /&gt;
Here is the safe2 file from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo ‘### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted’&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, it is exactly the same, but it does not have the fsck lines.&lt;br /&gt;
&lt;br /&gt;
Take a look at the last entry - note that the file is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
&lt;br /&gt;
and the mount point is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
&lt;br /&gt;
This is the general format on all the FreeBSD systems.  The file is always named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber&lt;br /&gt;
&lt;br /&gt;
and the directory is named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber-DIR&lt;br /&gt;
&lt;br /&gt;
If you run safe when you need a fsck, the mount will fail and jail will fail:&lt;br /&gt;
&lt;br /&gt;
 # mount /dev/vn1c /mnt/data2/jails/65.248.2.131-ns1.kozubik.com-DIR&lt;br /&gt;
 mount: /dev/vn1c: Operation not permitted&lt;br /&gt;
&lt;br /&gt;
No reboot needed, just run the quad script&lt;br /&gt;
&lt;br /&gt;
Starting with 6.x jails, we added block delimiters to the quad/safe files; a block looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: nuie.solaris.mu&#039;&lt;br /&gt;
fsck -y /dev/concat/v30v31a&lt;br /&gt;
mount /dev/concat/v30v31a /mnt/data1/69.55.228.218-col01441-DIR&lt;br /&gt;
mount_devfs devfs /mnt/data1/69.55.228.218-col01441-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.228.218-col01441-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.228.218-col01441-DIR nuie.solaris.mu 69.55.228.218 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: nuie.solaris.mu&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These are more than just informative when running quad/safe’s; the echo lines MUST be present for certain tools to work properly. So it’s important that any change to the hostname also be made on the 2 echo lines. For example, if you try to startjail a jail whose hostname is on the jail line but not the echo lines, the command will return with host not found.&lt;br /&gt;
&lt;br /&gt;
=== FreeBSD 7.x+ notes ===&lt;br /&gt;
&lt;br /&gt;
Starting with the release of FreeBSD 7.x, we are doing jail startups in a slightly different way. First, there is only 1 file: &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/quad1&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are no other quads or corresponding safe files. The reason for this is twofold: 1. We can pass -C to fsck, which tells it to skip the fsck if the fs is clean (no more need for safe files); 2. We have a new startup script which can be launched multiple times, running in parallel to start jails, where quad1 is the master jail file.&lt;br /&gt;
Quad1 could still be run as a shell script, but it would take a very long time to run completely, so it’s not advisable; either use the startup script or break quad1 down into smaller chunks (like quad1, quad2, quad3, etc).&lt;br /&gt;
&lt;br /&gt;
Here is a snip of (a 7.x version) quad1 from jail2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: projects.tw.com&#039;&lt;br /&gt;
mdconfig -a -t vnode -f /mnt/data1/69.55.230.46-col01213 -u 50&lt;br /&gt;
fsck -Cy /dev/md50c&lt;br /&gt;
mount /dev/md50c /mnt/data1/69.55.230.46-col01213-DIR&lt;br /&gt;
mount -t devfs devfs /mnt/data1/69.55.230.46-col01213-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.230.46-col01213-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.230.46-col01213-DIR projects.tw.com 69.55.230.46 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: projects.tw.com&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cancelled jails are no longer commented out and stored in quad1, rather they’re moved to &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/deprecated&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
To start these jails, start the 4 ssh sessions as you would for a normal crash and then, instead of running quad1-4, run startalljails in each window. IMPORTANT - before running startalljails, make sure you ran preboot once, as it clears out all the lockfiles and enables startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== Problems with the quad/safe files ==&lt;br /&gt;
&lt;br /&gt;
When you run the quad/safe files, there are two problems that can occur - either a particular system will hang during initialization, OR a system will spit out output to the screen, impeding your ability to do anything.  Or both.&lt;br /&gt;
&lt;br /&gt;
First off, when you start a jail, you see output like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;Skipping disk checks ...&lt;br /&gt;
adjkerntz[25285]: sysctl(put_wallclock): Operation not permitted&lt;br /&gt;
Doing initial network setup:.&lt;br /&gt;
ifconfig: ioctl (SIOCDIFADDR): permission denied&lt;br /&gt;
lo0: flags=8049&amp;lt;UP,LOOPBACK,RUNNING,MULTICAST&amp;gt; mtu 16384&lt;br /&gt;
Additional routing options: TCP keepalive=YESsysctl:&lt;br /&gt;
net.inet.tcp.always_keepalive: Operation not permitted.&lt;br /&gt;
Routing daemons:.&lt;br /&gt;
Additional daemons: syslogd.&lt;br /&gt;
Doing additional network setup:.&lt;br /&gt;
Starting final network daemons:.&lt;br /&gt;
ELF ldconfig path: /usr/lib /usr/lib/compat /usr/X11R6/lib /usr/local/lib&lt;br /&gt;
a.out ldconfig path: /usr/lib/aout /usr/lib/compat/aout /usr/X11R6/lib/aout&lt;br /&gt;
Starting standard daemons: inetd cron sshd sendmail sendmail-clientmqueue.&lt;br /&gt;
Initial rc.i386 initialization:.&lt;br /&gt;
Configuring syscons: blanktime.&lt;br /&gt;
Additional ABI support:.&lt;br /&gt;
Local package initialization:.&lt;br /&gt;
Additional TCP options:.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, let&#039;s look at this line, near the end:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization:.&lt;br /&gt;
&lt;br /&gt;
This is where a list of daemons that are set to start at boot time will show up.  You might see something like:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache sendmail sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
Or something like this:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: postgres postfix apache&lt;br /&gt;
&lt;br /&gt;
The problem is that many systems (about 4-5 per machine) will hang on that line.  Basically it will get partway through the list of daemons to be started:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache&lt;br /&gt;
&lt;br /&gt;
and will just sit there.  Forever.&lt;br /&gt;
&lt;br /&gt;
Fortunately, pressing ctrl-c will break out of it.  Not only will it break out of it, but it will also continue on that same line and start the other daemons:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache ^c sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
and then continue on to finish the startup, and then move to the next system to be started.&lt;br /&gt;
&lt;br /&gt;
So what does this mean?  It means that if a machine crashes, and you start four screen-windows to run four quads or four safes, you need to periodically cycle between them and see if any systems are stuck at that point, causing their quad/safe file to hang.  A good rule of thumb is, if you see a system at that point in the startup, give it another 100 seconds - if it is still at the exact same spot, hit ctrl-c. It’s also a good idea to go back into the quad file (just before the first command in the jail startup block) and note that this jail tends to need a control-c or more time as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;### NOTE ### slow sendmail&#039;&lt;br /&gt;
echo &#039;### NOTE ###: ^C @ Starting sendmail.&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NEVER&#039;&#039;&#039; hit ctrl-c repeatedly if you don&#039;t get an immediate response - that will cause the following jail’s startup commands to be aborted.&lt;br /&gt;
&lt;br /&gt;
A second problem that can occur is that a jail - maybe the first one in that particular quad/safe, maybe the last one, or maybe one in the middle, will start spitting out status or error messages from one of its init scripts.  This is not a problem - basically, hit enter a few times and see if you get a prompt - if you do get a prompt, that means that the quad/safe script has already completed.  Therefore it is safe to log out (and log out of the user that you su&#039;d from) and then log back in (if necessary).&lt;br /&gt;
&lt;br /&gt;
The tricky thing is, if a system in the middle starts flooding with messages, and you hit enter a few times and don&#039;t get a prompt.  Are you not getting a prompt because some subsequent system is hanging at the initialization, as we discussed above?  Or are you not getting a prompt because that quad file is currently running an fsck?  Usually you can tell by scrolling back in screen’s history to see what it was doing before you started getting the messages.&lt;br /&gt;
&lt;br /&gt;
If you don’t get clues from history, you have to use your judgement - instead of giving it 100 seconds to respond, perhaps give it 2-3 mins ... if you still get no response (no prompt) when you hit enter, hit ctrl-c.  However, be aware that you might still be hitting ctrl-c in the middle of an fsck.  This means you will get an error like &amp;quot;filesystem still marked dirty&amp;quot; and then the vnconfig for it will fail and so will the jail command, and the next system in the quad file will then start starting up.&lt;br /&gt;
&lt;br /&gt;
If this happens, just wait until all the quad files have finished, and then start that system manually.&lt;br /&gt;
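&lt;br /&gt;
Starting a single jail manually just means running its block from the quad file by hand, e.g. (a 7.x-style block taken from quad1 - substitute the jail&#039;s own md unit, paths and hostname):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -f /mnt/data1/69.55.230.46-col01213 -u 50&lt;br /&gt;
fsck -Cy /dev/md50c&lt;br /&gt;
mount /dev/md50c /mnt/data1/69.55.230.46-col01213-DIR&lt;br /&gt;
mount -t devfs devfs /mnt/data1/69.55.230.46-col01213-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.230.46-col01213-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.230.46-col01213-DIR projects.tw.com 69.55.230.46 /bin/sh /etc/rc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;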
&lt;br /&gt;
If things really get weird, like a screen flooded with errors, and you can&#039;t get a prompt, and ctrl-c does nothing, then you need to just eventually (give it ten mins or so) just kill that window with ctrl-p, then k, and then log in again and manually check which systems are now running and which aren&#039;t, and manually start up any that are not.&lt;br /&gt;
&lt;br /&gt;
Don&#039;t EVER risk running a particular quad/safe file a second time.&lt;br /&gt;
If the quad/safe script gets executed twice, reboot the machine immediately.&lt;br /&gt;
&lt;br /&gt;
So, for all the above reasons, anytime a machine crashes and you run all the quads or all the safes, &#039;&#039;&#039;always&#039;&#039;&#039; check every jail afterwards to make sure it is running - even if you have no hangs or complications at all.&lt;br /&gt;
Run this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailpsall|jailpsall]]&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Note: [[#postboot|postboot]] also populates ipfw counts, so it &#039;&#039;&#039;should not be run multiple times&#039;&#039;&#039;,  use &amp;lt;tt&amp;gt;jailpsall&amp;lt;/tt&amp;gt; for subsequent extensive ps’ing&lt;br /&gt;
&lt;br /&gt;
And make sure they all show as running.  If one does not show as running, check its /etc/rc.conf file to see if it is using a different hostname before starting it manually.&lt;br /&gt;
&lt;br /&gt;
One thing we have implemented to alleviate these startup hangs and noisy jails, is to put jail start blocks that are slow or hangy at the bottom of the safe/quad file. Further, for each bad jail we note in each quad/safe just before the start block something like:&lt;br /&gt;
&lt;br /&gt;
 echo ‘### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted’&lt;br /&gt;
&lt;br /&gt;
That way we’ll be prepared to ^C when we see that message appear during the quad/safe startup process. If you observe a new, undocumented hang, &#039;&#039;&#039;after&#039;&#039;&#039; the quad/safe has finished, place a line similar to the above in the quad file, move the jail start block to the end of the file, then run [[#buildsafe|buildsafe]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 4.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which partition, IP to put it on, find an unused vn, and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vnX hostname shorthost quad/safe# ipfw# email [gb disk, default 4]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail14# /tmp/jailmake 69.55.226.152 /mnt/data2/69.55.226.152-col00182 vn23 negev.cerastes.org negev 4 182 cerastes@cerastes.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 6.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which gvinum vols are free (and which mnt (disk) each gvinum goes with/to), IP to put it on and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vN[,vN] hostname shorthost quad/safe# ipfw# email&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail19# jailmake 69.55.236.17 /mnt/data1/69.55.236.17-col01574 v10 uncle-leo.commadev.com uncle-leo 1 1574 lee@commadev.com,lee@gmail.com&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Changing an ip for freebsd VPS ==&lt;br /&gt;
&lt;br /&gt;
*stopjail (hostname)&lt;br /&gt;
*on systems using vnfile or mdfile: rename vnfile/mdfile, using new ip&lt;br /&gt;
*rename dir using new ip&lt;br /&gt;
*Edit quad (make sure to update all lines: directory, vn/mdfile)&lt;br /&gt;
*buildsafe (not necessary on systems having only quad1)&lt;br /&gt;
*if you&#039;re adding an ip not already on the host machine: &amp;lt;tt&amp;gt;ipadd x.x.x.x&amp;lt;/tt&amp;gt;&lt;br /&gt;
*startjail (hostname)&lt;br /&gt;
*if backups: &amp;lt;tt&amp;gt;mvbackups&amp;lt;/tt&amp;gt; ...&lt;br /&gt;
*Edit dir/ip in mgmt&lt;br /&gt;
*if any rules, update any firewall rules to use the new ip (be careful to check there aren&#039;t multiple rules per ipfw# - search by colo#)&lt;br /&gt;
*(if customer asks and has domains) update their domains on ns1c&lt;br /&gt;
*optional: update ipfw counters on the host server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Rename a gconcat vol ==&lt;br /&gt;
&lt;br /&gt;
*stop jail, unmount everything&lt;br /&gt;
*gconcat stop vol&lt;br /&gt;
*gconcat clear /dev/gvinum/vN&lt;br /&gt;
*gconcat clear /dev/gvinum/vN+1&lt;br /&gt;
*gconcat label -v newname /dev/gvinum/vN /dev/gvinum/vN+1&lt;br /&gt;
*bsdlabel /dev/concat/newname&lt;br /&gt;
(make sure a partition is not fstype unused, if so change to: 4.2BSD)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Remaking a VPS (on same jail) ==&lt;br /&gt;
&lt;br /&gt;
# take him down (stopjail)&lt;br /&gt;
# make note of host, vn/gvin/md, ip&lt;br /&gt;
# remove from quad/safe&lt;br /&gt;
# move vnfile (rename)&lt;br /&gt;
# remove dir (jailmake will make a new one)&lt;br /&gt;
# remove ipfw counts&lt;br /&gt;
# jailmake&lt;br /&gt;
# update backup script&lt;br /&gt;
# update db with new dir/ip (use the “move” link, mark old as stopped today)&lt;br /&gt;
# update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (FreeBSD) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all jails back up as soon as possible. Note the time; you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages output. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log. If there are no messages, the machine may just be really busy - wait a bit (5-10min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. Note: if you see messages about swap space exhausted, the server is obviously out of memory; however, it may recover briefly enough for you to get a jtop in to see who&#039;s launched a ton of procs (most likely) and then issue a quick jailkill to get it back under control.&lt;br /&gt;
&lt;br /&gt;
If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] - i.e. if you can’t ssh into the DRAC card (as root, using the standard root pass) and issue&lt;br /&gt;
 racadm serveraction hardreset&lt;br /&gt;
- then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console:&lt;br /&gt;
 tip jailX&lt;br /&gt;
immediately after power down.&lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts- look out for the RAID card output- does everything look healthy? The output may be scrambled, look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple times during the boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly then you won’t get any console output, 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
IMPORTANT NOTE: on some older FreeBSD systems, there will be no output to the video (KVM) console as the machine boots up. The console output is redirected to the serial port, so if a jail crashes and you attach a KVM, the output during the bootup procedure will not be shown on the screen. However, when the bootup is done, you will get a login prompt on the screen and will be able to log in as normal. The serial console redirect lives in &amp;lt;tt&amp;gt;/boot/loader.conf&amp;lt;/tt&amp;gt;, so comment out that line if you want to catch output on the KVM.&lt;br /&gt;
Newer systems send most output to both locations.&lt;br /&gt;
&lt;br /&gt;
=== Assess the health of the server ===&lt;br /&gt;
Once the server boots up fully, you should be able to ssh in. Look around: make sure all the mounts are there and reporting the correct size/usage (i.e. /mnt/data1 /mnt/data2 /mnt/data3; look in /etc/fstab to determine which mount points should be there), and check that the RAID mirrors are healthy. See [[RAID_Cards#Common_CLI_commands_.28megacli.29|megacli]], [[#aaccheck|aaccheck]]&lt;br /&gt;
&lt;br /&gt;
Before you start the jails, you need to run [[#preboot|preboot]]. This will do some assurance checks to make sure things are prepped to start the jails. Any issues that come out of preboot need to be addressed before starting jails.&lt;br /&gt;
&lt;br /&gt;
=== Start jails ===&lt;br /&gt;
[[#Starting_jails:_Quad.2FSafe_Files|More on starting jails]]&lt;br /&gt;
Customer jails (the VPSs) do not start up automatically at boot time. When a FreeBSD machine boots up, it boots the base system and does nothing else. To start jails, we put the commands to start each jail into a shell script(s) and run the script(s). Jail startup is something that needs to be actively monitored, which is why we don’t just run the script automatically.&lt;br /&gt;
&lt;br /&gt;
In order to start jails, we run the quad files: quad1 quad2 quad3 and quad4 (on new systems there is only quad1). If the machine was cleanly rebooted (which wouldn&#039;t be the case if this was a crash), you may run the safe files (safe1 safe2 safe3 safe4) in lieu of the quads.&lt;br /&gt;
&lt;br /&gt;
Open up 4 logins to the server (use the windows in [[Screen#Screen_Organization|a9]]).&lt;br /&gt;
In each of the 4 windows you will do one of the following:&lt;br /&gt;
&lt;br /&gt;
If there is a [[#startalljails|startalljails]] script (and only quad1), run that command in each of the 4 windows. It will parse through the quad1 file and start each jail. Follow the instructions [[#Problems_with_the_quad.2Fsafe_files|here]] for monitoring startup. Note that you can be a little more lenient with jails that take a while to start; startalljails will work around the slow jails and start the rest. As long as there aren&#039;t 4 jails which are &amp;quot;hung&amp;quot; during startup, the rest will get started eventually.&lt;br /&gt;
	-or-&lt;br /&gt;
If there is no startalljails script, there will be multiple quad files. In each of the 4 windows, start one of the quads, i.e. quad1 in window1, quad2 in window2 and so on. DO NOT start any quad twice; it will crash the server. If you accidentally do this, just jailkill all the jails which are in that quad and run the quad again. Follow the instructions [[#Problems_with_the_quad.2Fsafe_files|here]] for monitoring quad startup.&lt;br /&gt;
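The quad files follow the &amp;lt;tt&amp;gt;## begin ##&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;## end ##&amp;lt;/tt&amp;gt; marker convention described later on this page. As a minimal sketch (the file layout, commands, and hostname below are hypothetical examples, not a real quad file), you can list which jails a quad file would start by scanning its begin markers:&lt;br /&gt;

```shell
# Build a mock quad-file fragment; real quad files differ in detail,
# so this layout and the hostname are hypothetical examples.
cat > /tmp/quad1.example <<'EOF'
## begin www.example-vps.com ##
mdconfig -a -t vnode -f /mnt/data1/69.55.234.66-col01334 -u 10
mount /dev/md10 /mnt/data1/69.55.234.66-col01334-DIR
## end www.example-vps.com ##
EOF
# Print the jails this file would start, taken from the begin markers:
awk '/^## begin / { print $3 }' /tmp/quad1.example
```

This is only a reading aid for the file format; actually starting jails is done by running the quad/safe scripts themselves, as described above.&lt;br /&gt;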
&lt;br /&gt;
Note the time the last jail boots; this is what you will enter in the crash log.&lt;br /&gt;
&lt;br /&gt;
Save the crash log.&lt;br /&gt;
&lt;br /&gt;
=== Check to make sure all jails have started ===&lt;br /&gt;
There&#039;s a simple script which will make sure all jails have started, and enter the ipfw counter rules: [[#postboot|postboot]] &lt;br /&gt;
Run postboot, which will do a jailps on each jail it finds (excluding commented out jails) in the quad file(s). We&#039;re looking for 2 things:&lt;br /&gt;
# jails spawning processes out of control or running too many procs&lt;br /&gt;
# jails which haven&#039;t started&lt;br /&gt;
On 7.x and newer systems it will print out the problems (which jails haven&#039;t started) at the conclusion of postboot. &lt;br /&gt;
On older systems you will need to watch closely to see if/when there&#039;s a problem, namely:&lt;br /&gt;
 &lt;br /&gt;
 [hostname] doesnt exist on this server&lt;br /&gt;
&lt;br /&gt;
When you get this message, it means one of 2 things:&lt;br /&gt;
1. the jail really didn&#039;t start:&lt;br /&gt;
When a jail doesn&#039;t start it usually boils down to a problem in the quad file. Perhaps the path name is wrong (data1 vs data2) or the name of the vn/mdfile is wrong. Once this is corrected, you will need to run the commands from the quad file manually, or you may use &amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. the customer has changed their hostname (and not told us) so their jail &#039;&#039;is&#039;&#039; running, just under a different hostname:&lt;br /&gt;
On systems with jls, this is easy to rectify. First, get the customer info: &amp;lt;tt&amp;gt;g &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
Then look for the customer in jls: &amp;lt;tt&amp;gt;jls | grep &amp;lt;col0XXXX&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
From there you will see their new hostname- you should update that hostname in the quad file: don&#039;t forget to edit it on the &amp;lt;tt&amp;gt;## begin ##&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;## end ##&amp;lt;/tt&amp;gt; lines, and in mgmt. &lt;br /&gt;
On older systems without jls, this will be harder; you will need to look further to find their hostname: perhaps it’s in their /etc/rc.conf&lt;br /&gt;
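The jls lookup above can be sketched as follows. Since this page can’t run against a live jail host, the jls output is simulated with a shell variable, and the customer’s new hostname is a hypothetical example; on a real system you would pipe &amp;lt;tt&amp;gt;jls&amp;lt;/tt&amp;gt; itself into grep:&lt;br /&gt;

```shell
# Simulated `jls` output; the JIDs, hostnames and paths below are
# hypothetical examples standing in for real jail-host output.
jls_output=' JID  IP Address      Hostname              Path
   1  69.55.234.66    vps.new-name.example  /mnt/data3/69.55.234.66-col01334-DIR
   2  69.55.237.26    mail.other.example    /mnt/data1/69.55.237.26-col00241-DIR'
# Find the customer by their col# -- it survives in the jail path even
# when the customer has changed their hostname:
printf '%s\n' "$jls_output" | grep col01334
```

The matching line shows the hostname the jail is actually running under, which is what you then update in the quad file and in mgmt.&lt;br /&gt;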
&lt;br /&gt;
&lt;br /&gt;
Once all jails are started, do some spot checks- try to ssh or browse to some customers, just to make sure things are really ok.&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 7.x/8.x jail ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (md) ==&lt;br /&gt;
&lt;br /&gt;
NOTE: this doesn’t apply to mx2, which uses gvinum; use the same procedure as for 6.x&lt;br /&gt;
NOTE: if you unmount before mdconfig, re-mdconfig (attach), then unmount, then mdconfig -u again&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
(parts to change/customize are &amp;lt;tt&amp;gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from &amp;lt;tt&amp;gt;js&amp;lt;/tt&amp;gt;. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new md on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. Unmount it (including their devfs) but leave the md config’d (so if you use stopjail, you will have to re-mdconfig it)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mdfile and mount name, and to see the current md device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. When there&#039;s enough room to place the new system on an alternate, or the same, drive:&lt;br /&gt;
USE CAUTION not to overwrite (touch, mdconfig) existing md!!&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s 10g -f /mnt/data3/69.55.234.66-col01334 -u 97&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md97&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new space is on a different drive, move the mount point directory AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
md-unconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
archive old mdfile. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/old-col00241-mdfile-noarchive-20091211&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (vq1) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
(run &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, &amp;lt;tt&amp;gt;du&amp;lt;/tt&amp;gt; the dump file to confirm its size roughly matches the usage figure)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure and remove old mdfile&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
rm /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(there should now be enough space to recreate your bigger system. If not, run sync a couple of times)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new system (ok to reuse old mdfile and md#):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;10&amp;lt;/span&amp;gt;g -f /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt; -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
umount nfs:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbu&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If md# changed (or mount point), edit quad (&amp;lt;tt&amp;gt;vq1&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. Optional: archive old mdfile&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;br&amp;gt;&lt;br /&gt;
gzip -c old-col01588-mdfile-noarchive-20120329 &amp;gt; /deprecated/old-col01588-mdfile-noarchive-20120329.gz&amp;lt;br&amp;gt;&lt;br /&gt;
mbu&amp;lt;br&amp;gt;&lt;br /&gt;
rm  old-col01588-mdfile-noarchive-20120329&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 6.x jail (gvinum/gconcat) ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (gvinum/gconcat) ==&lt;br /&gt;
&lt;br /&gt;
(parts to change are &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;highlighted&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new volume on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;[[#stopjail|stopjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;[[#g|g]] &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mount name and existing volume/device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5a. When there&#039;s enough room to place the new system on an alternate, or the same, drive (using only UNUSED gvinum volumes; a volume counts as in use even if it&#039;s in use by the system in question):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure the new device:&amp;lt;br&amp;gt;&lt;br /&gt;
A. for a 2G system (single gvinum volume):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;bsdlabel -r -w /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
-or- &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
B. for a &amp;gt;2G system (create a gconcat volume):&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Other valid gconcat examples:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label v82-v84v109v112 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v109 /dev/gvinum/v112&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat label v82v83 /dev/gvinum/v82 /dev/gvinum/v83&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Note, long names will truncate: v144v145v148-v115 will truncate to v144v145v148-v1 (so you will refer to it as v144v145v148-v1 thereafter)&lt;br /&gt;
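The truncation above can be checked ahead of time before labeling. The sketch below is a plain shell check (not a JC tool); it trims a proposed gconcat name to the 15 characters the example in the text shows surviving:&lt;br /&gt;

```shell
# Proposed gconcat label from the example above:
name=v144v145v148-v115
# Trim to the 15 characters the truncated form keeps:
truncated=$(printf '%s' "$name" | cut -c1-15)
echo "$truncated"   # v144v145v148-v1
```

If the trimmed form collides with an existing gconcat name, pick a different (shorter) label up front rather than discovering the collision after newfs.&lt;br /&gt;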
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; OR &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
A. &amp;lt;tt&amp;gt;mount /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
-or-&amp;lt;br&amp;gt;&lt;br /&gt;
B. &amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;) , run &amp;lt;tt&amp;gt;buildsafe&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space (i.e. if you want/need to reuse the existing gvinum volumes and add on more):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
(run df to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;concat/v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, du the dump file to confirm its size roughly matches the usage figure)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure the old gconcat volume&amp;lt;br&amp;gt;&lt;br /&gt;
list member gvinum volumes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat list &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Output will resemble:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;Geom name: v106v107&lt;br /&gt;
State: UP&lt;br /&gt;
Status: Total=2, Online=2&lt;br /&gt;
Type: AUTOMATIC&lt;br /&gt;
ID: 3530663882&lt;br /&gt;
Providers:&lt;br /&gt;
1. Name: concat/v106v107&lt;br /&gt;
   Mediasize: 4294966272 (4.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e2&lt;br /&gt;
Consumers:&lt;br /&gt;
1. Name: gvinum/sd/v106.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 0&lt;br /&gt;
   End: 2147483136&lt;br /&gt;
2. Name: gvinum/sd/v107.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 2147483136&lt;br /&gt;
   End: 4294966272&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
stop volume and clear members&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat stop &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat clear &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;gvinum/sd/v106.p0.s0 gvinum/sd/v107.p0.s0&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new device; it&#039;s ok to reuse old/former members&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v106 /dev/gvinum/v107&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;), run buildsafe&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
TODO: clean up/clear old gvin/gconcat vol&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
DEPRECATED: steps to tack on a new gvinum volume to an existing gconcat (leads to a corrupted fs)&lt;br /&gt;
bsdlabel -e /dev/concat/v82-v84&lt;br /&gt;
&lt;br /&gt;
To figure out new size of the c partition, multiply 4194304 by the # of 2G gvinum volumes and subtract the # of 2G volumes:&lt;br /&gt;
10G: 4194304 * 5 - 5 = 20971515&lt;br /&gt;
8G: 4194304 * 4 - 4 = 16777212&lt;br /&gt;
6G: 4194304 * 3 - 3 = 12582909&lt;br /&gt;
4G: 4194304 * 2 - 2 = 8388606&lt;br /&gt;
&lt;br /&gt;
To figure out the new size of the a partition, subtract 16 from the c partition:&lt;br /&gt;
10G: 20971515 - 16 = 20971499&lt;br /&gt;
8G: 16777212 - 16 = 16777196&lt;br /&gt;
6G: 12582909 - 16 = 12582893&lt;br /&gt;
4G: 8388606 - 16  = 8388590&lt;br /&gt;
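The size arithmetic above can be reproduced in the shell; here n is the number of 2G gvinum volumes in the gconcat (the 6G case is shown, matching the tables above):&lt;br /&gt;

```shell
# c partition: 4194304 sectors per 2G volume, times n volumes, minus n.
# a partition: 16 sectors smaller than c (room for the label).
n=3   # e.g. a 6G system built from three 2G volumes
c_size=$((4194304 * n - n))
a_size=$((c_size - 16))
echo "c: $c_size"   # c: 12582909
echo "a: $a_size"   # a: 12582893
```

Change n to 2, 4, or 5 to reproduce the 4G, 8G, and 10G rows.&lt;br /&gt;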
&lt;br /&gt;
Orig:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a:  8388590       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c:  8388606        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
New:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a: 12582893       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c: 12582909        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
sync; sync&lt;br /&gt;
&lt;br /&gt;
growfs /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
sync&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
(keep running fsck’s till NO errors)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 4.x jail ==&lt;br /&gt;
&lt;br /&gt;
NOTE: if you unmount before vnconfig, re-vnconfig (attach), then unmount, then vnconfig -u again&lt;br /&gt;
 &lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them.&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to dump to a different partition and create the new vn on the current partition. If there is no space to make the dump, then move old vn’s to backup or, worse, dump to backup.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. make vnfile, newfs and mount. USE CAUTION not to overwrite existing vn. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;vnconfig -T -S 9g -s labels -c /dev/vn30 /mnt/data2/65.214.160.117-col00496&amp;lt;br&amp;gt;&lt;br /&gt;
disklabel -r -w vn30 auto&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/vn30c&amp;lt;br&amp;gt;&lt;br /&gt;
mkdir /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/vn30c /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;dump -0a -f - /dev/vn1 | restore -r -f -&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. &amp;lt;tt&amp;gt;rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. unmount and unvnconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;umount /dev/vnNNc &amp;lt;br&amp;gt;&lt;br /&gt;
vnconfig -u /dev/vnNN&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. move vnfile. Ex: &amp;lt;tt&amp;gt;mv /mnt/data1/69.55.237.26-col00241 /mnt/data1/old-col00241-vnfile-20110312-noarchive&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove old dir: &amp;lt;tt&amp;gt;rmdir /mnt/data1/69.55.237.26-col00241-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. edit quad/safe to point to new location, run &amp;lt;tt&amp;gt;[[#buildsafe|buildsafe]]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start jail: &amp;lt;tt&amp;gt;[[#startjail|startjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
13. mv backups if necessary&lt;br /&gt;
 &lt;br /&gt;
WARNING: if you restore to an unmounted vn then you’re actually restoring to /mnt/data1;&lt;br /&gt;
you can tar the data up and follow the steps above to recover&lt;br /&gt;
 &lt;br /&gt;
NOTE: you can’t move a customer to a system whose base system differs from the one they came from (BSD 4.8 to 4.5 won’t work)&lt;br /&gt;
 &lt;br /&gt;
NOTE: we don’t mount procfs on anyone’s system by default.&lt;br /&gt;
&lt;br /&gt;
NOTE: mount_nulls:&lt;br /&gt;
goofy null mounts (mount_null) will also show up in df -k on jail2&amp;lt;br&amp;gt;&lt;br /&gt;
they also need to be added in the safe and quad files&amp;lt;br&amp;gt;&lt;br /&gt;
if a customer wants another, use the existing safe/quad entries as an example and add the new one to the file; also exec the mount command at the prompt to add the mount (no reboot, remount or re-jail necessary)&lt;br /&gt;
&lt;br /&gt;
== Moving customer to another jail machine ==&lt;br /&gt;
&lt;br /&gt;
Systems should only be moved between similarly-versioned jails. If you cannot get a match, you should go to freebsd.org and see what the differences were between the version you&#039;re moving from and the new version. Many of the updates will be to drivers and kernels - these don&#039;t affect the customer. Perhaps they updated traceroute or a library...you should copy the new/changed files into the VPS/jail on the new host. An alternate method for moving to a different version would be to buildworld to bring their full distribution up to date. Optional steps for this kind of upgrade are included below.&lt;br /&gt;
&lt;br /&gt;
1. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;[[#jailkill|jailkill]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. create new device on target system&amp;lt;br&amp;gt;&lt;br /&gt;
4.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig -T -S 4g -s labels -c /dev/vn1 /mnt/data1/69.55.22x.x-col00XXX&lt;br /&gt;
disklabel -r -w vn1 auto&lt;br /&gt;
newfs /dev/vn1c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;bsdlabel -r -w /dev/gvinum/v1&lt;br /&gt;
newfs /dev/gvinum/v1a&lt;br /&gt;
 or &lt;br /&gt;
gconcat label v1-v3 /dev/gvinum/v1 /dev/gvinum/v2 /dev/gvinum/v3&lt;br /&gt;
bsdlabel -r -w /dev/concat/v1-v3&lt;br /&gt;
newfs /dev/concat/v1-v3a&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7.x+:&amp;lt;br&amp;gt;&lt;br /&gt;
Run jailmakeempty, then skip steps 4 &amp;amp; 7 below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. make and mount the dir on new system. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;pre&amp;gt;mkdir /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
chmod 755 /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
mount [device] /mnt/data2/69.55.230.3-col00123-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. [[#stopjail|stopjail]] &amp;lt;hostname&amp;gt; 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. dump fs to new system. ex: &amp;lt;tt&amp;gt;[[#dumpremoterestore|dumpremoterestore]] /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&amp;lt;/tt&amp;gt; (make sure you can ssh as root on remote machine) &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. OPTIONAL BUILDWORLD:&amp;lt;br&amp;gt;&lt;br /&gt;
 cd /usr/src&lt;br /&gt;
 make world DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
 cd etc&lt;br /&gt;
 make distribution DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
(you may have to rm an openssh file and re-make dist)&lt;br /&gt;
&lt;br /&gt;
 rm -rf /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/periodic/daily/400.status-disks&lt;br /&gt;
 vi /etc/periodic/security/100.chksetuid&lt;br /&gt;
replace: &amp;lt;tt&amp;gt;MP=`mount -t ufs | grep -v &amp;quot; nosuid&amp;quot; | awk &#039;{ print $3 }&#039; | sort`&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
with: &amp;lt;tt&amp;gt;MP=&#039;/&#039; (use single quotes)&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ask user if they want ports overwritten with current, if yes:&lt;br /&gt;
 cp -r /usr/ports /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/usr&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d .&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d . &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cat /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target:&lt;br /&gt;
 cat &amp;gt; /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. edit quad on the source system and copy the entries for the jail over to the target system &lt;br /&gt;
take care that the vn/gvinum/gconcat/md devices on the target system aren’t in use and that the /mnt/dataN path doesn’t need changing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. run [[#buildsafe|buildsafe]] (if this is &amp;lt;=6.x) on target system (copies changes made to quad into safe file)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove ip from source system. Ex: &amp;lt;tt&amp;gt;ipdel 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. add ip to target system. Ex: &amp;lt;tt&amp;gt;ipadd 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start new system, with [[#startjail|startjail]] or manually, pasting entries found from running &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; on the new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. run &amp;lt;tt&amp;gt;[[#canceljail|canceljail]] col0xxxx&amp;lt;/tt&amp;gt; on source system. This should prompt you to remove backups if any existed. Do not let it do this; rather:&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
13. if backups existed move them to the new host via the &amp;lt;tt&amp;gt;[[#mvbackups_.28freebsd.29|mvbackups]] script&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
14. edit quad on source system, edit comment to reflect a move rather than a cancel, ex: &amp;lt;tt&amp;gt;# moved to jail2 col00241&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
15. edit mgmt to reflect new host and dir for new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
16. optionally add ipfw rules on the new system. Ex:&lt;br /&gt;
&amp;lt;tt&amp;gt;ipfw add 01231 count ip from 69.55.230.2 to any&lt;br /&gt;
ipfw add 01232 count ip from any to 69.55.230.2&amp;lt;/tt&amp;gt;&lt;br /&gt;
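The byte/packet counters these count rules accumulate can be read back (and reset) with ipfw itself. A sketch, using the rule numbers from the example above; the uname guard just keeps it inert anywhere but a FreeBSD base machine:&lt;br /&gt;

```shell
# Read back and reset the per-customer traffic counters added above.
# Guarded so the sketch does nothing off a FreeBSD base machine.
if [ "$(uname -s)" = "FreeBSD" ]; then
    ipfw show 01231 01232   # rule listing includes packet/byte counts
    ipfw zero 01231 01232   # reset the counters after recording them
fi
```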
&lt;br /&gt;
== Increasing inodes for a VPS ==&lt;br /&gt;
&lt;br /&gt;
when doing the newfs: &lt;br /&gt;
 newfs -i  4096 ...&lt;br /&gt;
&lt;br /&gt;
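newfs’s -i value is bytes of data space per inode, so halving it roughly doubles the inode count. A quick sanity check for a 4G filesystem (a sketch; 8192 bytes-per-inode is assumed as the default density on these systems):&lt;br /&gt;

```shell
# Estimated inode counts for a 4G filesystem at two -i densities.
fs_bytes=$((4 * 1024 * 1024 * 1024))
echo "default (-i 8192): $((fs_bytes / 8192)) inodes"   # 524288
echo "newfs -i 4096:     $((fs_bytes / 4096)) inodes"   # 1048576
```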
&lt;br /&gt;
== Jail services unable to contact services within same jail ==&lt;br /&gt;
&lt;br /&gt;
This is due to a messed up routing table (only seen it on jail2 before). &lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.228.65/32&lt;br /&gt;
 route add -net 69.55.228.65/32 -iface fxp0 -nostatic -cloning&lt;br /&gt;
 route add 69.55.228.65 -iface lo0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== /dev/null permission resets == &lt;br /&gt;
&lt;br /&gt;
Applies to 4.x jails only&lt;br /&gt;
&lt;br /&gt;
For some reason that I do not understand at all, the /dev/null node in customer jails, after the system is restarted, often reverts permissions to 0600 ... which is bad, because non-root processes cannot redirect to /dev/null anymore.  In fact, a fair number of server daemons will complain and/or fail if this is the case.&lt;br /&gt;
&lt;br /&gt;
So, after a restart, once all systems have come back up, run postboot, which does something like:&lt;br /&gt;
 &lt;br /&gt;
 for f in `df -k | grep /dev/vn | awk &#039;{print $6}&#039;` ; do chmod 0666 $f/dev/null ; done&lt;br /&gt;
&lt;br /&gt;
Later, after the system is up, if anyone ever complains about their /dev/null permissions for any reason ... really if anyone mentions&lt;br /&gt;
/dev/null in any way, tell them to run:&lt;br /&gt;
&lt;br /&gt;
 chmod 0666 /dev/null&lt;br /&gt;
&lt;br /&gt;
We have ultimately fixed this by adding chmods to the quad/safe&#039;s&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== mknod in a jail == &lt;br /&gt;
&lt;br /&gt;
For a customer who wants to run a chroot name server (named) in a jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /mnt/data1/&amp;lt;user directory&amp;gt;/var/named&lt;br /&gt;
mknod -c 1 1 null&lt;br /&gt;
mknod -c 2 3 random&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Postfix Problems ==&lt;br /&gt;
&lt;br /&gt;
This hasn&#039;t happened in a long time, probably no longer valid with modern OS&#039;s&lt;br /&gt;
&lt;br /&gt;
Postfix is an alternate MTA - people replace sendmail with it in much the same way that many people also replace sendmail with qmail.  The problem is that if you install postfix inside of a jail, by default it will not work properly - the user will see error messages like this:&lt;br /&gt;
&lt;br /&gt;
incoming mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 mail_queue_enter: create file incoming/298266.55902: File too large&lt;br /&gt;
&lt;br /&gt;
and further, sending mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 postdrop: warning: mail_queue_enter: create file maildrop/627930.56676: File too large&lt;br /&gt;
&lt;br /&gt;
This is very easy to solve.  In fact, the FreeBSD welcome email that is sent out by `jailmake` now contains this block of text:&lt;br /&gt;
&lt;br /&gt;
- if you plan on installing postfix, email us first and ask for a necessary patch.&lt;br /&gt;
&lt;br /&gt;
So, anytime you see a support email that says anything about postfix not working, paste that line from the welcome email into your response to them, and right away, forward the postfix patch to them in an email.&lt;br /&gt;
&lt;br /&gt;
Here are the instructions:&lt;br /&gt;
&lt;br /&gt;
Simply copy the patch into /usr/ports/mail/postfix/files, rename the file to ‘patch-file_limit.c’ and then cd to /usr/ports/mail/postfix and run `make install`.&lt;br /&gt;
&lt;br /&gt;
The postfix patch is the very first email in the support email box, and we never delete or save it because we always want it there to forward to people.  The instructions in the email are very clear, and it solves their problem 100% of the time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems with `find` ==&lt;br /&gt;
&lt;br /&gt;
There was a bug in the FreeBSD vn-filesystem code.  The bug causes the system to crash or hang when someone runs the `find` command inside their vn-backed filesystem (and even though the bug may have been fixed, due to the i/o overhead, we still don&#039;t like find).&lt;br /&gt;
&lt;br /&gt;
Now, this does not always happen - however, any jail machine with more than 20 systems on it will crash every night if every system on it runs the daily periodic script out of their crontab, and updates their locate database with the find command.&lt;br /&gt;
&lt;br /&gt;
(the locate database is a small, FreeBSD specific database that is populated nightly from a cron job, and is then used to provide fast answers to the `locate` command)&lt;br /&gt;
&lt;br /&gt;
In normal systems (32-40 jails on them), all that has to be done is make sure nobody runs the daily periodic.  This means that the file /etc/crontab, instead of looking like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# do daily/weekly/monthly maintenance&lt;br /&gt;
1      3       *       *       *       root    periodic daily&lt;br /&gt;
15     4       *       *       6       root    periodic weekly&lt;br /&gt;
30     5       1       *       *       root    periodic monthly&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Needs to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# DO NOT UNCOMMENT THESE - contact support@johncompanies.com for details&lt;br /&gt;
#1      3       *       *       *       root    periodic daily&lt;br /&gt;
#15     4       *       *       6       root    periodic weekly&lt;br /&gt;
#30     5       1       *       *       root    periodic monthly&lt;br /&gt;
## DO NOT UNCOMMENT THESE ^^^^^^^^&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the problem will generally not occur at all.  New filesystem images are always altered in this way, so all systems are set like this.&lt;br /&gt;
&lt;br /&gt;
However, if you have enough systems running, even casual uses of find, outside of the periodic scripts, can cause the system to crash.  We are seeing this on jail13, which crashes about once per week.&lt;br /&gt;
&lt;br /&gt;
There is not yet any resolution to this problem.&lt;br /&gt;
&lt;br /&gt;
So, if you have a freebsd system crash, see if anyone has uncommented their daily periodic line - you can see them for everyone by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;for f in `df -k | grep vn | awk &#039;{print $6}&#039;` ; do echo `cat $f/etc/crontab | grep &amp;quot;periodic daily&amp;quot;` $f ; done | more&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(all one line, one command)&lt;br /&gt;
&lt;br /&gt;
And then edit those /etc/crontabs and comment them out again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems un-mounting - and with mount_null’s ==&lt;br /&gt;
&lt;br /&gt;
If you cannot unmount a filesystem because it says the filesystem is busy, it is because of one of the following:&lt;br /&gt;
&lt;br /&gt;
a) the jail is still running&lt;br /&gt;
&lt;br /&gt;
b) you are actually in that directory, even though the jail is stopped&lt;br /&gt;
&lt;br /&gt;
c) there are still dev, null_mount or linprocfs mount points mounted inside that directory.&lt;br /&gt;
&lt;br /&gt;
d) when trying to umount null_mounts whose paths are really long, you may get an error like “No such file or directory”; it’s an OS bug where the dir name is truncated. No known fix&lt;br /&gt;
&lt;br /&gt;
e) there are still files open somewhere inside the dir. Use &amp;lt;tt&amp;gt;fstat | grep &amp;lt;cid&amp;gt;&amp;lt;/tt&amp;gt; to find the process that has files open&lt;br /&gt;
&lt;br /&gt;
f) Starting with 6.x, the jail mechanism does a poor job of keeping track of processes running in a jail and if it thinks there are still procs running, it will refuse to umount the disk. If this is happening you should see a low number in the #REF column when you run jls. In this case you &#039;&#039;can&#039;&#039; safely &amp;lt;tt&amp;gt;umount -f&amp;lt;/tt&amp;gt; the mount. &lt;br /&gt;
&lt;br /&gt;
Please note - if you forcibly unmount a (4.x) filesystem that has null_mounts&lt;br /&gt;
still mounted in it, the system &#039;&#039;&#039;will crash&#039;&#039;&#039; within 10-15 minutes.&lt;br /&gt;
&lt;br /&gt;
== Misc jail Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space on jail2, jail8, jail9, a couple of jails on jail17, jail4, jail12 and jail18.&lt;br /&gt;
Even though the vn file shows a 4G size, it doesn’t actually occupy that amount of space on the disk. So be careful not to fill up drives where we’re overselling – use oversellcheck to confirm you’re not oversold by more than 10G.&lt;br /&gt;
There are other truncated jails; they are generally noted in the file /root/truncated on the root system.&lt;br /&gt;
&lt;br /&gt;
The act of moving a truncated vn to another system un-does the truncation: the truncated vn is filled with 0’s and it occupies the full physical disk space for which it’s configured. So, you should use dumpremote to preserve the truncation.&lt;br /&gt;
&lt;br /&gt;
* if you are getting disk full messages for a BSD customer, it&#039;s fairly safe to clear out their /usr/ports/distfiles dir&lt;br /&gt;
* 4.x: ps and top can only be run by root in these jails.  Done on purpose:&lt;br /&gt;
As for `ps` and `top` - non-root users can run them, just not successfully, because we have locked the permissions on /dev/mem and /dev/kmem to be root-readable only.  That is why non-root users cannot successfully run `ps` and `top`.&lt;br /&gt;
* user quotas do not work on freebsd jails - you cannot set up quotas at all, and that&#039;s that.&lt;br /&gt;
* You cannot inject a process into a 4.x jail, only HUP running processes. 6.x and onward you can with jexec&lt;br /&gt;
* jails see base machine’s uptime/load when running top/w&lt;br /&gt;
* if someone is unable to get in – can’t ping, etc. – see if they were blocked by castle (DoS), and see if their IP is on the system (post-reboot it can be lost if it wasn’t in rc.conf). preboot should catch that.&lt;br /&gt;
* in FreeBSD you can’t su to root unless you belong to the wheel group – so if you remove your acct and set up a new one, we have to add it to the wheel group (add to /etc/group)&lt;br /&gt;
* Dmesg from the underlying sys is seen in the customer’s dmesg in a jail&lt;br /&gt;
* A popper process going crazy @ 40% for 10 min = someone who leaves mail on the server&lt;br /&gt;
* Don’t force umounts on 4.x jails – it crashes the machine; generally ok on newer machines (running md)&lt;br /&gt;
* Good book for admin http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?userid=t824VyRAYz&amp;amp;isbn=0596005164&amp;amp;itm=2&lt;br /&gt;
* Self-sign ssl cert http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#selfcert&lt;br /&gt;
* conversation with Glenn about semaphores and pgsql&lt;br /&gt;
&amp;lt;pre&amp;gt;SDBoody: hey, these are valid amounts/figures/increments right:&lt;br /&gt;
 kern.ipc.semmni=1280&lt;br /&gt;
 kern.ipc.semmns=1280&lt;br /&gt;
 &lt;br /&gt;
gr8feen: probably... I always forget exactly what those are, so I usually have to look them up&lt;br /&gt;
SDBoody: semaphores&lt;br /&gt;
SDBoody: i took the current 1024 and added 256 to them&lt;br /&gt;
SDBoody: need more for pgsql&lt;br /&gt;
gr8feen: I meant the mni and mns parts...&lt;br /&gt;
gr8feen: some of those are not ones you just want to add to...   hang on a sec and I&#039;ll look them up..&lt;br /&gt;
gr8feen: what&#039;s semmsl set to?&lt;br /&gt;
SDBoody: kern.ipc.semmsl: 1024&lt;br /&gt;
 &lt;br /&gt;
SDBoody: kern.ipc.msgseg: 2048 &lt;br /&gt;
 kern.ipc.msgssz: 8 &lt;br /&gt;
 kern.ipc.msgtql: 40 &lt;br /&gt;
 kern.ipc.msgmnb: 2048 &lt;br /&gt;
 kern.ipc.msgmni: 40 &lt;br /&gt;
 kern.ipc.msgmax: 16384 &lt;br /&gt;
 kern.ipc.semaem: 16384 &lt;br /&gt;
 kern.ipc.semvmx: 65534 &lt;br /&gt;
 kern.ipc.semusz: 152 &lt;br /&gt;
 kern.ipc.semume: 10 &lt;br /&gt;
 kern.ipc.semopm: 100 &lt;br /&gt;
 kern.ipc.semmsl: 1024 &lt;br /&gt;
 kern.ipc.semmnu: 512 &lt;br /&gt;
 kern.ipc.semmns: 1024 &lt;br /&gt;
 kern.ipc.semmni: 1024 &lt;br /&gt;
 kern.ipc.semmap: 768 &lt;br /&gt;
 kern.ipc.shm_allow_removed: 0 &lt;br /&gt;
 kern.ipc.shm_use_phys: 1 &lt;br /&gt;
 kern.ipc.shmall: 262144 &lt;br /&gt;
 kern.ipc.shmseg: 256 &lt;br /&gt;
 kern.ipc.shmmni: 784 &lt;br /&gt;
 kern.ipc.shmmin: 1 &lt;br /&gt;
 kern.ipc.shmmax: 536870912 &lt;br /&gt;
 kern.ipc.maxsockets: 25600 &lt;br /&gt;
 &lt;br /&gt;
gr8feen: ok...msl is max per id, mni is max ids, mns is max number of semaphores... so you probably want something like mns = mni * msl&lt;br /&gt;
gr8feen: which one did you run out of?&lt;br /&gt;
SDBoody: not sure how to tell- ipcs shows the sems in use add up to 1024&lt;br /&gt;
SDBoody: there are 59 entries&lt;br /&gt;
gr8feen: I&#039;m assuming you tried to start postgres and it failed?&lt;br /&gt;
SDBoody: yes&lt;br /&gt;
gr8feen: it should have logged why, somewhere..&lt;br /&gt;
gr8feen: if I recall, it&#039;ll tell you which one it ran out of&lt;br /&gt;
SDBoody: &amp;gt; DETAIL:  Failed system call was semget(1, 17, 03600).&lt;br /&gt;
 &lt;br /&gt;
gr8feen: so it wanted an id with 17 semaphores...I&#039;d start by making mns = 17*mni and leave mni and mnl set to what they are now and see what it does&lt;br /&gt;
SDBoody: i think mni is plenty high&lt;br /&gt;
SDBoody: ok, more reasonable 17408&lt;br /&gt;
gr8feen: yeah...just change that one and see how it goes..&lt;br /&gt;
SDBoody: and leave mni alone at 1024?&lt;br /&gt;
gr8feen: yeah...mni id the max number of ids...but if it&#039;s trying to get something like 17 per id, your going to hit mns before you hit anything else&lt;br /&gt;
SDBoody: right, but doesn&#039;t hurt to have it that high (assuming)&lt;br /&gt;
&lt;br /&gt;
gr8feen: not really...  I think those get allocated out of ram that you cant page out, but it&#039;s still such a small amount that it really doesn&#039;t matter&lt;br /&gt;
SDBoody: looks like that worked, thx!&lt;br /&gt;
gr8feen: cool&lt;br /&gt;
&lt;br /&gt;
see semaphores: ipcs -a -s&amp;lt;/pre&amp;gt;&lt;br /&gt;
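The sizing rule settled on above (semmns = semaphores-per-id × semmni, with 17 taken from the failed semget call) works out as:&lt;br /&gt;

```shell
# Size kern.ipc.semmns from observed per-id demand times max ids.
semmni=1024   # current kern.ipc.semmni
per_id=17     # from the failed semget(1, 17, 03600) above
semmns=$((per_id * semmni))
echo "kern.ipc.semmns=$semmns"   # 17408, the value that worked above
# apply on the base machine (not inside a jail): sysctl kern.ipc.semmns=17408
```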
&lt;br /&gt;
= FreeBSD VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
These files are located in /usr/local/jail/rc.d and /usr/local/jail/bin&lt;br /&gt;
&lt;br /&gt;
== jailmake ==&lt;br /&gt;
&lt;br /&gt;
Applies to 7.x+ &lt;br /&gt;
On older systems syntax differs, run jailmake once to see.&lt;br /&gt;
&lt;br /&gt;
Note: this procedure differs on mx2 which is 7.x but still uses gvinum&lt;br /&gt;
&lt;br /&gt;
#	run js to figure out which md’s are in use, which disk has enough space, and which IP to put it on&lt;br /&gt;
#	use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
#	copy over dir, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;Usage: jailmake IP[,IP] CID disk[1|2|3] md# hostname shorthost ipfw# email [size in GB]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex: &lt;br /&gt;
&lt;br /&gt;
 Jail2# jailmake 69.55.234.66 col01334 3 97 vps.bsd.it vps 1334 fb@bsd.it&lt;br /&gt;
&lt;br /&gt;
== jailps ==&lt;br /&gt;
 jailps [hostname]&lt;br /&gt;
DEPRECATED FOR jps: displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname of the jail you wish to query. If you don’t &lt;br /&gt;
supply an argument, all processes on the machine are listed and grouped by jail. &lt;br /&gt;
&lt;br /&gt;
== jps ==&lt;br /&gt;
 jps [hostname]&lt;br /&gt;
displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname or ID of the jail you wish to query. &lt;br /&gt;
&lt;br /&gt;
== jailkill ==&lt;br /&gt;
 jailkill &amp;lt;hostname&amp;gt;&lt;br /&gt;
stops all process running in a jail.&lt;br /&gt;
&lt;br /&gt;
You can also run:&lt;br /&gt;
 jailkill &amp;lt;JID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== problems ===&lt;br /&gt;
Occasionally you will hit an issue where a jail will not kill off:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailkill www.domain.com&lt;br /&gt;
www.domain.com .. killed: none&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
Because no processes are running under that hostname.  You cannot use jailps.pl either:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailps www.domain.com&lt;br /&gt;
www.domain.com doesn’t exist on this server&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reasons for this are usually:&lt;br /&gt;
* the jail is no longer running&lt;br /&gt;
&lt;br /&gt;
* the jail&#039;s hostname has changed&lt;br /&gt;
In this case, &lt;br /&gt;
&lt;br /&gt;
&amp;gt;=6.x: run a &amp;lt;tt&amp;gt;jls|grep &amp;lt;jail&#039;s IP&amp;gt;&amp;lt;/tt&amp;gt; to find the correct hostname, then update the quad file, then kill the jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;6.x: the first step is to cat their /etc/rc.conf file to see if you can tell what they set the new hostname to.  This very often works.  For example:&lt;br /&gt;
&lt;br /&gt;
 cat /mnt/data2/198.78.65.136-col00261-DIR/etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
But maybe they set the hostname with the hostname command, and the original hostname is still in /etc/rc.conf.&lt;br /&gt;
&lt;br /&gt;
The welcome email clearly states that they should tell us if they change their hostname, so there is no problem in just emailing them and asking them what they set the new hostname to.&lt;br /&gt;
&lt;br /&gt;
Once you know the new hostname OR if a customer simply emails to inform you that they have set the hostname to something different, you need to edit the quad and safe files that their system is in to input the new hostname.&lt;br /&gt;
&lt;br /&gt;
However, if push comes to shove and you cannot find out the hostname from them or from their system, then you need to start doing some detective work.&lt;br /&gt;
&lt;br /&gt;
The easiest thing to do is run jailps looking for a hostname similar to their original hostname. Or you could get into the /bin/sh shell by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then looking at every hostname of every process:&lt;br /&gt;
&lt;br /&gt;
 for f in `ls /proc` ; do cat /proc/$f/status ; done&lt;br /&gt;
&lt;br /&gt;
and scanning for a hostname that is either similar to their original hostname, or that you don&#039;t see in any of the quad safe files.&lt;br /&gt;
&lt;br /&gt;
This is very brute force though, and it is possible that catting every file in /proc is dangerous - I don&#039;t recommend it.  A better thing would be to identify any processes that you know belong to this system – perhaps the reason you are trying to find this system is because they are running something bad - and just catting the status from only that PID.&lt;br /&gt;
&lt;br /&gt;
Somewhere there’s a jail where there may be 2 systems named www.  Look at each /etc/rc.conf and make sure they’re both really www. If they are, jailkill www, then jailps www to make sure it’s not running.  Then immediately restart the other one under its fqdn (as found from a reverse nslookup)&lt;br /&gt;
&lt;br /&gt;
* on &amp;gt;=6.x the hostname may not yet be hashed:&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9 /# jls&lt;br /&gt;
 JID Hostname                    Path                                  IP Address(es)&lt;br /&gt;
   1 bitnet.dgate.org            /mnt/data1/69.55.232.50-col02094-DIR  69.55.232.50&lt;br /&gt;
   2 ns3.hctc.net                /mnt/data1/69.55.234.52-col01925-DIR  69.55.234.52&lt;br /&gt;
   3 bsd1                        /mnt/data1/69.55.232.44-col00155-DIR  69.55.232.44&lt;br /&gt;
   4 let2.bbag.org               /mnt/data1/69.55.230.92-col00202-DIR  69.55.230.92&lt;br /&gt;
   5 post.org                    /mnt/data2/69.55.232.51-col02095-DIR  69.55.232.51 ...&lt;br /&gt;
   6 ns2                         /mnt/data1/69.55.232.47-col01506-DIR  69.55.232.47 ...&lt;br /&gt;
   7 arlen.server.net            /mnt/data1/69.55.232.52-col01171-DIR  69.55.232.52&lt;br /&gt;
   8 deskfood.com                /mnt/data1/69.55.232.71-col00419-DIR  69.55.232.71&lt;br /&gt;
   9 mirage.confluentforms.com   /mnt/data1/69.55.232.54-col02105-DIR  69.55.232.54 ...&lt;br /&gt;
  10 beachmember.com             /mnt/data1/69.55.232.59-col02107-DIR  69.55.232.59&lt;br /&gt;
  11 www.agottem.com             /mnt/data1/69.55.232.60-col02109-DIR  69.55.232.60&lt;br /&gt;
  12 sdhobbit.myglance.org       /mnt/data1/69.55.236.82-col01708-DIR  69.55.236.82&lt;br /&gt;
  13 ns1.jnielsen.net            /mnt/data1/69.55.234.48-col00204-DIR  69.55.234.48 ...&lt;br /&gt;
  14 ymt.rollingegg.net          /mnt/data2/69.55.236.71-col01678-DIR  69.55.236.71&lt;br /&gt;
  15 verse.unixlore.net          /mnt/data1/69.55.232.58-col02131-DIR  69.55.232.58&lt;br /&gt;
  16 smcc-mail.org               /mnt/data2/69.55.232.68-col02144-DIR  69.55.232.68&lt;br /&gt;
  17 kasoutsuki.w4jdh.net        /mnt/data2/69.55.232.46-col02147-DIR  69.55.232.46&lt;br /&gt;
  18 dili.thium.net              /mnt/data2/69.55.232.80-col01901-DIR  69.55.232.80&lt;br /&gt;
  20 www.tekmarsis.com           /mnt/data2/69.55.232.66-col02155-DIR  69.55.232.66&lt;br /&gt;
  21 vps.yoxel.net               /mnt/data2/69.55.236.67-col01673-DIR  69.55.236.67&lt;br /&gt;
  22 smitty.twitalertz.com       /mnt/data2/69.55.232.84-col02153-DIR  69.55.232.84&lt;br /&gt;
  23 deliver4.klatha.com         /mnt/data2/69.55.232.67-col02160-DIR  69.55.232.67&lt;br /&gt;
  24 nideffer.com                /mnt/data2/69.55.232.65-col00412-DIR  69.55.232.65&lt;br /&gt;
  25 usa.hanyuan.com             /mnt/data2/69.55.232.57-col02163-DIR  69.55.232.57&lt;br /&gt;
  26 daifuku.ppbh.com            /mnt/data2/69.55.236.91-col01720-DIR  69.55.236.91&lt;br /&gt;
  27 collins.greencape.net       /mnt/data2/69.55.232.83-col01294-DIR  69.55.232.83&lt;br /&gt;
  28 ragebox.com                 /mnt/data2/69.55.230.104-col01278-DIR 69.55.230.104&lt;br /&gt;
  29 outside.mt.net              /mnt/data2/69.55.232.72-col02166-DIR  69.55.232.72&lt;br /&gt;
  30 vps.payneful.ca             /mnt/data2/69.55.234.98-col01999-DIR  69.55.234.98&lt;br /&gt;
  31 higgins                     /mnt/data2/69.55.232.87-col02165-DIR  69.55.232.87 ...&lt;br /&gt;
  32 ozymandius                  /mnt/data2/69.55.228.96-col01233-DIR  69.55.228.96&lt;br /&gt;
  33 trusted.realtors.org        /mnt/data2/69.55.238.72-col02170-DIR  69.55.238.72&lt;br /&gt;
  34 jc1.flanderous.com          /mnt/data2/69.55.239.22-col01504-DIR  69.55.239.22&lt;br /&gt;
  36 guppylog.com                /mnt/data2/69.55.238.73-col00036-DIR  69.55.238.73&lt;br /&gt;
  40 haliohost.com               /mnt/data2/69.55.234.41-col01916-DIR  69.55.234.41 ...&lt;br /&gt;
  41 satyr.jorge.cc              /mnt/data1/69.55.232.70-col01963-DIR  69.55.232.70&lt;br /&gt;
jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
ERROR: jail_: jail &amp;quot;satyr,jorge,cc&amp;quot; not found&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note how it&#039;s saying &amp;lt;tt&amp;gt;satyr,jorge,cc&amp;lt;/tt&amp;gt; is not found, and not &amp;lt;tt&amp;gt;satyr.jorge.cc&amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The jail subsystem tracks things using comma-delimited hostnames. That mapping is regenerated every few hours:&lt;br /&gt;
&lt;br /&gt;
 jail9 /# crontab -l&lt;br /&gt;
 0 0,6,12,18 * * * /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
So if we run this manually:&lt;br /&gt;
 jail9 /# /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
Then kill the jail:&lt;br /&gt;
 jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
 successfully killed: satyr,jorge,cc&lt;br /&gt;
&lt;br /&gt;
It worked.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you ever see this when trying to kill a jail:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# jailkill e-scribe.com&lt;br /&gt;
killing JID: 6 hostname: e-scribe.com&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailkill|jailkill]]&amp;lt;/tt&amp;gt; probably got lost trying to kill off the jail. Just ctrl-c the jailkill process, then run a jailps on the hostname, and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; any process which is still running. Keep running jailps and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; till all processes are gone.&lt;br /&gt;
&lt;br /&gt;
== jailpsall ==&lt;br /&gt;
 jailpsall&lt;br /&gt;
will run a jailps on all jails configured in the quad files (this is different from&lt;br /&gt;
jailps with no arguments as it won’t help you find a “hidden” system)&lt;br /&gt;
&lt;br /&gt;
== jailpsw ==&lt;br /&gt;
 jailpsw&lt;br /&gt;
will run a jailps with an extra -w to provide wider output&lt;br /&gt;
&lt;br /&gt;
== jt (&amp;gt;=7.x) ==&lt;br /&gt;
 jt&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;gt;=7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
a wrapper for top displaying processes on the server and which jail owns them. Constantly updates, like top. &lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;lt;7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== stopjail ==&lt;br /&gt;
 stopjail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will jailkill, umount and vnconfig -u a jail. If passed an optional 2nd&lt;br /&gt;
argument, it will not exit before umounting and un-vnconfig’ing in the event&lt;br /&gt;
jailkill returns no processes killed. This is useful if you just want to umount&lt;br /&gt;
and vnconfig -u a jail you’ve already killed. It is intelligent in that it won’t &lt;br /&gt;
try to umount or vnconfig -u if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== startjail ==&lt;br /&gt;
 startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
this will vnconfig, mount (including linprocfs and null-mounts), and start a jail.&lt;br /&gt;
Essentially, it reads the jail’s relevant block from the right quad file and executes it.&lt;br /&gt;
It is intelligent in that it won’t try to mount or vnconfig if it’s not necessary.&lt;br /&gt;
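For reference, the manual 4.x equivalent of what startjail reads out of the quad file looks roughly like this sketch. The device number, vnfile path, hostname and IP are placeholders in the document’s own style, and the guard keeps the sketch inert off a FreeBSD box:&lt;br /&gt;

```shell
# Manual startjail equivalent on a 4.x host (all values are placeholders).
vnfile=/mnt/data1/69.55.23x.x-col0xxxx
if [ "$(uname -s)" = "FreeBSD" ] && [ -e "$vnfile" ]; then
    vnconfig -c /dev/vn1 "$vnfile"          # attach the vn-backed disk
    mount /dev/vn1c "${vnfile}-DIR"         # mount its filesystem
    jail "${vnfile}-DIR" vps.example.com \
        69.55.23x.x /bin/sh /etc/rc         # start the jail's rc boot
fi
```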
&lt;br /&gt;
== jpid ==&lt;br /&gt;
 jpid &amp;lt;pid&amp;gt;&lt;br /&gt;
displays information about a process – including which jail owns it.&lt;br /&gt;
It’s the equivalent of running cat /proc/&amp;lt;pid&amp;gt;/status&lt;br /&gt;
&lt;br /&gt;
== canceljail ==&lt;br /&gt;
 canceljail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will stop a jail (the equivalent of stopjail), check for backups (offer to remove them &lt;br /&gt;
from the backup server and the backup.config), rename the vnfile, remove the dir, and &lt;br /&gt;
edit quad/safe. If passed an optional 2nd argument, it will not exit upon failing to kill&lt;br /&gt;
any processes owned by the jail. This is useful if you just want to cancel a jail which &lt;br /&gt;
is already stopped.&lt;br /&gt;
&lt;br /&gt;
== jls ==&lt;br /&gt;
 jls [-v]&lt;br /&gt;
Lists all jails running:&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF IP Address      Hostname                     Path&lt;br /&gt;
 101  135 69.55.224.148   mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#REF is the number of references or procs(?) running&lt;br /&gt;
&lt;br /&gt;
Running with -v will give you all IPs assigned to each jail (7.2 and up)&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF Hostname                     Path                                  IP Address(es)&lt;br /&gt;
 101  139 mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR 69.55.224.148 69.55.234.85&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== startalljails ==&lt;br /&gt;
 startalljails&lt;br /&gt;
7.2+ only. This will parse through quad1 and start all jails. It uses lockfiles so it won’t try to start a jail more than once, which means multiple instances can run in parallel without fear of starting a jail twice. If a jail startup gets stuck, you can ^C without fear of killing the script. IMPORTANT: before running startalljails you should make sure you ran preboot once, since preboot clears out all the lockfiles and enables startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== aaccheck.sh ==&lt;br /&gt;
 aaccheck.sh&lt;br /&gt;
displays the output of container list and task list from aaccli&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update jail scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== buildsafe ==&lt;br /&gt;
 buildsafe&lt;br /&gt;
creates safe files based on quads (automatically removing the fsck’s). This will destructively overwrite safe files&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be setup as a cronjob to watch processes on a jail when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== checkprio.pl ==&lt;br /&gt;
 checkprio.pl&lt;br /&gt;
will look for any process (other than the current shell’s csh, sh, sshd procs) with a non-normal priority and normalize it&lt;br /&gt;
&lt;br /&gt;
== diskusagemon == &lt;br /&gt;
 diskusagemon &amp;lt;mount point&amp;gt; &amp;lt;1k blocks&amp;gt;&lt;br /&gt;
watches a mount point’s disk use, when it reaches the level specified in the 2nd argument,&lt;br /&gt;
it exits. This is useful when doing a restore and you want to be paged as it’s nearing completion.&lt;br /&gt;
Best used as: &amp;lt;tt&amp;gt;diskusagemon /asd/asd 1234; pagexxx&amp;lt;/tt&amp;gt;&lt;br /&gt;
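The core polling loop is simple; a rough sketch in shell (not the actual script – the mount point and threshold here are placeholders):&lt;br /&gt;

```shell
#!/bin/sh
# Rough sketch of the diskusagemon idea: poll df every minute until the
# mount point's used 1k-blocks reach the target, then exit (so a chained
# pager command fires). Mount point and target are placeholder values.
mnt=/
target=1
while :; do
    used=$(df -kP "$mnt" | awk 'NR==2 {print $3}')
    [ "$used" -ge "$target" ] && break
    sleep 60
done
echo "reached ${target} 1k-blocks used on ${mnt}"
```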
&lt;br /&gt;
== dumprestore ==&lt;br /&gt;
 dumprestore &amp;lt;dumpfile&amp;gt;&lt;br /&gt;
this is a perl expect script which automatically enters ‘1’ and ‘y’. It seems to cause restore to fail&lt;br /&gt;
to set owner permissions on large restores.&lt;br /&gt;
&lt;br /&gt;
== g ==&lt;br /&gt;
 g &amp;lt;search&amp;gt;&lt;br /&gt;
greps the quad/safe files for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gather.pl ==&lt;br /&gt;
 gather.pl&lt;br /&gt;
gathers up data about jails configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== ipfwbackup ==&lt;br /&gt;
 ipfwbackup&lt;br /&gt;
writes ipfw traffic count data to a logfile&lt;br /&gt;
&lt;br /&gt;
== ipfwreset ==&lt;br /&gt;
 ipfwreset&lt;br /&gt;
writes ipfw traffic count data to a logfile and resets counters to 0&lt;br /&gt;
&lt;br /&gt;
== js ==&lt;br /&gt;
 js&lt;br /&gt;
output varies by OS version, but generally provides information about the base jail:&lt;br /&gt;
* which vn’s are in use&lt;br /&gt;
* disk usage&lt;br /&gt;
* info about the contents of quads&lt;br /&gt;
* the # of inodes represented by the jails contained in the group (133.2 in the example below), and how many jails per data mount, as well as subtotals&lt;br /&gt;
* ips bound to the base machine but not in use by a jail&lt;br /&gt;
* free gvinum volumes, or unused vn’s or used md’s&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/local/jail/rc.d/quad1:&lt;br /&gt;
        /mnt/data1 133.2 (1)&lt;br /&gt;
        /mnt/data2 1040.5 (7)&lt;br /&gt;
        total 1173.7 (8)&lt;br /&gt;
/usr/local/jail/rc.d/quad2:&lt;br /&gt;
        /mnt/data1 983.4 (6)&lt;br /&gt;
        total 983.4 (6)&lt;br /&gt;
/usr/local/jail/rc.d/quad3:&lt;br /&gt;
        /mnt/data1 693.4 (4)&lt;br /&gt;
        /mnt/data2 371.6 (3)&lt;br /&gt;
        total 1065 (7)&lt;br /&gt;
/usr/local/jail/rc.d/quad4:&lt;br /&gt;
        /mnt/data1 466.6 (3)&lt;br /&gt;
        /mnt/data2 882.2 (5)&lt;br /&gt;
        total 1348.8 (8)&lt;br /&gt;
/mnt/data1: 2276.6 (14)&lt;br /&gt;
/mnt/data2: 2294.3 (15)&lt;br /&gt;
&lt;br /&gt;
Available IPs:&lt;br /&gt;
69.55.230.11 69.55.230.13 69.55.228.200&lt;br /&gt;
&lt;br /&gt;
Available volumes:&lt;br /&gt;
v78 /mnt/data2 2G&lt;br /&gt;
v79 /mnt/data2 2G&lt;br /&gt;
v80 /mnt/data2 2G&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to the load mrtg graph; executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== makevirginjail ==&lt;br /&gt;
 makevirginjail&lt;br /&gt;
Only on some systems, makes an empty jail (doesn&#039;t do restore step)&lt;br /&gt;
&lt;br /&gt;
== mb == &lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== notify.sh ==&lt;br /&gt;
 notify.sh&lt;br /&gt;
emails reboot@johncompanies.com – intended to be called at boot time to alert us to a machine which panics and reboots and isn’t caught by bb or castle.&lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatch ==&lt;br /&gt;
 orphanedbackupwatch&lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to delete them&lt;br /&gt;
&lt;br /&gt;
== postboot ==&lt;br /&gt;
 postboot&lt;br /&gt;
to be run after a machine reboot and quad/safe’s are done executing. It will:&lt;br /&gt;
* do chmod 666 on each jail’s /dev/null&lt;br /&gt;
* add ipfw counts&lt;br /&gt;
* run jailpsall (so you can see if a configured jail isn’t running)&lt;br /&gt;
&lt;br /&gt;
== preboot ==&lt;br /&gt;
 preboot&lt;br /&gt;
to be run before running quad/safe – checks for misconfigurations: &lt;br /&gt;
* a jail configured in a quad but not a safe&lt;br /&gt;
* a jail listed more than once in a quad&lt;br /&gt;
* the ip assigned to a jail isn’t configured on the machine&lt;br /&gt;
* alias numbering skips in the rc.conf (resulting in the above)&lt;br /&gt;
* orphaned vnfiles that aren&#039;t mentioned in a quad/safe&lt;br /&gt;
* ip mismatches between dir/vnfile name and the jail’s ip&lt;br /&gt;
* dir/vnfiles in quad/safe that don’t exist &lt;br /&gt;
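Some of these checks are easy to picture. For example, catching a jail listed more than once in a quad boils down to sort/uniq on the vnfile names – the quad format below is a made-up stand-in, not the real file:&lt;br /&gt;

```shell
#!/bin/sh
# Hypothetical quad excerpt: assume one vnconfig line per jail, with the
# vnfile path in the third field. A jail listed twice then shows up as a
# duplicated vnfile basename.
quad=$(mktemp)
cat > "$quad" <<'EOF'
vnconfig vn1 /mnt/data1/69.55.224.10-col00001
vnconfig vn2 /mnt/data1/69.55.224.11-col00002
vnconfig vn3 /mnt/data2/69.55.224.10-col00001
EOF
# strip directories, then report any basename seen more than once
dupes=$(awk '{print $3}' "$quad" | sed 's|.*/||' | sort | uniq -d)
echo "duplicated entries: $dupes"
rm -f "$quad"
```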
&lt;br /&gt;
== quadanalyze.pl ==&lt;br /&gt;
 quadanalyze.pl&lt;br /&gt;
called by js, produces the info (seen above with js explanation) about the contents of quad (inode count, # of jails, etc.)&lt;br /&gt;
&lt;br /&gt;
== rsync.backup ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== taskdone ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was executed as the subject&lt;br /&gt;
&lt;br /&gt;
== topten ==&lt;br /&gt;
 topten&lt;br /&gt;
summarizes the top 10 traffic users (called by ipfwreset)&lt;br /&gt;
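The summarizing step amounts to a numeric sort on byte counts; a minimal sketch, assuming a two-column ip/bytes log format (the real log layout may differ):&lt;br /&gt;

```shell
#!/bin/sh
# Assumed log format: "<ip> <bytes>" per line. The top-10 summary is then
# just a numeric reverse sort on the second field.
log=$(mktemp)
cat > "$log" <<'EOF'
69.55.224.10 123456
69.55.224.11 999999
69.55.224.12 5
EOF
top=$(sort -k2,2 -rn "$log" | head -10)
echo "$top"
rm -f "$log"
```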
&lt;br /&gt;
== trafficgather.pl ==&lt;br /&gt;
 trafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by jail to support@johncompanies.com and payments@johncompanies.com. Optional arguments are year and month (must be in the past). If not passed, assumes last month. Relies on traffic logs created by ipfwreset and ipfwbackup&lt;br /&gt;
&lt;br /&gt;
== trafficwatch.pl ==&lt;br /&gt;
 trafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a jail reaches the warning level (35G) and the limit (40G). We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== trafstats ==&lt;br /&gt;
 trafstats&lt;br /&gt;
writes ipfw traffic usage info by jail to a file called jc_traffic_dump in each jail’s / dir&lt;br /&gt;
&lt;br /&gt;
== truncate_jailmake ==&lt;br /&gt;
 truncate_jailmake&lt;br /&gt;
a version of jailmake which creates truncated vnfiles.&lt;br /&gt;
&lt;br /&gt;
== vb ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/bin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs (freebsd) ==&lt;br /&gt;
 vs&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/safe&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vq (freebsd) ==&lt;br /&gt;
 vq&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/quad&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dumpremote ==&lt;br /&gt;
 dumpremote &amp;lt;user@machine&amp;gt; &amp;lt;/remote/location/file-dump&amp;gt; &amp;lt;vnX&amp;gt;&lt;br /&gt;
ex: dumpremote user@10.1.4.117 /mnt/data3/remote.echoditto.com-dump 7&lt;br /&gt;
this will dump a vn filesystem to a remote machine and location&lt;br /&gt;
&lt;br /&gt;
== oversellcheck ==&lt;br /&gt;
 oversellcheck&lt;br /&gt;
displays how much a disk is oversold or undersold taking into account truncated vn files. Only for use on 4.x systems&lt;br /&gt;
&lt;br /&gt;
== mvbackups (freebsd) ==&lt;br /&gt;
 mvbackups &amp;lt;dir&amp;gt; (1.1.1.1-col00001-DIR) &amp;lt;target_machine&amp;gt; (jail1) &amp;lt;target_dir&amp;gt; (data1)&lt;br /&gt;
moves backups from one location to another on the backup server, provides the option to remove entries from the current backup.config, and prints a simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== jailnice ==&lt;br /&gt;
 jailnice &amp;lt;hostname&amp;gt;&lt;br /&gt;
applies &amp;lt;tt&amp;gt;renice 19 [PID]&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;rtprio 31 -[PID]&amp;lt;/tt&amp;gt; to each process in the given jail&lt;br /&gt;
&lt;br /&gt;
== dumpremoterestore ==&lt;br /&gt;
 dumpremoterestore &amp;lt;device&amp;gt; &amp;lt;ip of target machine&amp;gt; &amp;lt;dir on target machine&amp;gt;&lt;br /&gt;
ex: dumpremoterestore /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&lt;br /&gt;
dumps a device and restores it to a directory on a remote machine. Requires that you enable root ssh on the &lt;br /&gt;
remote machine.&lt;br /&gt;
&lt;br /&gt;
== psj ==&lt;br /&gt;
 psj&lt;br /&gt;
shows just the procs running on the base system – a ps auxw but without jail’d procs present&lt;br /&gt;
&lt;br /&gt;
== perc5iraidchk ==&lt;br /&gt;
 perc5iraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2950 systems with Perc5/6 controllers&lt;br /&gt;
&lt;br /&gt;
== perc4eraidchk ==&lt;br /&gt;
 perc4eraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2850 systems with Perc4e/Di controllers&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS =&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vm) ==&lt;br /&gt;
&lt;br /&gt;
This applies only to new virts &amp;gt;= 4.x&lt;br /&gt;
&lt;br /&gt;
grab ip from ipmap (if opened from the pending cust screen it should take you to the right block). You can also run vzlist -a to see what block is in use, generally. Try to find an IP that&#039;s in the same block of class C IP&#039;s already on the box.&lt;br /&gt;
&lt;br /&gt;
1. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
  also, ping the IP address.&lt;br /&gt;
&lt;br /&gt;
2. put CT on whichever partition has more space&lt;br /&gt;
  df -h&lt;br /&gt;
&lt;br /&gt;
3.  vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; ) &lt;br /&gt;
 vm col00009 69.55.230.238 centos.testdave.com /vz1 dsmith@johncompanies.com centos-6-x86_64 LM&lt;br /&gt;
&lt;br /&gt;
4. copy veid, dir, ip and password to pending customer screen. activate customer&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vemakexxx) ==&lt;br /&gt;
&lt;br /&gt;
This applies to older virts with old templates. This should probably not be used at all anymore.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. look thru hist for ip&lt;br /&gt;
&lt;br /&gt;
2. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
&lt;br /&gt;
3. put ve on whichever partition has more space&lt;br /&gt;
 vemakerh9 &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; &amp;lt;hostname&amp;gt; &amp;lt;mount&amp;gt; &amp;lt;email&amp;gt; [gb disk]; &amp;lt;256|384|512&amp;gt; &amp;lt;veid&amp;gt;&lt;br /&gt;
 vemakerh9 866 69.55.226.109 ngentu.com /vz1 ayo@ngantu.com,asd@asd.com 5; 256 866&lt;br /&gt;
&lt;br /&gt;
4. copy (veid), dir, and ip to pending customer screen (pass set to p455agfa)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: We use VEID (Virtual Environment ID) and CTID (Container ID) interchangeably. Similarly, VE and CT. They mean the same thing.&lt;br /&gt;
VZPP = VirtuoZzo Power Panel (the control panel for each CT)&lt;br /&gt;
&lt;br /&gt;
All linux systems exist in /vz, /vz1 or /vz2 - since each linux machine holds roughly 60-90 customers, there will be roughly 30-45 in each partition.&lt;br /&gt;
&lt;br /&gt;
The actual filesystem of the system in question is in:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/private/(VEID)&lt;br /&gt;
&lt;br /&gt;
Where VEID is the identifier for that system - an all-numeric string larger than 100.&lt;br /&gt;
&lt;br /&gt;
The actual mounted and running systems are in the corresponding:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/root/(VEID)&lt;br /&gt;
&lt;br /&gt;
But we rarely interact with any system from this mount point.&lt;br /&gt;
&lt;br /&gt;
You should never need to touch the root portion of their system – however you can traverse their filesystem by going to &amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/root&amp;lt;/tt&amp;gt; (&amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/fs/root&amp;lt;/tt&amp;gt; on 4.x systems); the root of their filesystem is in that directory, and their entire system is underneath that.&lt;br /&gt;
&lt;br /&gt;
Every VE has a startup script in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts&amp;lt;/tt&amp;gt;  (which is symlinked as &amp;lt;tt&amp;gt;/vzconf&amp;lt;/tt&amp;gt; on all systems) - the VE startup script is simply named &amp;lt;tt&amp;gt;(VEID).conf&amp;lt;/tt&amp;gt; - it contains all the system parameters for that VE:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Configuration file generated by vzsplit for 60 VE&lt;br /&gt;
# on HN with total amount of physical mem 2011 Mb&lt;br /&gt;
&lt;br /&gt;
VERSION=&amp;quot;2&amp;quot;&lt;br /&gt;
CLASSID=&amp;quot;2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ONBOOT=&amp;quot;yes&amp;quot;&lt;br /&gt;
&lt;br /&gt;
KMEMSIZE=&amp;quot;8100000:8200000&amp;quot;&lt;br /&gt;
LOCKEDPAGES=&amp;quot;322:322&amp;quot;&lt;br /&gt;
PRIVVMPAGES=&amp;quot;610000:615000&amp;quot;&lt;br /&gt;
SHMPAGES=&amp;quot;33000:34500&amp;quot;&lt;br /&gt;
NUMPROC=&amp;quot;410:415&amp;quot;&lt;br /&gt;
PHYSPAGES=&amp;quot;0:2147483647&amp;quot;&lt;br /&gt;
VMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
OOMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
NUMTCPSOCK=&amp;quot;1210:1215&amp;quot;&lt;br /&gt;
NUMFLOCK=&amp;quot;107:117&amp;quot;&lt;br /&gt;
NUMPTY=&amp;quot;19:19&amp;quot;&lt;br /&gt;
NUMSIGINFO=&amp;quot;274:274&amp;quot;&lt;br /&gt;
TCPSNDBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
TCPRCVBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
OTHERSOCKBUF=&amp;quot;900000:950000&amp;quot;&lt;br /&gt;
DGRAMRCVBUF=&amp;quot;200000:200000&amp;quot;&lt;br /&gt;
NUMOTHERSOCK=&amp;quot;650:660&amp;quot;&lt;br /&gt;
DCACHE=&amp;quot;786432:818029&amp;quot;&lt;br /&gt;
NUMFILE=&amp;quot;7500:7600&amp;quot;&lt;br /&gt;
AVNUMPROC=&amp;quot;51:51&amp;quot;&lt;br /&gt;
IPTENTRIES=&amp;quot;155:155&amp;quot;&lt;br /&gt;
DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
DISKINODES=&amp;quot;400000:420000&amp;quot;&lt;br /&gt;
CPUUNITS=&amp;quot;1412&amp;quot;&lt;br /&gt;
QUOTAUGIDLIMIT=&amp;quot;2000&amp;quot;&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/636&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/636&amp;quot;&lt;br /&gt;
NAMESERVER=&amp;quot;69.55.225.225 69.55.230.3&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;vzredhat-7.3/20030305&amp;quot;&lt;br /&gt;
VE_TYPE=&amp;quot;regular&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.225.229&amp;quot;&lt;br /&gt;
HOSTNAME=&amp;quot;textengine.net&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the hostname is set here, the disk space is set here, the number of inodes, the number of files that can be open, the number of tcp sockets, etc. - all are set here.&lt;br /&gt;
&lt;br /&gt;
In fact, everything that can be set on this customer system is set in this conf file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All interaction with the customer system is done with the VEID.  You start the system by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl start 999&lt;br /&gt;
&lt;br /&gt;
You stop it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl stop 999&lt;br /&gt;
&lt;br /&gt;
You execute commands in it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl exec 999 df -k&lt;br /&gt;
&lt;br /&gt;
You enter into it, via a root-shell backdoor with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and you set parameters for the system, while it is still running, with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; is the most commonly used command - we have aliased &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; since we use it so often. We’ll continue to use &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; in our examples, but feel free to use just &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say the user wants more diskspace.  You can cat their conf file and see:&lt;br /&gt;
&lt;br /&gt;
 DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So right now they have 4gigs of space.  You can then change it to 6 with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
IMPORTANT:  all issuances of the vzctl set command need to end with &amp;lt;tt&amp;gt;--save&amp;lt;/tt&amp;gt; - if they don&#039;t, the setting will be set, but it will not be saved to the conf file, and they will not have those settings next time they boot.&lt;br /&gt;
&lt;br /&gt;
All of the tunables in the conf file can be set with the vzctl set command. Note that in the conf file, and on the vzctl set command line, we always issue two numbers separated by a colon - that is because we are setting the soft and hard limits (in that order). Always set the hard limit slightly above the soft limit, as you see it is in the conf file for all those settings.&lt;br /&gt;
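As a sketch of that soft:hard convention (the 2% gap below is arbitrary, just mirroring the rough spacing seen in the conf file examples):&lt;br /&gt;

```shell
#!/bin/sh
# Build a soft:hard pair for vzctl set, with the hard limit slightly
# above the soft limit (the 2% gap here is an illustrative choice).
soft=6100000
hard=$((soft + soft / 50))
# print the command rather than running it
echo "vzctl set 999 --diskspace ${soft}:${hard} --save"
```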
&lt;br /&gt;
There are also things you can set with &amp;lt;tt&amp;gt;vzctl set&amp;lt;/tt&amp;gt; that are not in the conf file as settings, per se.  For instance, you can add IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --save&lt;br /&gt;
&lt;br /&gt;
or multiple IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --ipadd 10.10.20.30 --save&lt;br /&gt;
&lt;br /&gt;
or change the hostname:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --hostname www.example.com --save&lt;br /&gt;
&lt;br /&gt;
You can even set the nameservers:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --nameserver 198.78.66.4 --nameserver 198.78.70.180 --save&lt;br /&gt;
&lt;br /&gt;
Although you probably will never do that.&lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --disabled yes --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (&amp;lt;=3.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --onboot=no --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from using his control panel:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --offline_management=no --save &lt;br /&gt;
&lt;br /&gt;
You can suspend a VPS, so it can be resumed in the same state it was in when it was stopped (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl suspend 999&lt;br /&gt;
&lt;br /&gt;
and to resume it:&lt;br /&gt;
&lt;br /&gt;
 vzctl resume 999&lt;br /&gt;
&lt;br /&gt;
to see which CT owns a process:&lt;br /&gt;
 vzpid &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to mount up an unmounted ve:&lt;br /&gt;
 vzctl mount 827&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see network stats for CT&#039;s:&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
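Those per-CT counters are easy to post-process. For instance, to rank CTs by output bytes from vznetstat-style output (sample rows copied from the table above):&lt;br /&gt;

```shell
#!/bin/sh
# Rank VEIDs by the Output(bytes) column of vznetstat-style rows,
# stripping the "M" suffix so sort can compare numerically.
out=$(cat <<'EOF'
24218     1            484M             39M
24245     1            463M            143M
771       1           4234M            437M
EOF
)
ranked=$(echo "$out" | awk '{gsub(/M/,"",$3); print $3, $1}' | sort -rn | head -3)
echo "$ranked"
```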
&lt;br /&gt;
&lt;br /&gt;
One thing that sometimes comes up on older systems that we created with smaller defaults is that the system would run out of inodes.  The user will email and say they cannot create any more files or grow any files larger, but they will also say that they are not out of diskspace ... they are running:&lt;br /&gt;
&lt;br /&gt;
 df -k&lt;br /&gt;
&lt;br /&gt;
and seeing how much space is free - and they are not out of space.  They are most likely out of inodes - which they would see by running:&lt;br /&gt;
&lt;br /&gt;
 df -i&lt;br /&gt;
&lt;br /&gt;
So, the first thing you should do is enter their system with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and run:  &amp;lt;tt&amp;gt;df -i&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to confirm your theory.  Then exit their system.  Then simply cat their conf file and see what their inodes are set to (probably 200000:200000, since that was the old default on the older systems) and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskinodes 400000:400000 --save&lt;br /&gt;
&lt;br /&gt;
If they are not out of inodes, then a good possibility is that they have maxed out their numfile configuration variable, which controls how many files they can have open in their system.  The current default is 7500 (which nobody has ever hit), but the old default was as low as 2000, so you would run something like:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --numfile 7500:7500 --save&lt;br /&gt;
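Reading the current value out of the conf file before raising it is a one-liner; a sketch against a stand-in conf file (it prints the vzctl command rather than running it):&lt;br /&gt;

```shell
#!/bin/sh
# Pull the current soft DISKINODES value from a (stand-in) conf file,
# double it, and print the resulting vzctl command.
conf=$(mktemp)
echo 'DISKINODES="200000:200000"' > "$conf"
cur=$(sed -n 's/^DISKINODES="\([0-9]*\):.*/\1/p' "$conf")
new=$((cur * 2))
echo "vzctl set 999 --diskinodes ${new}:${new} --save"
rm -f "$conf"
```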
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You cannot start or stop a VE if your pwd is its private (/vz/private/999) or root (/vz/root/999) directories, or anywhere below them.&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (linux) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all ve’s back up as soon as possible. Note the time – you’ll need it to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages output. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log – these will also likely need to be sent to virtuozzo for interpretation. If the messages are spewing too fast, hit ^O + H to start a screen log dump which you can observe after the machine is rebooted. Additionally, if the machine is responsive, you can get a trace to send to virtuozzo by hooking up a kvm and entering these 3 sequences:&lt;br /&gt;
&amp;lt;pre&amp;gt;alt+print screen+m&lt;br /&gt;
alt+print screen+p&lt;br /&gt;
alt+print screen+t&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If there are no messages, the machine may just be really busy – wait a bit (5-10min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. if you can’t ssh into the DRAC card and issue racadm serveraction hardreset), then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console (&amp;lt;tt&amp;gt;tip virtxx&amp;lt;/tt&amp;gt;) immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts- look out for the RAID card output- does everything look healthy? The output may be scrambled, look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple times during the boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly then you won’t get any console output, 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
=== Start containers/VE&#039;s/VPSs ===&lt;br /&gt;
When the machine begins to start VE’s, it’s safe to leave the console and login via ssh. All virts should be set to auto start all the VEs after a crash. Further, most (newer) virts are set to “fastboot” their VE’s (to find out, do:&lt;br /&gt;
 grep -i fast /etc/sysconfig/vz &lt;br /&gt;
and look for &amp;lt;tt&amp;gt;VZFASTBOOT=yes&amp;lt;/tt&amp;gt;). If this was set prior to the machine’s crash (setting it after the machine boots will not have any effect until the vz service is restarted) it will start each ve as fast as possible, in serial, then go thru each VE (serially), shutting it down, running a vzquota (disk usage) check, and bringing it back up. The benefit is that all VE’s are brought up quickly (within 15min or so depending on the #); the downside is a customer watching closely will notice 2 outages – 1st the machine crash, 2nd their quota check (which will be a much shorter downtime – on the order of a few minutes). &lt;br /&gt;
&lt;br /&gt;
Where “fastboot” is not set to yes (i.e. on quar1), vz will start them consecutively, checking the quotas one at a time, and the 60th VE may not start until an hour or two later - this is not acceptable.&lt;br /&gt;
&lt;br /&gt;
The good news is, if you run vzctl start for a VE that is already started, you will simply get an error: &amp;lt;tt&amp;gt;VE is already started&amp;lt;/tt&amp;gt;.  Further, if you attempt to vzctl start a VE that is in the process of being started, you will simply get an error: &amp;lt;tt&amp;gt;unable to lock VE&amp;lt;/tt&amp;gt;.  So, there is no danger in simply running scripts to start smaller sets of VEs.  If the system is not autostarting, then there is no issue, and even if it does, when it conflicts, one process (yours or the autostart) will lose, and just move on to the next one.&lt;br /&gt;
&lt;br /&gt;
A script has been written to assist with ve starts: [[#startvirt.pl|startvirt.pl]], which will start 6 ve’s at once until there are none left. If startvirt.pl is used on a system where “fastboot” was on, it will circumvent the fastboot for ve’s started by startvirt.pl – they will go through the complete quota check before starting – therefore this is not advisable when a system has crashed. When a system is booted cleanly and there&#039;s no need for vzquota checks, startvirt.pl is safe and advisable to run.&lt;br /&gt;
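The locking trick that makes parallel workers safe can be sketched with mkdir, which is atomic – the start command below is a stub, not vzctl:&lt;br /&gt;

```shell
#!/bin/sh
# Two workers race over the same VEID list; mkdir acts as an atomic
# lock, so each VE is "started" exactly once even though the workers
# overlap. The echo is a stand-in for: vzctl start $v
lockdir=$(mktemp -d)
outfile=$(mktemp)
veids="101 102 103 104 105 106"
for w in 1 2; do
    (
        for v in $veids; do
            if mkdir "$lockdir/$v" 2>/dev/null; then
                echo "started $v" >> "$outfile"
            fi
        done
    ) &
done
wait
started=$(grep -c started "$outfile")
echo "started $started VEs total"
rm -rf "$lockdir" "$outfile"
```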
&lt;br /&gt;
=== Make sure all containers are running ===&lt;br /&gt;
You can quickly get a feel for how many ve’s are started by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt4 log]# vs&lt;br /&gt;
VEID 16066 exist mounted running&lt;br /&gt;
VEID 16067 exist mounted running&lt;br /&gt;
VEID 4102 exist mounted running&lt;br /&gt;
VEID 4112 exist mounted running&lt;br /&gt;
VEID 4116 exist mounted running&lt;br /&gt;
VEID 4122 exist mounted running&lt;br /&gt;
VEID 4123 exist mounted running&lt;br /&gt;
VEID 4124 exist mounted running&lt;br /&gt;
VEID 4132 exist mounted running&lt;br /&gt;
VEID 4148 exist mounted running&lt;br /&gt;
VEID 4151 exist mounted running&lt;br /&gt;
VEID 4155 exist mounted running&lt;br /&gt;
VEID 42 exist mounted running&lt;br /&gt;
VEID 432 exist mounted running&lt;br /&gt;
VEID 434 exist mounted running&lt;br /&gt;
VEID 442 exist mounted running&lt;br /&gt;
VEID 450 exist mounted running&lt;br /&gt;
VEID 452 exist mounted running&lt;br /&gt;
VEID 453 exist mounted running&lt;br /&gt;
VEID 454 exist mounted running&lt;br /&gt;
VEID 462 exist mounted running&lt;br /&gt;
VEID 463 exist mounted running&lt;br /&gt;
VEID 464 exist mounted running&lt;br /&gt;
VEID 465 exist mounted running&lt;br /&gt;
VEID 477 exist mounted running&lt;br /&gt;
VEID 484 exist mounted running&lt;br /&gt;
VEID 486 exist mounted running&lt;br /&gt;
VEID 490 exist mounted running&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So to see how many ve’s have started:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep running | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And to see how many haven’t:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep down | wc -l&lt;br /&gt;
     0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And how many we should have running:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
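Going from “how many are down” to “which ones” is one awk step; sample vs output stands in for the real command here:&lt;br /&gt;

```shell
#!/bin/sh
# Extract the VEIDs of any down containers from vs-style output; the
# resulting list could then be fed to "vzctl start".
vs_out="VEID 42 exist mounted running
VEID 432 exist unmounted down
VEID 434 exist mounted running"
down=$(echo "$vs_out" | awk '/down/ {print $2}')
echo "down: $down"
# for v in $down; do vzctl start "$v"; done
```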
&lt;br /&gt;
Another tool you can use to see which ve’s have started, among other things, is [[#vzstat|vzstat]]. It will give you CPU, memory, and other stats on each ve and the overall system. It’s a good thing to watch as ve’s are starting (note the VENum parameter, it will tell you how many have started):&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;4:37pm, up 3 days,  5:31,  1 user, load average: 1.57, 1.68, 1.79&lt;br /&gt;
VENum 40, procs 1705: running 2, sleeping 1694, unint 0, zombie 9, stopped 0&lt;br /&gt;
CPU [ OK ]: VEs  57%, VE0   0%, user   8%, sys   7%, idle  85%, lat(ms) 412/2&lt;br /&gt;
Mem [ OK ]: total 6057MB, free 9MB/54MB (low/high), lat(ms) 0/0&lt;br /&gt;
Swap [ OK ]: tot 6142MB, free 4953MB, in 0.000MB/s, out 0.000MB/s&lt;br /&gt;
Net [ OK ]: tot: in  0.043MB/s  402pkt/s, out  0.382MB/s 4116pkt/s&lt;br /&gt;
Disks [ OK ]: in 0.002MB/s, out 0.000MB/s&lt;br /&gt;
&lt;br /&gt;
  VEID ST    %VM     %KM         PROC    CPU     SOCK FCNT MLAT IP&lt;br /&gt;
     1 OK 1.0/17  0.0/0.4    0/32/256 0.0/0.5 39/1256    0    9 69.55.227.152&lt;br /&gt;
    21 OK 1.3/39  0.1/0.2    0/46/410 0.2/2.8 23/1860    0    6 69.55.239.60&lt;br /&gt;
   133 OK 3.1/39  0.1/0.3    1/34/410 6.3/2.8 98/1860    0    0 69.55.227.147&lt;br /&gt;
   263 OK 2.3/39  0.1/0.2    0/56/410 0.3/2.8 34/1860    0    1 69.55.237.74&lt;br /&gt;
   456 OK  17/39  0.1/0.2   0/100/410 0.1/2.8 48/1860    0   11 69.55.236.65&lt;br /&gt;
   476 OK 0.6/39  0.0/0.2    0/33/410 0.1/2.8 96/1860    0   10 69.55.227.151&lt;br /&gt;
   524 OK 1.8/39  0.1/0.2    0/33/410 0.0/2.8 28/1860    0    0 69.55.227.153&lt;br /&gt;
   594 OK 3.1/39  0.1/0.2    0/45/410 0.0/2.8 87/1860    0    1 69.55.239.40&lt;br /&gt;
   670 OK 7.7/39  0.2/0.3    0/98/410 0.0/2.8 64/1860    0  216 69.55.225.136&lt;br /&gt;
   691 OK 2.0/39  0.1/0.2    0/31/410 0.0/0.7 25/1860    0    1 69.55.234.96&lt;br /&gt;
   744 OK 0.1/17  0.0/0.5    0/10/410 0.0/0.7  7/1860    0    6 69.55.224.253&lt;br /&gt;
   755 OK 1.1/39  0.0/0.2    0/27/410 0.0/2.8 33/1860    0    0 192.168.1.4&lt;br /&gt;
   835 OK 1.1/39  0.0/0.2    0/19/410 0.0/2.8  5/1860    0    0 69.55.227.134&lt;br /&gt;
   856 OK 0.3/39  0.0/0.2    0/13/410 0.0/2.8 16/1860    0    0 69.55.227.137&lt;br /&gt;
   936 OK 3.2/52  0.2/0.4    0/75/410 0.2/0.7 69/1910    0    8 69.55.224.181&lt;br /&gt;
  1020 OK 3.9/39  0.1/0.2    0/60/410 0.1/0.7 55/1860    0    8 69.55.227.52&lt;br /&gt;
  1027 OK 0.3/39  0.0/0.2    0/14/410 0.0/2.8 17/1860    0    0 69.55.227.83&lt;br /&gt;
  1029 OK 1.9/39  0.1/0.2    0/48/410 0.2/2.8 25/1860    0    5 69.55.227.85&lt;br /&gt;
  1032 OK  12/39  0.1/0.4    0/80/410 0.0/2.8 41/1860    0    8 69.55.227.90&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you are all done, run vs one more time to make sure that all the VEs really did start.&lt;br /&gt;
&lt;br /&gt;
Note the time all ve’s are back up, enter it into the crash log entry, and save it.&lt;br /&gt;
&lt;br /&gt;
Occasionally, a ve will not start automatically. The most common reason for a ve not to come up normally is that it was at its disk limit before the crash, and it will not start while over the limit. To overcome this, set the disk space to the current usage level (the system reports this when the start fails), start the ve, then set the disk space back to the prior level. Lastly, contact the customer to let them know they’re out of disk (or allocate more disk if they&#039;re entitled to more).&lt;br /&gt;
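That recovery flow can be sketched as follows. The VEID and block counts are hypothetical placeholders, and the vzctl commands are echoed rather than executed; vzctl set --diskspace takes soft:hard limits in 1K blocks:

```shell
VEID=1234
CURRENT_USAGE=5000000                 # 1K blocks, reported by the failed start
ORIG_SOFT=4500000; ORIG_HARD=4600000  # the customer's real allocation
# Temporarily raise the limit ~5% above current usage so the ve can start:
TMP_LIMIT=$((CURRENT_USAGE + CURRENT_USAGE / 20))
echo "vzctl set $VEID --diskspace ${TMP_LIMIT}:${TMP_LIMIT} --save"
echo "vzctl start $VEID"
# Once it is up, restore the prior allocation:
echo "vzctl set $VEID --diskspace ${ORIG_SOFT}:${ORIG_HARD} --save"
```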
&lt;br /&gt;
== Hitting performance barriers and fixing them ==&lt;br /&gt;
&lt;br /&gt;
Virtuozzo offers multiple modes for allocating resources to a ve. We utilize two: SLM and UBC parameters.&lt;br /&gt;
On our 4.x systems we use SLM exclusively – it’s simpler to manage and understand. A few systems on virt19/18 may also use SLM. Everything else uses UBC. &lt;br /&gt;
You can tell an SLM ve by:&lt;br /&gt;
&lt;br /&gt;
 SLMMODE=&amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in their conf file. &lt;br /&gt;
&lt;br /&gt;
TODO: detail SLM modes and parameters.&lt;br /&gt;
&lt;br /&gt;
If someone is in SLM mode and they hit memory resource limits, they simply need to upgrade to more memory.&lt;br /&gt;
&lt;br /&gt;
The following applies to everyone else (UBC).&lt;br /&gt;
&lt;br /&gt;
Customers will often email and say that they are getting out of memory errors - a common one is &amp;quot;cannot fork&amp;quot; ... basically, anytime you see something odd like this, it means they are hitting one of their limits that is in place in their conf file.&lt;br /&gt;
&lt;br /&gt;
The conf file, however, simply shows their limits - how do we know what they are currently at?&lt;br /&gt;
&lt;br /&gt;
The answer is the file /proc/user_beancounters - it contains the current status (and peaks) of their performance settings, and also counts how many times they have hit the barrier.  The output looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;764: kmemsize         384113     898185    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          0        322        322          0&lt;br /&gt;
     privvmpages        1292       7108     610000     615000          0&lt;br /&gt;
     shmpages            270        528      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc               8         23        410        415          0&lt;br /&gt;
     physpages            48       5624          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages        641       6389      13019 2147483647          0&lt;br /&gt;
     numtcpsock            3         21       1210       1215          0&lt;br /&gt;
     numflock              1          3        107        117          0&lt;br /&gt;
     numpty                0          2         19         19          0&lt;br /&gt;
     numsiginfo            0          4        274        274          0&lt;br /&gt;
     tcpsndbuf             0      80928    1800000    1900000          0 &lt;br /&gt;
     tcprcvbuf             0     108976    1800000    1900000          0&lt;br /&gt;
     othersockbuf       2224      37568     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4272     200000     200000          0&lt;br /&gt;
     numothersock          3          9        650        660          0&lt;br /&gt;
     dcachesize        53922     100320     786432     818029          0&lt;br /&gt;
     numfile             161        382       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first column is the name of the counter in question - the same names we saw in the system&#039;s conf file.  The second column is the _current_ value of that counter, the third column is the max that counter has ever risen to, the fourth column is the soft limit, and the fifth column is the hard limit (the same as the numbers in that system&#039;s conf file).&lt;br /&gt;
&lt;br /&gt;
The sixth number is the failcount - how many times the current usage has risen to hit the barrier.  It will increase as soon as the current usage hits the soft limit.&lt;br /&gt;
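Reading those columns mechanically, a small awk sketch can flag any counter with a nonzero failcount. The excerpt below is abridged sample data, not real output:

```shell
cat > /tmp/bc.txt <<'EOF'
765: kmemsize        2007936    2562780    8100000    8200000          0
     numproc              41         57        410        415          3
     numtcpsock           23         37       1210       1215         12
     numfile             904       1231       7500       7600          0
EOF
# Columns: name, held, maxheld, barrier (soft), limit (hard), failcnt.
# Strip the leading "VEID:" so every line has the same six fields.
result=$(awk '{ sub(/^[0-9]+:[ \t]*/, ""); if ($6 > 0) print $1 " failcnt=" $6 }' /tmp/bc.txt)
echo "$result"
```

Anything printed here is a limit the customer has actually hit.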
&lt;br /&gt;
The problem with /proc/user_beancounters is that it actually contains that set of data for every running VE - so you can&#039;t just cat /proc/user_beancounters - it is too long and you get info for every other running system.&lt;br /&gt;
&lt;br /&gt;
You can vzctl enter the system and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 9999&lt;br /&gt;
 cat /proc/user_beancounters&lt;br /&gt;
&lt;br /&gt;
inside their system, and you will see the stats for just their particular system, but entering their system every time you want to see it is cumbersome.&lt;br /&gt;
&lt;br /&gt;
So, I wrote a simple script called &amp;quot;vzs&amp;quot; which simply greps for the VEID, and spits out the next 20 or so lines (however many lines there are in the output, I forget) after it.  For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzs 765:&lt;br /&gt;
765: kmemsize        2007936    2562780    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          8        322        322          0&lt;br /&gt;
     privvmpages       26925      71126     610000     615000          0&lt;br /&gt;
     shmpages          16654      16750      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc              41         57        410        415          0&lt;br /&gt;
     physpages          1794      49160          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages       4780      51270      13019 2147483647          0&lt;br /&gt;
     numtcpsock           23         37       1210       1215          0&lt;br /&gt;
     numflock             17         39        107        117          0&lt;br /&gt;
     numpty                1          3         19         19          0&lt;br /&gt;
     numsiginfo            0          6        274        274          0&lt;br /&gt;
     tcpsndbuf         22240     333600    1800000    1900000          0&lt;br /&gt;
     tcprcvbuf             0     222656    1800000    1900000          0&lt;br /&gt;
     othersockbuf     104528     414944     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4448     200000     200000          0&lt;br /&gt;
     numothersock         73        105        650        660          0&lt;br /&gt;
     dcachesize       247038     309111     786432     818029          0&lt;br /&gt;
     numfile             904       1231       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That showed us just the portion of /proc/user_beancounters for system 765.&lt;br /&gt;
&lt;br /&gt;
When you run the vzs command, always add a : after the VEID.&lt;br /&gt;
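The actual vzs script isn't reproduced in this doc; a minimal stand-in (assuming it really is just a grep with after-context, as described above) might look like this, exercised against a small sample file rather than the real /proc/user_beancounters:

```shell
# Hypothetical minimal equivalent of vzs; the real script may differ.
vzs_sketch() {
    # $1: VEID with the trailing colon (e.g. "765:"); $2: beancounters file
    grep -A 23 "^[[:space:]]*$1" "$2"
}
cat > /tmp/ubc.txt <<'EOF'
764: kmemsize         384113     898185    8100000    8200000          0
     numfile             161        382       7500       7600          0
765: kmemsize        2007936    2562780    8100000    8200000          0
     numfile             904       1231       7500       7600          0
EOF
out=$(vzs_sketch "765:" /tmp/ubc.txt)
echo "$out"
```

The trailing colon matters here just as it does with the real vzs: without it, "76:" style prefixes could match more than one VEID.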
&lt;br /&gt;
So, if a customer complains about some out of memory errors, or no more files, or no more ptys, or just has a nonspecific complaint about processes dying, etc., the very first thing you need to do is check their beancounters with vzs.  Usually you will spot an item that has a high failcount and needs to be upped.&lt;br /&gt;
&lt;br /&gt;
At that point you could simply up the counter with `vzctl set`.  Generally pick a number 10-20% higher than the old one, and make the hard limit slightly larger than the soft limit. However, our systems now come in several levels, and those levels have more/different memory allocations. If someone is complaining about something other than a memory limit (pty, numiptent, numflock), it’s generally safe to increase it, at least to the same level as what’s in the /vzconf/4unlimited file on the newest virt. If someone is hitting a memory limit, first make sure they are given what they deserve:&lt;br /&gt;
&lt;br /&gt;
(refer to mgmt -&amp;gt; payments -&amp;gt; packages)&lt;br /&gt;
&lt;br /&gt;
To set those levels, you use the [[#setmem|setmem]] command. &lt;br /&gt;
&lt;br /&gt;
The alternate (DEPRECATED) method would be to use one of 4 commands:&lt;br /&gt;
256 &amp;lt;veid&amp;gt;&lt;br /&gt;
300 &amp;lt;veid&amp;gt;&lt;br /&gt;
384 &amp;lt;veid&amp;gt;&lt;br /&gt;
512 &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the levels were not right (run vzs &amp;lt;veid&amp;gt; before and after to see the effect), tell the customer they’ve been adjusted and be done with it. If the levels were right, tell the customer they must upgrade to a higher package, tell them how to see their level (control panel), and that they can reboot their system to escape this lockup condition.&lt;br /&gt;
&lt;br /&gt;
Customers can also complain that their site is totally unreachable, or complain that it is down ... if the underlying machine is up, and all seems well, you may notice in the beancounters that network-specific counters are failing - such as numtcpsock, tcpsndbuf or tcprcvbuf.  This will keep them from talking on the network and make it seem like their system is down.  Again, just up the limits and things should be fine.&lt;br /&gt;
&lt;br /&gt;
On virts 1-4, you should first look at the default settings for that item on a later virt, such as virt 8 - we have increased the defaults a lot since the early machines.  So, if you are going to up a counter on virt2, instead of upping it by 10-20%, instead up it to the new default that you see on virt8.&lt;br /&gt;
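The 10-20% bump described above can be sketched as arithmetic; the VEID and the numflock values are hypothetical, and the vzctl command is echoed for review rather than run:

```shell
VEID=765
OLD_SOFT=107; OLD_HARD=117                # e.g. current numflock barrier:limit
NEW_SOFT=$((OLD_SOFT * 115 / 100))        # ~15% above the old soft limit
NEW_HARD=$((NEW_SOFT + NEW_SOFT / 10))    # hard limit slightly larger
echo "vzctl set $VEID --numflock ${NEW_SOFT}:${NEW_HARD} --save"
```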
&lt;br /&gt;
== Moving a VE to another virt (migrate/migrateonline) ==&lt;br /&gt;
&lt;br /&gt;
This will take a while to complete - and it is best to do this at night when the load is light on both machines.&lt;br /&gt;
&lt;br /&gt;
There are different methods for this, depending on which version of virtuozzo is installed on the src. and dst. virt. &lt;br /&gt;
To check which version is running: &lt;br /&gt;
 [root@virt12 private]# cat /etc/virtuozzo-release&lt;br /&gt;
 Virtuozzo release 2.6.0&lt;br /&gt;
&lt;br /&gt;
Ok, let&#039;s say that the VE is 1212, and vital stats are:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1212&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/1212&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/1212&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;fedora-core-2/20040903&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.229.84&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;devel-fc2/20040903 php-fc2/20040813 mysql-fc2/20040812 postgresql-fc2/20040813 mod_perl-fc2/20040812 mod_ssl-fc2/20040811 jre-fc2/20040823 jdk-fc2/20040823 mailman-fc2/20040823 analog-fc2/20040824 proftpd-fc2/20040818 tomcat-fc2/20040823 usermin-fc2/20040909 webmin-fc2/20040909 uw-imap-fc2/20040830 phpBB-fc2/20040831 spamassassin-fc2/20040910 PostNuke-fc2/20040824 sl-webalizer-fc2/20040&lt;br /&gt;
818&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# vzctl exec 1212 df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  405M  3.7G  10% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From this you can see that he’s using (and will minimally need free on the dst server) ~400MB, and he’s running on a Fedora Core 2 template, version 20040903. He’s also got a bunch of other templates installed. It is &#039;&#039;&#039;vital&#039;&#039;&#039; that &#039;&#039;&#039;all&#039;&#039;&#039; these templates exist on the dst system. To confirm that, on the dst system run:&lt;br /&gt;
&lt;br /&gt;
For &amp;lt; 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt14 private]# vzpkgls | grep fc2&lt;br /&gt;
devel-fc2 20040903&lt;br /&gt;
PostNuke-fc2 20040824&lt;br /&gt;
analog-fc2 20040824&lt;br /&gt;
awstats-fc2 20040824&lt;br /&gt;
bbClone-fc2 20040824&lt;br /&gt;
jdk-fc2 20040823&lt;br /&gt;
jre-fc2 20040823&lt;br /&gt;
mailman-fc2 20040823&lt;br /&gt;
mod_frontpage-fc2 20040816&lt;br /&gt;
mod_perl-fc2 20040812&lt;br /&gt;
mod_ssl-fc2 20040811&lt;br /&gt;
mysql-fc2 20040812&lt;br /&gt;
openwebmail-fc2 20040817&lt;br /&gt;
php-fc2 20040813&lt;br /&gt;
phpBB-fc2 20040831&lt;br /&gt;
postgresql-fc2 20040813&lt;br /&gt;
proftpd-fc2 20040818&lt;br /&gt;
sl-webalizer-fc2 20040818&lt;br /&gt;
spamassassin-fc2 20040910&lt;br /&gt;
tomcat-fc2 20040823&lt;br /&gt;
usermin-fc2 20040909&lt;br /&gt;
uw-imap-fc2 20040830&lt;br /&gt;
webmin-fc2 20040909&lt;br /&gt;
[root@virt14 private]# vzpkgls | grep fedora&lt;br /&gt;
fedora-core-1 20040121 20040818&lt;br /&gt;
fedora-core-devel-1 20040121 20040818&lt;br /&gt;
fedora-core-2 20040903&lt;br /&gt;
[root@virt14 private]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For these older systems, you can simply match up the date on the template. &lt;br /&gt;
&lt;br /&gt;
For &amp;gt;= 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt19 /vz2/private]# vzpkg list&lt;br /&gt;
centos-5-x86                    2008-01-07 22:05:57&lt;br /&gt;
centos-5-x86    devel&lt;br /&gt;
centos-5-x86    jre&lt;br /&gt;
centos-5-x86    jsdk&lt;br /&gt;
centos-5-x86    mod_perl&lt;br /&gt;
centos-5-x86    mod_ssl&lt;br /&gt;
centos-5-x86    mysql&lt;br /&gt;
centos-5-x86    php&lt;br /&gt;
centos-5-x86    plesk9&lt;br /&gt;
centos-5-x86    plesk9-antivirus&lt;br /&gt;
centos-5-x86    plesk9-api&lt;br /&gt;
centos-5-x86    plesk9-atmail&lt;br /&gt;
centos-5-x86    plesk9-backup&lt;br /&gt;
centos-5-x86    plesk9-horde&lt;br /&gt;
centos-5-x86    plesk9-mailman&lt;br /&gt;
centos-5-x86    plesk9-mod-bw&lt;br /&gt;
centos-5-x86    plesk9-postfix&lt;br /&gt;
centos-5-x86    plesk9-ppwse&lt;br /&gt;
centos-5-x86    plesk9-psa-firewall&lt;br /&gt;
centos-5-x86    plesk9-psa-vpn&lt;br /&gt;
centos-5-x86    plesk9-psa-fileserver&lt;br /&gt;
centos-5-x86    plesk9-qmail&lt;br /&gt;
centos-5-x86    plesk9-sb-publish&lt;br /&gt;
centos-5-x86    plesk9-vault&lt;br /&gt;
centos-5-x86    plesk9-vault-most-popular&lt;br /&gt;
centos-5-x86    plesk9-watchdog&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On these newer systems, it&#039;s difficult to tell whether the template on the dst matches the src exactly. Just because centos-5-x86 is listed on both servers doesn&#039;t mean all the same packages are there on the dst. To truly know, you must perform a sample rsync:&lt;br /&gt;
&lt;br /&gt;
 rsync -avn /vz/template/centos/5/x86/ root@10.1.4.61:/vz/template/centos/5/x86/&lt;br /&gt;
&lt;br /&gt;
If you see a ton of output from the dry run command, then clearly there are some differences. You may opt to let the rsync complete (without running in dry run mode); the only downside is that you&#039;ve now used up more space on the dst, and the centos template will be a mess of old and new data - difficult if not impossible to undo (if someday we wanted to reclaim the space).&lt;br /&gt;
&lt;br /&gt;
If you choose to merge templates, you should closely inspect the dry run output. You should also take care to exclude anything in the /config directory. For example:&lt;br /&gt;
&lt;br /&gt;
 rsync -av -e ssh --stats --exclude=x86/config  /vz/template/ubuntu/10.04/ root@10.1.4.62:/vz/template/ubuntu/10.04/&lt;br /&gt;
&lt;br /&gt;
Which will avoid this directory and contents:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 /vz2/private]# ls /vz/template/ubuntu/10.04/x86/config*&lt;br /&gt;
app  os&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is important to avoid since the config may differ on the destination, and we are really only interested in making sure the packages are there, not in overwriting a newer config with an older one.&lt;br /&gt;
&lt;br /&gt;
If the dst system was missing a template, you have 2 choices: &lt;br /&gt;
# put the missing template on the dst system. 2 choices here: &lt;br /&gt;
## Install the template from rpm (found under backup2: /mnt/data4/vzrpms/distro/) or &lt;br /&gt;
## rsync over the template (found under /vz/template) - see above&lt;br /&gt;
# put the ve on a system which has all the proper templates&lt;br /&gt;
&lt;br /&gt;
=== pre-seeding a migration ===&lt;br /&gt;
&lt;br /&gt;
When migrating a customer (or when doing many) depending on how much data you have to transfer, it can take some time. Further, it can be difficult to gauge when a migration will complete or how long it will take. To help speed up the process and get a better idea about how long it will take you can pre-transfer a customer&#039;s data to the destination server. If done correctly, vzmigrate will see the pre-transferred data and pick up where you left off, having much less to transfer (just changed/new files). &lt;br /&gt;
&lt;br /&gt;
We believe vzmigrate uses rsync to do its transfer. Therefore not only can you use rsync to do a pre-seed, you can also run rsync to see what is causing a repeatedly-failing vzmigrate to fail. &lt;br /&gt;
&lt;br /&gt;
There&#039;s no magic to a pre-seed, you just need to make sure it&#039;s named correctly.&lt;br /&gt;
&lt;br /&gt;
Given:&lt;br /&gt;
&lt;br /&gt;
source: /vz1/private/1234&lt;br /&gt;
&lt;br /&gt;
and you want to migrate to /vz2 on the target system, your rsync would look like:&lt;br /&gt;
&lt;br /&gt;
 rsync -av /vz1/private/1234/ root@x.x.x.x:/vz2/private/1234.migrated/&lt;br /&gt;
&lt;br /&gt;
After running that successful rsync, the ensuing migrateonline (or migrate) will take much less time to complete, depending on the number of files to be analyzed and the number of changed files. In any case, it&#039;ll be much, much faster than starting the migration from scratch.&lt;br /&gt;
&lt;br /&gt;
Further, as we discuss elsewhere in this topic, a failed migration can be moved from &amp;lt;tt&amp;gt;/vz/private/1234&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;/vz/private/1234.migrated&amp;lt;/tt&amp;gt; on the destination if you want to restart a failed migration. This should &#039;&#039;&#039;only&#039;&#039;&#039; be done if the migration failed and the CT is not running on the destination HN.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline instructions: src &amp;gt;=3.x -&amp;gt; dst &amp;gt;=3.x ===&lt;br /&gt;
&lt;br /&gt;
A script called [[#migrateonline|migrateonline]] was written to handle this kind of move. It is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; – vzmigrate is a util to move a ve from one host to another seamlessly (no reboot of the ve is necessary). This wrapper was initially written because virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from arp/route tables, causing problems when the ve was started up on the dst system. [[#migrate|migrate]] mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrateonline emails ve owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as it was on the src virt. Note: &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; is equivalent to &amp;lt;tt&amp;gt;migrateonline&amp;lt;/tt&amp;gt;, but will migrate a ve AND restart it in the process.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrateonline&lt;br /&gt;
usage: /usr/local/sbin/migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrateonline 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine.&lt;br /&gt;
&lt;br /&gt;
If they had backups, use the mvbackups command to move their backups to the new server:&lt;br /&gt;
&lt;br /&gt;
 mvbackups 1212 virt14 vz&lt;br /&gt;
&lt;br /&gt;
Rename the ve&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/migrated-1212&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/old-1212-migrated-20120404-noarchive&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems in mgmt to reflect the new path and server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If migrateonline does not work, you can try again using simply migrate - this will result in a brief reboot for the ve.&lt;br /&gt;
Before you try again, make sure of a few things:&lt;br /&gt;
&lt;br /&gt;
Depending on where the migration died, there may be partial data on the dst system in 1 of 2 places:&lt;br /&gt;
(given the example above)&lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
Before you run migrate again, you&#039;ll want to rename the directory so that all data is in &lt;br /&gt;
1212.migrated:&lt;br /&gt;
&lt;br /&gt;
 mv /vz/private/1212 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
This way, it will pick up where it left off and transfer only new files.&lt;br /&gt;
&lt;br /&gt;
Likewise, if you want to speed up a migration, you can pre-seed the dst as follows:&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# rsync -avSH /vz/private/1212/ root@10.1.4.64:/vz/private/1212.migrated/&lt;br /&gt;
&lt;br /&gt;
Then when you run migrate or migrateonline, it will only need to move the changed files - the migration will complete quickly.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline/migrate failures (migrate manually) ===&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say that for whatever reason the migration fails. If it fails with [[#migrateonline|migrateonline]], you should try [[#migrate|migrate]] (which will reboot the customer&#039;s ve, so notify them ahead of time).&lt;br /&gt;
&lt;br /&gt;
You may want to run a [[#pre-seeding_a_migration|pre-seed]] rsync to see if you can find the problem. On older virts, we&#039;ve seen this problem due to a large logfile (which you can find and encourage the customer to remove/compress):&lt;br /&gt;
 find / -size +1048576k -exec ls -lh {} \;&lt;br /&gt;
&lt;br /&gt;
You may also see migration failing due to quota issues.&lt;br /&gt;
&lt;br /&gt;
You can try to resolve by copying any quota file into the file you need:&lt;br /&gt;
&lt;br /&gt;
 cp /var/vzquota/quota.1 /var/vzquota/quota.xxx&lt;br /&gt;
&lt;br /&gt;
If it complains about quota running, you should then be able to stop it:&lt;br /&gt;
&lt;br /&gt;
 vzquota off xxxx&lt;br /&gt;
&lt;br /&gt;
If all else fails, migrate to a new VEID -&lt;br /&gt;
e.g. 1234 becomes 12341&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the rsync or [[#migrate|migrate]] fails, you can always move someone manually:&lt;br /&gt;
&lt;br /&gt;
1. stop ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v stop 1234&lt;br /&gt;
&lt;br /&gt;
2. copy over data&amp;lt;br&amp;gt;&lt;br /&gt;
 rsync -avSH /vz/private/1234/ root@1.1.1.1:/vzX/private/1234/&lt;br /&gt;
&lt;br /&gt;
NOTE: if you&#039;ve previously seeded the data (ran rsync while the VE was up/running) and this is a subsequent rsync, make sure the last rsync you do (while the VE is not running) has the --delete option.&lt;br /&gt;
&lt;br /&gt;
3. copy over conf&amp;lt;br&amp;gt;&lt;br /&gt;
 scp /vzconf/1234.conf root@1.1.1.1:/vzconf&lt;br /&gt;
&lt;br /&gt;
4. on dst, edit the conf to reflect the right vzX dir&amp;lt;br&amp;gt;&lt;br /&gt;
 vi /vzconf/1234.conf&lt;br /&gt;
&lt;br /&gt;
5. on src remove the IPs&amp;lt;br&amp;gt;&lt;br /&gt;
 ipdel 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
6. on dst add IPs &amp;lt;br&amp;gt;&lt;br /&gt;
 ipadd 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
7. on dst, start ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v start 1234&lt;br /&gt;
&lt;br /&gt;
8. cancel, then archive the ve on src per the instructions above.&lt;br /&gt;
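Putting steps 1-8 together, here is a sketch that prints the full command sequence for review before running it by hand; the VEID, IPs, target host, and vzX dirs are placeholders:

```shell
VEID=1234; DST=1.1.1.1
SRC_DIR=/vz; DST_DIR=/vz1        # vzX partitions on src and dst
IPS="2.2.2.2 3.3.3.3"
plan=$(cat <<EOF
v stop $VEID
rsync -avSH --delete $SRC_DIR/private/$VEID/ root@$DST:$DST_DIR/private/$VEID/
scp /vzconf/$VEID.conf root@$DST:/vzconf
# on dst: edit /vzconf/$VEID.conf so VE_PRIVATE/VE_ROOT point at $DST_DIR
ipdel $VEID $IPS
ipadd $VEID $IPS
v start $VEID
EOF
)
echo "$plan"
```

Reviewing the printed plan before executing each line keeps the easy-to-transpose bits (VEID, IPs, vzX dir) consistent across all eight steps.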
&lt;br /&gt;
=== migrate src=2.6.0 -&amp;gt; dst&amp;gt;=2.6.0, or mass-migration with customer notify ===&lt;br /&gt;
&lt;br /&gt;
A script called &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; was written to handle this kind of move. It is basically a wrapper for vzmigrate – vzmigrate is a util to seamlessly move a ve from one host to another. This wrapper was initially written because virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from arp/route tables, causing problems when the ve was started up on the dst system. migrate mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrate emails ve owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as it was on the src virt. Note: migrateonline is equivalent to migrate, but will migrate a ve from one 2.6 &#039;&#039;&#039;kernel&#039;&#039;&#039; machine to another 2.6 kernel machine without restarting the ve.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrate&lt;br /&gt;
usage: /usr/local/sbin/migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrate 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which migrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
&lt;br /&gt;
On 2.6.1 you’ll also have to move the private area:&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, [[#cancelve|cancelve]] would offer to remove them. You want to say &#039;&#039;&#039;no&#039;&#039;&#039; to this option – removing them would mean the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across a filesystem as in the example above will create actual files and consume lots more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
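A minimal sketch of why the same-drive loop above is safe: a mv within one filesystem keeps hardlinks intact (same inode, link count unchanged). The temp-dir demo below is an illustration, not the real backup layout.&lt;br /&gt;

```shell
# Demo: mv on the same filesystem preserves hardlinks.
d=$(mktemp -d)
mkdir -p "$d/src" "$d/dst"
echo data > "$d/src/f"
ln "$d/src/f" "$d/src/f.link"          # link count on the inode is now 2
mv "$d/src" "$d/dst/moved"             # same filesystem: inodes survive
# GNU stat first, BSD stat as a fallback
links=$(stat -c %h "$d/dst/moved/f" 2>/dev/null || stat -f %l "$d/dst/moved/f")
echo "$links"
rm -rf "$d"
```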
&lt;br /&gt;
To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, then update the system, the template (should be pre-selected as the same), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
=== vzmigrate: src=2.6.1 -&amp;gt; dst&amp;gt;=2.6.0 ===&lt;br /&gt;
&lt;br /&gt;
This version of vzmigrate works properly with regard to handling ips. It will not notify ve owners of moves as in the above example. Other than that it’s essentially the same.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]#  vzmigrate 10.1.4.64 -r no 1212:1212:/vz/private/1212:/vz/root/1212&lt;br /&gt;
migrating on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which vzmigrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
[root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, &amp;lt;tt&amp;gt;cancelve&amp;lt;/tt&amp;gt; would offer to remove them. You want to say no to this option – doing so would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across a filesystem as in the example above will create actual files and consume lots more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, then update the system, the template (should be pre-selected as the same), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
=== src=2.5.x ===&lt;br /&gt;
&lt;br /&gt;
First, go to the private dir:&lt;br /&gt;
&lt;br /&gt;
 cd /vz1/private/&lt;br /&gt;
&lt;br /&gt;
Stop the VE - make sure it stops totally cleanly.&lt;br /&gt;
 &lt;br /&gt;
 vzctl stop 1212&lt;br /&gt;
&lt;br /&gt;
Then you’d use vemove - a script written to copy over the config, create tarballs of the ve’s data on the destination virt, and cancel the ve on the source system (in this example we’re going to put a ve that was in /vz1/private on the src virt, in /vz/private on the dst virt):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vemove&lt;br /&gt;
ERROR: Usage: vemove veid target_ip target_path_dir&lt;br /&gt;
[root@virt12 sbin]# vemove 1212 10.1.4.64 /vz/private/1212&lt;br /&gt;
tar cfpP - 1212 --ignore-failed-read | (ssh -2 -c arcfour 10.1.4.64 &amp;quot;split - -b 1024m /vz/private/1212.tar&amp;quot; )&lt;br /&gt;
scp /vzconf/1212.conf 10.1.4.64:/vzconf&lt;br /&gt;
cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, cancelve would offer to remove them. You want to say no to this option – doing so would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across a filesystem as in the example above will create actual files and consume lots more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
When you are done, go to /vz/private on the dst virt; you will have files like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;1212.taraa&lt;br /&gt;
1212.tarab&lt;br /&gt;
1212.tarac&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each one 1024m (or less, for the last one) in size.&lt;br /&gt;
&lt;br /&gt;
Then, on the dst server, run:&lt;br /&gt;
&lt;br /&gt;
 cat 1212.tar?? | tar xpPBf -&lt;br /&gt;
&lt;br /&gt;
and after 20 mins or so it will be totally untarred.  Now since the conf&lt;br /&gt;
file is already there, you can go ahead and start the system.&lt;br /&gt;
&lt;br /&gt;
 vzctl start 1212&lt;br /&gt;
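The split/reassemble round trip above can be sketched at small scale. This is dummy data with 1k chunks, where the real commands use `-b 1024m` and the virtuozzo tar; note split produces the same `.taraa`, `.tarab`, ... suffixes shown earlier.&lt;br /&gt;

```shell
# Scaled-down demo of the vemove split + cat reassembly.
d=$(mktemp -d)
head -c 3000 /dev/zero > "$d/payload"
( cd "$d" && split -b 1024 payload payload.tar )   # payload.taraa/ab/ac
cat "$d"/payload.tar?? > "$d/rebuilt"              # reassemble in order
cmp -s "$d/payload" "$d/rebuilt" && result=same || result=diff
echo "$result"
```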
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, then update the system, the template (should be pre-selected as the same), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
NOTE: you MUST tar the system up using the virtuozzo version of tar that&lt;br /&gt;
is on all the virt systems, and further you MUST untar the tarball with&lt;br /&gt;
the virtuozzo tar, using these options:  `&amp;lt;tt&amp;gt;tar xpPBf -&amp;lt;/tt&amp;gt;`&lt;br /&gt;
&lt;br /&gt;
If you tar up an entire VE and move it to a non-virtuozzo machine, that is&lt;br /&gt;
ok, and you can untar it there with normal tar commands, but do not untar&lt;br /&gt;
it and then repack it with a normal tar and expect it to work - you need&lt;br /&gt;
to use virtuozzo tar commands on virtuozzo tarballs to make it work.&lt;br /&gt;
&lt;br /&gt;
The backups are sort of an exception, since we are just (usually)&lt;br /&gt;
restoring user data that was created after we gave them the system, and&lt;br /&gt;
therefore has nothing to do with magic symlinks or vz-rpms, etc.&lt;br /&gt;
&lt;br /&gt;
== Moving a VE on the same virt ==&lt;br /&gt;
&lt;br /&gt;
Easy way:&amp;lt;br&amp;gt;&lt;br /&gt;
Scenario 1: ve 123 is to be renamed 1231 and moved from vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:1231:/vz/private/1231:/vz/root/1231&lt;br /&gt;
&lt;br /&gt;
Scenario 2: ve 123 is to be moved from vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:123:/vz/private/123:/vz/root/123&lt;br /&gt;
&lt;br /&gt;
vzmlocal will reboot the ve at the end of the move&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; don&#039;t forget to fix /usr/local/sbin/backup.config for new disk location&lt;br /&gt;
&lt;br /&gt;
Manual/old way:&lt;br /&gt;
&lt;br /&gt;
1) &amp;lt;tt&amp;gt;vzctl stop 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2) &amp;lt;tt&amp;gt;mv /vz1/private/123 /vz/private/.&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(or cp -a if you want to copy)&lt;br /&gt;
3) in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts/123.conf&amp;lt;/tt&amp;gt; change value&amp;lt;br&amp;gt;&lt;br /&gt;
of &#039;&amp;lt;tt&amp;gt;VE_PRIVATE&amp;lt;/tt&amp;gt;&#039; variable to point to a new private area location&lt;br /&gt;
4) &amp;lt;tt&amp;gt;vzctl start 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
5) update backups if needed: &amp;lt;tt&amp;gt;mvbackups 123 virtX virt1 vz&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
6) update management screens&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notes: a) absolute path to private area is stored in quota file &amp;lt;tt&amp;gt;/var/vzquota/quota.123&amp;lt;/tt&amp;gt; - so during first startup quota will be recalculated.&amp;lt;br&amp;gt;&lt;br /&gt;
b) if you&#039;re going to write some script to do a job, you MUST be sure that $VEID won&#039;t be expanded to &#039;&#039; in the ve config file - i.e. you need to escape &#039;$&#039;. Otherwise you might have:&lt;br /&gt;
&lt;br /&gt;
 VE_PRIVATE=&amp;quot;/vz/private/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in config, and &#039;vzctl destroy&#039; for this VE ID &#039;&#039;&#039;will remove everything under /vz/private/ directory&#039;&#039;&#039;.&lt;br /&gt;
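The escaping pitfall can be demonstrated in a few lines (hypothetical fragment; `bad`/`good` are illustration variables, not real config tooling):&lt;br /&gt;

```shell
# With VEID unset, double quotes expand $VEID to nothing, producing the
# dangerous VE_PRIVATE="/vz/private/" line described above.
unset VEID
bad="VE_PRIVATE=\"/vz/private/$VEID\""
# Escaping the dollar sign keeps a literal $VEID for vzctl to expand later.
good="VE_PRIVATE=\"/vz/private/\$VEID\""
echo "$bad"
echo "$good"
```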
&lt;br /&gt;
== Adding a veth device to a VE ==&lt;br /&gt;
&lt;br /&gt;
Not totally sure what this is, but a customer asked for it and here&#039;s what we did (as instructed by vz support):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;v set 99 --netif_add eth99  --save&lt;br /&gt;
ipdel 99 69.55.230.58&lt;br /&gt;
v set 99 --ifname eth99 --ipadd 69.55.230.58 --save&lt;br /&gt;
v set 99 --ifname eth99 --gateway 69.55.230.1 --save&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active      eth0              veth77.77,veth99.99&lt;br /&gt;
veth_net          active&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
br99             bridge     net99&lt;br /&gt;
veth99.99        veth       net99&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        net99            69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br0 eth0&lt;br /&gt;
&lt;br /&gt;
(will remove 99 from orig net and move to veth_net)&lt;br /&gt;
vznetcfg net addif veth_net veth99.99&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
(delete the old crap)&lt;br /&gt;
vznetcfg net del net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then, to add another device in&lt;br /&gt;
&lt;br /&gt;
v set 77 --netif_add eth77  --save&lt;br /&gt;
ipdel 77 69.55.230.78&lt;br /&gt;
v set 77 --ifname eth77 --ipadd 69.55.230.78 --save&lt;br /&gt;
v set 77 --ifname eth77 --gateway 69.55.230.1 --save&lt;br /&gt;
v set 77 --save --ifname eth77 --network veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif veth_net veth77.77&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth       veth_net&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
another example&lt;br /&gt;
&lt;br /&gt;
v set 1182 --netif_add eth1182  --save&lt;br /&gt;
ipdel 1182 69.55.236.217&lt;br /&gt;
v set 1182 --ifname eth1182 --ipadd 69.55.236.217 --save&lt;br /&gt;
v set 1182 --ifname eth1182 --gateway 69.55.236.1 --save&lt;br /&gt;
vznetcfg net addif veth_net veth1182.1182&lt;br /&gt;
v set 1182 --save --ifname eth1182 --network veth_net&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unused/not working commands:&lt;br /&gt;
ifconfig veth99.0 0&lt;br /&gt;
vznetcfg net list&lt;br /&gt;
vznetcfg br new br99 net99&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg br show&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
&lt;br /&gt;
vznetcfg br new br1182 net1182&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif eth0 net1182&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&lt;br /&gt;
--- 8&amp;lt; ---&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new net&lt;br /&gt;
# vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vzctl set 99 --save --netif_add eth0 (at this stage veth99.0 interface have to appear&lt;br /&gt;
on node)&lt;br /&gt;
# vzctl set 99 --save --ifname eth0 --ipadd 69.55.230.58 (and probably few more arguments&lt;br /&gt;
here - see &#039;man vzctl&#039;)&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Assigning/removing an IP from a VE ==&lt;br /&gt;
&lt;br /&gt;
1. Add or remove ips:&lt;br /&gt;
 ipdel 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
 ipadd 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
&lt;br /&gt;
2. update Mgmt screens&lt;br /&gt;
&lt;br /&gt;
3. offer to update any DNS we do for them&lt;br /&gt;
&lt;br /&gt;
4. check to see if we had rules for the old IP in the firewall&lt;br /&gt;
&lt;br /&gt;
== Enabling tun device for a ve ==&lt;br /&gt;
Note, there’s a command for this: [[#addtun|addtun]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
Make sure the tun.o module is already loaded before Virtuozzo is started: &lt;br /&gt;
 lsmod &lt;br /&gt;
Allow the VPS to use the TUN/TAP device: &lt;br /&gt;
 vzctl set 101 --devices c:10:200:rw --save &lt;br /&gt;
Create the corresponding device inside the VPS and set the proper permissions: &lt;br /&gt;
 vzctl exec 101 mkdir -p /dev/net &lt;br /&gt;
 vzctl exec 101 mknod /dev/net/tun c 10 200 &lt;br /&gt;
 vzctl exec 101 chmod 600 /dev/net/tun&lt;br /&gt;
&lt;br /&gt;
== Remaking a system (on same virt) ==&lt;br /&gt;
&lt;br /&gt;
1. [[#cancelve|cancelve]] (or v destroy x - ONLY if you&#039;re POSITIVE no data needs to be saved)&lt;br /&gt;
&lt;br /&gt;
2. [[#vemake|vemake]] using same veid&lt;br /&gt;
&lt;br /&gt;
3. [[#mvbackups|mvbackups]] or [[#vb|vb]] (if new mount point)&lt;br /&gt;
&lt;br /&gt;
4. update mgmt with new dir/ip &lt;br /&gt;
&lt;br /&gt;
5. update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Re-initialize quota for a VE ==&lt;br /&gt;
&lt;br /&gt;
There’s a command for this now: [[#clearquota|clearquota]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
&lt;br /&gt;
 vzctl stop 1&lt;br /&gt;
 vzquota drop 1&lt;br /&gt;
 vzctl start 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Traffic accounting on linux ==&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - all tracking is done via bwdb now. This is how we used to track traffic.&lt;br /&gt;
&lt;br /&gt;
TODO: update for diff versions of vz&lt;br /&gt;
&lt;br /&gt;
Unlike FreeBSD, where we have to add firewall count rules to the system to count the traffic, Virtuozzo counts the traffic for us.  You can see the current traffic stats by running `vznetstat`:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see the VEID is on a line with the in and out bytes.  So, we simply run a cron job:&lt;br /&gt;
&lt;br /&gt;
 4,9,14,19,24,29,34,39,44,49,55,59 * * * * /root/vztrafdump.sh&lt;br /&gt;
&lt;br /&gt;
Just like we do on FreeBSD - this one goes through all the VEs in /vz/private and greps the line from vznetstat that matches them and dumps it in /jc_traffic_dump on their system.  Then it does it again for all the VEs in /vz1/private.  It is important to note that vznetstat runs only once, and the grepping is done from a temporary file that contains that output - we do this because running vznetstat once for each VE that we read out of /vz/private and /vz1/private would take way too long and be too intensive.&lt;br /&gt;
&lt;br /&gt;
You do not need to do anything to facilitate this other than make sure that that cron job is running - the vznetstat counters are always running, and any new VEs that are added to the system will be accounted for automatically.&lt;br /&gt;
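The run-once-then-grep idea can be sketched as follows. This is not the real vztrafdump.sh; `line_for_ve` is a hypothetical helper and a here-doc stands in for saved `vznetstat` output.&lt;br /&gt;

```shell
# Capture the stats once to a temp file, then grep each VE's line from it
# instead of re-running vznetstat per VE.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
24218     1            484M             39M
24245     1            463M            143M
EOF
line_for_ve() {
    grep "^$1 " "$tmp"    # match the VEID at the start of the line
}
out_bytes=$(line_for_ve 24218 | awk '{print $3}')
echo "$out_bytes"
rm -f "$tmp"
```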
&lt;br /&gt;
Traffic resetting no longer works with vz 2.6, so we disable the vztrafdump.sh on those virts.&lt;br /&gt;
&lt;br /&gt;
== Watchdog script ==&lt;br /&gt;
&lt;br /&gt;
On some of the older virts, we have a watchdog running that kills procs that are deemed bad per the following:&lt;br /&gt;
&lt;br /&gt;
/root/watchdog from quar1&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [5-9]...&lt;br /&gt;
  then&lt;br /&gt;
# 50-90%&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;...:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;....m&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 1000min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
  if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [1-9]...&lt;br /&gt;
  then&lt;br /&gt;
# running for 10-90 percent&lt;br /&gt;
    if echo $line | awk &#039;{print $NF}&#039; | egrep &#039;cfusion|counter|vchkpw&#039;&lt;br /&gt;
    then&lt;br /&gt;
&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[2-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# between 2-9min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      elif echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[0-9][0-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# up to 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
    fi&lt;br /&gt;
  fi&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Misc Linux Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space ... when you configure a linux system with a certain amount of disk space (the default is 4gigs) you do not actually use up 4gigs of space on the system.  The diskspace setting for a user is simply a cap, and they only use up as much space on the actual disk drive as they are actually using.&lt;br /&gt;
&lt;br /&gt;
When you create a new linux system, even though there are some 300 RPMs or so installed, if you run `df -k` you will see that the entire 4gig partition is empty - no space is being used.  This is because the files in their system are &amp;quot;magic symlinks&amp;quot; to the template for their OS that is in /vz/template - however, any changes to any of those files will &amp;quot;disconnect&amp;quot; them and they will immediately begin using space in their system.  Further, any new files uploaded (even if those new files overwrite existing files) will take up space on the partition.&lt;br /&gt;
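A rough illustration of the accounting effect described above, using plain symlinks (not vzfs magic symlinks): a link to the shared template costs almost nothing, while a private copy consumes the file's full size.&lt;br /&gt;

```shell
# Demo only: plain symlink vs. a "disconnected" private copy.
d=$(mktemp -d)
head -c 100000 /dev/zero > "$d/template-file"
ln -s "$d/template-file" "$d/ve-view"     # shared view of the template
cp "$d/template-file" "$d/ve-copy"        # private copy takes real space
copy_bytes=$(wc -c < "$d/ve-copy" | tr -d ' ')
echo "$copy_bytes"
rm -rf "$d"
```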
&lt;br /&gt;
=== Can&#039;t change file /etc/sysconfig/network ===&lt;br /&gt;
if you see this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt8 root]# vzctl stop 160 ; vzctl start 160&lt;br /&gt;
VE is not running&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is unmounted&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
bash ERROR: Can&#039;t change file /etc/sysconfig/network&lt;br /&gt;
Deleting IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
VE is unmounted&lt;br /&gt;
[root@virt8 root]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
it probably means they no longer have /bin/bash - copy one in for them&lt;br /&gt;
 &lt;br /&gt;
ALSO: another possibility is that they have removed the `ed` RPM from their system - it needs to be reinstalled into their system.  But since their system is down, this is tricky ...&lt;br /&gt;
&lt;br /&gt;
VE startup scripts used by &#039;vzctl&#039; expect the &#039;ed&#039; package to be available inside the VE. So if the &#039;ed&#039; package is enabled in the OS template config and in the OS template that VE #827 is based on, this error should be fixed.&lt;br /&gt;
&lt;br /&gt;
Yes, it is possible to add an RPM to a VE while it is not running.&lt;br /&gt;
Try the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# cd /vz/template/&amp;lt;OS_template_with_ed_package&amp;gt;/&lt;br /&gt;
# vzctl mount 827&lt;br /&gt;
# rpm -Uvh --root /vz/root/827 --veid 827 ed-0.2-25.i386.vz.rpm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usually there&#039;s an error, but it&#039;s OK.&lt;br /&gt;
&lt;br /&gt;
Note: replace &#039;ed-0.2-25.i386.vz.rpm&#039; in last command with actual&lt;br /&gt;
version of &#039;ed&#039; package you have.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== What template is the user using ===&lt;br /&gt;
So how do I know what template the user has? cat their conf file and it is listed in there.  For example, if the conf file has:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1103&lt;br /&gt;
…snip…&lt;br /&gt;
OSTEMPLATE=&amp;quot;debian-3.0/20030822&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;mod_perl-deb30/20030707 mod_ssl-deb30/20030703 mysql-deb30/20030707 proftpd-deb30/20030703 webmin-deb30/20030823 &amp;quot;&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then they are on debian 3.0, all of their system RPMs are in /vz/template/debian-3.0, and they are using version 20030822 of that debian 3.0 template. They’ve also got additional packages installed (mod_perl, mod_ssl, etc).  Those are also found under /vz/template&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Edits needed to run java ===&lt;br /&gt;
&lt;br /&gt;
Edits needed to run java:&lt;br /&gt;
&lt;br /&gt;
When we first created the VEs, the default setting for privvmpages was 93000:94000 ... which was high enough that most people never had problems ... however, you can&#039;t run java or jdk or tomcat or anything java related with that setting.  We have found that with privvmpages set to 610000:615000, java runs just fine.  That is now the default setting. It is exceedingly rare that anyone needs it higher than that, although we have seen it once or twice.&lt;br /&gt;
&lt;br /&gt;
Any problems with java at all - the first thing you need to do is see if the failcnt has risen for privvmpages.&lt;br /&gt;
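A minimal sketch of that check. Inside the VE, /proc/user_beancounters has one row per resource with failcnt in the last column; a sample row stands in for the real file here (the real check is along the lines of `grep privvmpages /proc/user_beancounters`).&lt;br /&gt;

```shell
# Pull the failcnt (last column) off a privvmpages row.
row="privvmpages  93000  94000  610000  615000  17"
failcnt=$(echo "$row" | awk '{print $NF}')
echo "privvmpages failcnt: $failcnt"
```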
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzctl start 160&lt;br /&gt;
Starting VE ...&lt;br /&gt;
vzquota : (error) Quota on syscall for 160: Device or resource busy&lt;br /&gt;
Running vzquota on failed for VE 160 [3]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is because my pwd is _in_ their private directory - you can&#039;t start it until you move out&lt;br /&gt;
&lt;br /&gt;
People seem to have trouble with php if they are clueless newbies.  Here are two common problems/solutions:&lt;br /&gt;
&lt;br /&gt;
no... but i figured it out myself. problem was the php.ini file that came&lt;br /&gt;
vanilla with the account was not configured to work with apache (the&lt;br /&gt;
ENGINE directive was set to off).&lt;br /&gt;
&lt;br /&gt;
everything else seems fine now.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
the problem was in the php.ini file.  I noticed that it wasn&#039;t showing&lt;br /&gt;
the code when it was in an html file so I looked at the php.ini file&lt;br /&gt;
and had to change it so it recognized &amp;lt;? tags as well as &amp;lt;?php tags.&lt;br /&gt;
&lt;br /&gt;
Also, make sure this is added to httpd.conf:&lt;br /&gt;
    AddType application/x-httpd-php .php&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Set time zone ===&lt;br /&gt;
&lt;br /&gt;
You can change the time zone by doing this:&lt;br /&gt;
&lt;br /&gt;
 ln -sf /usr/share/zoneinfo/&amp;lt;zone&amp;gt; /etc/localtime&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;zone&amp;gt; is the zone you want in the /usr/share/zoneinfo/ directory.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Failing shm_open calls ===&lt;br /&gt;
&lt;br /&gt;
Failing shm_open calls:&lt;br /&gt;
&lt;br /&gt;
first, please check if /dev/shm is mounted inside VE.&lt;br /&gt;
&#039;cat /proc/mounts&#039; command should show something like this:&lt;br /&gt;
 tmpfs /dev/shm tmpfs rw 0 0&lt;br /&gt;
&lt;br /&gt;
If /dev/shm is not mounted, you have two ways to solve the issue:&lt;br /&gt;
1. execute the following command inside the VE (doesn&#039;t require a VE reboot):&lt;br /&gt;
 mount -t tmpfs none /dev/shm&lt;br /&gt;
2. add the following line to /etc/fstab inside the VE and reboot it:&lt;br /&gt;
 tmpfs         /dev/shm        tmpfs           defaults        0 0&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You can have a mounted but not running ve&lt;br /&gt;
Just:&lt;br /&gt;
 vzctl mount &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Debian can&#039;t get on the network ===&lt;br /&gt;
When a debian sys can’t get on the network, and you try:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set 1046 --ipadd 69.55.227.117&lt;br /&gt;
Adding IP address(es): 69.55.227.117&lt;br /&gt;
Failed to bring up lo.&lt;br /&gt;
Failed to bring up venet0.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They probably removed iproute package, which must be the one from swsoft. To restore:&lt;br /&gt;
&amp;lt;pre&amp;gt;# dpkg -i --veid=1046 --admindir=/vz1/private/1046/root/var/lib/dpkg --instdir=/vz1/private/1046/root/ /vz/template/debian-3.0/iproute_20010824-8_i386.vz.deb&lt;br /&gt;
(Reading database ... 16007 files and directories currently installed.)&lt;br /&gt;
Preparing to replace iproute 20010824-8 (using .../iproute_20010824-8_i386.vz.deb) ...&lt;br /&gt;
Unpacking replacement iproute ...&lt;br /&gt;
Setting up iproute (20010824-8) ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then restart their ve&lt;br /&gt;
&lt;br /&gt;
=== CentOS can&#039;t get on the network ===&lt;br /&gt;
&lt;br /&gt;
When a CentOS system was running but can&#039;t get on the network, they probably updated iproute via yum.&lt;br /&gt;
&lt;br /&gt;
Copy an older version of the iproute rpm to their VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cp -p /root/iproute/iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm /vz2/private/&amp;lt;1527&amp;gt;/fs/root&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enter their ve, install rpm, restart network, and prevent yum from updating iproute.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ve &amp;lt;1527&amp;gt;&lt;br /&gt;
rpm -i --force iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm&lt;br /&gt;
rpm -q iproute&lt;br /&gt;
rpm -e &amp;lt;new iproute rpm&amp;gt;&lt;br /&gt;
service network restart&lt;br /&gt;
echo &amp;quot;exclude=iproute&amp;quot; &amp;gt;&amp;gt;/etc/yum/ip-exclude&lt;br /&gt;
echo &amp;quot;include=/etc/yum/ip-exclude&amp;quot; &amp;gt;&amp;gt;/etc/yum.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
in a ve i do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /&lt;br /&gt;
du -h .&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and get: 483M    .&lt;br /&gt;
&lt;br /&gt;
i do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;bash-2.05a# df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  2.3G  1.7G  56% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
how can this be?&lt;br /&gt;
&lt;br /&gt;
Is it possible that the quota file was corrupted somehow? Please try:&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl stop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota drop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota init &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzctl start &amp;lt;VEID&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
How to stop vz from starting after reboot:&lt;br /&gt;
&lt;br /&gt;
 VIRTUOZZO=no &lt;br /&gt;
in &lt;br /&gt;
 /etc/sysconfig/vz&lt;br /&gt;
&lt;br /&gt;
To start: &lt;br /&gt;
 service vz start&lt;br /&gt;
(after setting VIRTUOZZO=yes in /etc/sysconfig/vz)&lt;br /&gt;
&lt;br /&gt;
service vz restart will do some kind of &#039;soft reboot&#039; -- restart all&lt;br /&gt;
VPSes and reload modules without rebooting the node&lt;br /&gt;
&lt;br /&gt;
if you need to shut down all VPSes really really fast, run killall -9 init&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Postfix tip:&lt;br /&gt;
&lt;br /&gt;
You may want to tweak settings: default_process_limit=10&lt;br /&gt;
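For reference, that parameter lives in main.cf; a minimal fragment (10 is the value suggested above; apply it with a postfix reload):&lt;br /&gt;

```ini
# /etc/postfix/main.cf
# Upper bound on concurrent processes per Postfix service (default: 100).
default_process_limit = 10
```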
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* The uptime/load that vz users see isn’t what we see; it&#039;s their own load&lt;br /&gt;
* Vz overwrites resolv.conf on every boot with what’s in the ve config file. It also overwrites the hostname&lt;br /&gt;
* Quotas in vz: set Gid_limit in the ve’s vz config file (e.g. to 2000) and restart the ve. The errors leading up to this look like &amp;quot;out of files&amp;quot; errors, with no inode or space issues&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
== vm ==&lt;br /&gt;
&lt;br /&gt;
To create a new Linux VPS&lt;br /&gt;
&lt;br /&gt;
* get an IP address from mgmt screen&lt;br /&gt;
* run df -h to figure out which disk has enough space&lt;br /&gt;
* run vm with no arguments to get syntax and exact template name&lt;br /&gt;
* use col0xxxx for hostname if they don’t give you a hostname&lt;br /&gt;
* copy over dir, veid, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
 vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; )&lt;br /&gt;
&lt;br /&gt;
== disable ==&lt;br /&gt;
To disable a customer’s Virtuozzo VPS server.&lt;br /&gt;
&lt;br /&gt;
* stop the server&lt;br /&gt;
  vzctl stop &amp;lt;veid&amp;gt;&lt;br /&gt;
* prevent restart on server reboot and user restart via Control Panel&lt;br /&gt;
  vzctl set &amp;lt;veid&amp;gt; --disabled=yes --offline_management=no --save&lt;br /&gt;
&lt;br /&gt;
== cancelve ==&lt;br /&gt;
When a customer cancels all service.&lt;br /&gt;
&lt;br /&gt;
 cancelve &amp;lt;veid&amp;gt;&lt;br /&gt;
this will:&lt;br /&gt;
* stop a ve&lt;br /&gt;
* check for backups (offer to remove them from the backup server &lt;br /&gt;
and the backup.config)&lt;br /&gt;
* rename the private dir&lt;br /&gt;
* check for PTR, provide the commands to reset to default&lt;br /&gt;
* and rename the ve’s config&lt;br /&gt;
* remind you to remove firewall rules&lt;br /&gt;
* remind you to remove DNS entries&lt;br /&gt;
&lt;br /&gt;
== ipadd ==&lt;br /&gt;
 ipadd  &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
adds IP(s) to a ve&lt;br /&gt;
&lt;br /&gt;
== ipdel ==&lt;br /&gt;
 ipdel &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
removes IP(s) from a ve&lt;br /&gt;
&lt;br /&gt;
== vc ==&lt;br /&gt;
 vc &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
display the contents of the vz config file for the &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;cat /vzconf/&amp;lt;veid&amp;gt;.conf&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vl ==&lt;br /&gt;
 vl&lt;br /&gt;
&lt;br /&gt;
List all virtual environments running on this machine.&lt;br /&gt;
&lt;br /&gt;
displays a list of ve #’s, one per line (ostensibly for use in a for loop)&lt;br /&gt;
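That one-VEID-per-line output is what makes vl loop-friendly; a sketch of the usual pattern (printf stands in for vl here, since vl only exists on a virt, and the vzctl command is only echoed):&lt;br /&gt;

```shell
# Iterate over VEIDs the way `for f in $(vl)` would on a real node.
veids=$(printf '134\n16027\n16055\n')
count=0
for veid in $veids; do
    echo "would run: vzctl status $veid"
    count=$((count + 1))
done
echo "checked $count ve's"
```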
&lt;br /&gt;
== vp ==&lt;br /&gt;
 vp &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
show processes running under this &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzps auxww -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vpe ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vpe &amp;lt;veid&amp;gt; &lt;br /&gt;
this will allow you to do a vp when a ve is running out of control, the equivalent of (deprecated since vp operates outside the VPS): &lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set &amp;lt;veid&amp;gt; --kmemsize 2100000:2200000&lt;br /&gt;
vzctl exec &amp;lt;veid&amp;gt; ps auxw&lt;br /&gt;
vzctl set &amp;lt;veid&amp;gt; --kmemsize (ve’s orig lvalue):(ve’s orig hvalue)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vt ==&lt;br /&gt;
 vt &amp;lt;veid&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vztop -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vr ==&lt;br /&gt;
 vr &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart a &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl stop &amp;lt;veid&amp;gt;; vzctl start &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
You can run this even if the ve is down - the stop command will just fail&lt;br /&gt;
&lt;br /&gt;
== vs ==&lt;br /&gt;
 vs [veid]&lt;br /&gt;
displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve configured on the system (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;)&lt;br /&gt;
If passed an argument, gives the status for just that ve. &lt;br /&gt;
A running system looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 16066 exist mounted running&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running (but does exist) looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 9990 exist unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running and doesn’t exist looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 421 deleted unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs2 ==&lt;br /&gt;
 vs2 [veid]&lt;br /&gt;
this is similar to vs in that it displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve,&lt;br /&gt;
but its list comes from doing an ls on the data dirs. This was meant to catch&lt;br /&gt;
the rare case where a ve exists but is not configured.&lt;br /&gt;
&lt;br /&gt;
== vw ==&lt;br /&gt;
 vw [veid]&lt;br /&gt;
displays the output of ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;’ (the equivalent of &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; w&amp;lt;/tt&amp;gt;) for each configured ve (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;). Useful for determining which ve is contributing to a heavily-loaded system.&lt;br /&gt;
If passed an argument, gives ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;‘ output for just that ve. &lt;br /&gt;
Ex:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt2 etc]# vw&lt;br /&gt;
134&lt;br /&gt;
 10:52pm  up 79 days,  6:14,  0 users,  load average: 0.02, 0.02, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16027&lt;br /&gt;
  2:52pm  up 7 days, 19:54,  0 users,  load average: 0.00, 0.00, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16055&lt;br /&gt;
  2:52pm  up 79 days,  6:38,  0 users,  load average: 0.00, 0.04, 0.07&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vwe ==&lt;br /&gt;
 vwe [constraint]&lt;br /&gt;
just like &amp;lt;tt&amp;gt;vw&amp;lt;/tt&amp;gt;, but takes a constraint as an argument and only shows ve’s with loads &amp;gt;= the constraint. If no constraint is provided, 1 is used by default.&lt;br /&gt;
&lt;br /&gt;
== vzs ==&lt;br /&gt;
 vzs [veid]&lt;br /&gt;
displays the beancounter status for all ve’s, or a particular ve if an argument is passed&lt;br /&gt;
&lt;br /&gt;
== ve ==&lt;br /&gt;
 ve &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
enter a &amp;lt;veid&amp;gt; as root.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl enter &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vx ==&lt;br /&gt;
 vx &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&lt;br /&gt;
&lt;br /&gt;
execute a single command in the &amp;lt;veid&amp;gt; environment.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;/usr/sbin/vzctl exec &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs [count]&lt;br /&gt;
a script which outputs a continuous report (or a certain number of reports if an option is passed) of processes stuck in the D state and which VPS’s those procs belong to.&lt;br /&gt;
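The underlying check can be approximated with a ps listing filtered on process state; a sketch over a canned listing (the sample lines stand in for real ps -eo state=,pid=,comm= output):&lt;br /&gt;

```shell
# Keep only processes in uninterruptible sleep (state D); on a virt,
# vzpid could then map each PID back to its ve.
sample='S 1 init
D 4312 nfsd
R 9981 httpd
D 5120 rsync'
d_procs=$(printf '%s\n' "$sample" | awk '$1 == "D" { print $2, $3 }')
printf '%s\n' "$d_procs"
```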
&lt;br /&gt;
== setmem ==&lt;br /&gt;
 setmem VEID &amp;lt;256|512|768|1024|1280|1536|2048&amp;gt; [burst GB]&lt;br /&gt;
adjusts the memory resources for the VE. If this is on VZ ver &amp;gt;4, you will also be prompted to provide a burst ram amount. Sample usage:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# setmem 21671 1024 4&lt;br /&gt;
/usr/sbin/vzctl set 21671 --ram 1024m --swap 3072m --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
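The swap figure in that sample follows from the burst amount: swap_mb = burst_gb * 1024 - ram_mb. A quick check of the arithmetic (the relationship is inferred from the sample output above, not from the script itself):&lt;br /&gt;

```shell
# Reproduce the vzctl arguments for: setmem 21671 1024 4
ram_mb=1024
burst_gb=4
swap_mb=$(( burst_gb * 1024 - ram_mb ))
echo "--ram ${ram_mb}m --swap ${swap_mb}m"
```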
&lt;br /&gt;
== afacheck.sh ==&lt;br /&gt;
 afacheck.sh&lt;br /&gt;
displays the health/status of containers and mirrors on an Adaptec card (currently quar1, tempvirt1-2, virt9, virt10); all others are LSI.&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update virt scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be setup as a cronjob to watch processes on a virt when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== findbackuppigs.pl ==&lt;br /&gt;
 findbackuppigs.pl&lt;br /&gt;
looks for files larger than 50MB which customers have asked us to backup. Emails matches&lt;br /&gt;
to linux@johncompanies.com&lt;br /&gt;
&lt;br /&gt;
== gatherlinux.pl ==&lt;br /&gt;
 gatherlinux.pl&lt;br /&gt;
gathers up data about ve’s configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficgather.pl ==&lt;br /&gt;
 linuxtrafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by ve to support@johncompanies.com and payments@johncompanies.com.&lt;br /&gt;
Optional arguments are year and month (must be in the past). If not passed, assumes last month. Relies on &lt;br /&gt;
traffic logs created by netstatreset and netstatbackup&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch.pl ==&lt;br /&gt;
 linuxtrafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo versions &amp;lt;= 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch2.pl ==&lt;br /&gt;
 linuxtrafficwatch2.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo version 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to the load mrtg; executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== mb (linux) ==&lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== migrate ==&lt;br /&gt;
 migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt;, a util to seamlessly move a ve from one host to another. This wrapper was written because Virtuozzo version 2.6 had a bug where the ve’s ip(s) on the src system were not properly removed from the arp/route tables; this script mitigates that. Since it makes multiple ssh connections to the target host, it’s a good idea to put the pub key for the src system in the authorized_keys file on the target host. In addition, it emails ve owners when their migration starts and stops (if they place email addresses in a file on their system: /migrate_notify). To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
== migrateonline ==&lt;br /&gt;
 migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is the same as migrate but will migrate a ve in &amp;lt;tt&amp;gt;--online&amp;lt;/tt&amp;gt; mode, which means it won’t be shut down at the end of the migration. This only works when migrating ve’s between 2 machines running a 2.6 kernel (currently tempvirt1-2, virt16-19, virt12). If you get an error that the machine you’re trying to migrate to has a different CPU or features, etc., then you have to edit the file and add the -f switch to the vzmigrate line; you can basically ignore this kind of warning (but never ignore a warning about missing templates on the destination node). NOTE: This edit (if made to migrateonline) will be overwritten by the base script during each night’s backup.&lt;br /&gt;
&lt;br /&gt;
== netstatbackup ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup &lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== netstatbackup2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== netstatreset ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset&lt;br /&gt;
writes traffic count data to a logfile and resets counters to 0. Works on virtuozzo versions 2.5.x &lt;br /&gt;
&lt;br /&gt;
== netstatreset2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatchlinux ==&lt;br /&gt;
 orphanedbackupwatchlinux &lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to &lt;br /&gt;
delete them&lt;br /&gt;
&lt;br /&gt;
== rsync.backup (linux) ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== startvirt.pl ==&lt;br /&gt;
 startvirt.pl&lt;br /&gt;
forks off start ve commands, keeping 6 running at a time. This is not to be used on systems where fastboot is enabled, as it circumvents the benefit of the fastboot. On some systems the script doesn’t exit cleanly and will continue to use up CPU, so it should be watched; you may have to ^C out of it. Be careful though: doing so can leave some ve’s in an odd bootup state, and you may need to ‘vr’ them manually and free up locks. Don’t exit from the script till you’re sure all ve’s are started; check which ve’s aren’t running and/or confirm all have started when ^C’ing out of startvirt.&lt;br /&gt;
&lt;br /&gt;
== taskdone (linux) ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was &lt;br /&gt;
executed as the subject&lt;br /&gt;
&lt;br /&gt;
== vb (linux) ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/sbin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vemakeXX ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemakerh9 &lt;br /&gt;
ve create script for RH9 (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian30 &lt;br /&gt;
ve create script for debian 3.0 (Woody) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian31 &lt;br /&gt;
ve create script for debian 3.1 (Sarge) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian40 &lt;br /&gt;
ve create script for debian 4.0 (Etch) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakefedora, vemakefedora2, vemakefedora4, vemakefedora5, vemakefedora6, vemakefedora7&lt;br /&gt;
ve create script for fedora core 1, 2, 4, 5, 6, 7 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakecentos3, vemakecentos4&lt;br /&gt;
ve create script for centos 3, 4 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakesuse, vemakesuse93, vemakesuse100&lt;br /&gt;
ve create script for suse 9.2, 9.3, 10.0 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakeubuntu5, vemakeubuntu606, vemakeubuntu610, vemakeubuntu704&lt;br /&gt;
ve create script for ubuntu 5.10, 6.06, 6.10, 7.04 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
== vemove ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemove &amp;lt;veid&amp;gt; &amp;lt;target_ip&amp;gt; &amp;lt;/vz/private/123&amp;gt;&lt;br /&gt;
this script simplifies the old way of moving ve’s from one system to another - in short moving a ve to or from a virt running virtuozzo &amp;lt; 2.6.x&lt;br /&gt;
It’s the equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;tar cfpP - &amp;lt;veid&amp;gt; --ignore-failed-read | (ssh -2 -c arcfour &amp;lt;target_ip&amp;gt; &amp;quot;split - -b 1024m &amp;lt;/vz/private/123&amp;gt;.tar&amp;quot; )&amp;lt;/tt&amp;gt;&lt;br /&gt;
This should only be used if migrate/vzmigrate can’t be used.&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog ==&lt;br /&gt;
 vim.watchdog &lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu. Works on virtuozzo versions 2.5.x&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog2 ==&lt;br /&gt;
 vim.watchdog2&lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu.&lt;br /&gt;
Works on virtuozzo versions 2.6.x.&lt;br /&gt;
&lt;br /&gt;
== vzmigrate ==&lt;br /&gt;
 vzmigrate &amp;lt;target_ip&amp;gt; -r no &amp;lt;veid&amp;gt;:[dst veid]:[dst /vzX/private/veid]:[dst /vzX/root/veid]&lt;br /&gt;
(this is the raw command “wrapped” by migrate/migrateonline) this will seamlessly move a ve from one host to another. The ve will run for the duration of the migration till the very end, when it’s shut down, its ip moved, and started up on the target system. The filesystem on the src will remain. This should be watched; occasionally the move will time out and leave the system shut down. If target private and root aren’t specified it just puts it in /vz. Only works when both systems are running virtuozzo 2.6.x.&lt;br /&gt;
&lt;br /&gt;
== vztrafdump.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions &amp;lt;= 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== vztrafdump2.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump2.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== addtun ==&lt;br /&gt;
 addtun &amp;lt;veid&amp;gt;&lt;br /&gt;
Adds a tun device to a ve.&lt;br /&gt;
&lt;br /&gt;
== bwcap ==&lt;br /&gt;
 bwcap &amp;lt;veid&amp;gt; &amp;lt;kbps&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;bwcap 1234 512&amp;lt;/tt&amp;gt;&lt;br /&gt;
Caps a VE’s bandwidth to the amount given&lt;br /&gt;
&lt;br /&gt;
== setdisk ==&lt;br /&gt;
 setdisk &amp;lt;veid&amp;gt; &amp;lt;diskspace in GB&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;setdisk 1234 5&amp;lt;/tt&amp;gt;&lt;br /&gt;
Gives a VE a given amount of disk space&lt;br /&gt;
&lt;br /&gt;
== vdf ==&lt;br /&gt;
 vdf &amp;lt;veid&amp;gt; &lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; df -h&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vdff ==&lt;br /&gt;
 vdff&lt;br /&gt;
runs a (condensed) vdf for all ve’s in your pwd (must be run from /vz/privateN)&lt;br /&gt;
&lt;br /&gt;
== mvbackups ==&lt;br /&gt;
 mvbackups &amp;lt;veid&amp;gt; &amp;lt;target_machine&amp;gt; (virt1) &amp;lt;target_dir&amp;gt; (vz1)&lt;br /&gt;
moves backups from one location to another on the backup server, and provides you with option to remove entries from current backup.config, and simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== checkquota ==&lt;br /&gt;
 checkquota&lt;br /&gt;
for all the ve’s in the cwd (run from /vz/private, /vz1/private, etc) reports what vz quota says they’re using and what the actual usage is (as reported by du)&lt;br /&gt;
&lt;br /&gt;
== clearquota ==&lt;br /&gt;
 clearquota &amp;lt;veid&amp;gt;&lt;br /&gt;
Recalculates a ve’s quota, prints out the usage before and after. The equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;vdf &amp;lt;veid&amp;gt;; v stop &amp;lt;veid&amp;gt;; vzquota drop &amp;lt;veid&amp;gt;; v start &amp;lt;veid&amp;gt;; vdf &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs&lt;br /&gt;
Sometimes the servers have a large number of processes get stuck in the D state; this script shows (every 3 secs) which ve’s have D procs, which procs&lt;br /&gt;
are stuck and a running average of the top “offenders”&lt;br /&gt;
&lt;br /&gt;
== vzstat ==&lt;br /&gt;
 vzstat&lt;br /&gt;
sort of like top for VZ. Sort VEs by CPU usage by pressing the &#039;o&#039; and then &#039;c&#039; keys&lt;br /&gt;
&lt;br /&gt;
== stopvirt ==&lt;br /&gt;
 stopvirt&lt;br /&gt;
will stop VEs as fast as it can, 6 at a time. May not exit when complete so you should watch [[#vzstat|vzstat]] in another window.&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=Routine_Maintenance&amp;diff=2232</id>
		<title>Routine Maintenance</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=Routine_Maintenance&amp;diff=2232"/>
		<updated>2020-06-09T22:02:19Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Free up space on Other Servers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Daily Tasks =&lt;br /&gt;
&lt;br /&gt;
== check load graphs == &lt;br /&gt;
&lt;br /&gt;
Click on the Load link in mgmt &lt;br /&gt;
&lt;br /&gt;
This screen shows you load levels on our servers and network traffic for critical machines (firewalls, backup servers).&lt;br /&gt;
&lt;br /&gt;
If you see load high or increasing &lt;br /&gt;
&lt;br /&gt;
FreeBSD: &lt;br /&gt;
run [[VPS_Management#jtop|jtop]] (or [[VPS_Management#jt|jt]] &amp;gt; 7.x) and see if there are any runaway processes.  Here are some examples of entries in top that are&lt;br /&gt;
definitely runaway processes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;79481 root      64   0  2256K  1056K CPU1   1  58:16 87.40% 87.40% nano&lt;br /&gt;
50650   1000    64   0  1852K  1112K RUN    0 207.9H 84.08% 84.08% screen&lt;br /&gt;
14829 www        2   0 39100K 31736K accept 0  104:24  46.54%  6.54% httpd&lt;br /&gt;
42065 root      61   0  1300K   844K RUN    1  47.8H 91.36% 91.36% ee&lt;br /&gt;
1328 www       56   0 18440K 10796K CPU1   0  64.4H 97.71% 97.71% httpd&lt;br /&gt;
26251 user      57   0  6124K  1160K CPU1   1  82.9H 98.44% 98.44% screen&lt;br /&gt;
89874 root      60   0  1352K   892K RUN    1  33.8H 65.82% 65.82% dialog&lt;br /&gt;
38656   1000    64   0  3088K  2136K CPU0   0 806:13 97.95% 97.95% StutBot&lt;br /&gt;
27630 root      64   0  1396K   972K RUN    1  76.8H 86.47% 86.47% ee&amp;lt;/pre&amp;gt;&lt;br /&gt;
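A quick mechanical triage of such a listing is to flag entries above a CPU threshold; a sketch over a few of the sample lines (field positions assume the WCPU column is field 10 and the command is field 12, as in the top output above):&lt;br /&gt;

```shell
# Print PID and command for any process whose WCPU exceeds 80%.
sample='79481 root 64 0 2256K 1056K CPU1 1 58:16 87.40% 87.40% nano
14829 www 2 0 39100K 31736K accept 0 104:24 46.54% 6.54% httpd
1328 www 56 0 18440K 10796K CPU1 0 64.4H 97.71% 97.71% httpd'
suspects=$(printf '%s\n' "$sample" \
    | awk '{ cpu = $10; sub(/%/, "", cpu); if (cpu + 0 > 80) print $1, $12 }')
printf '%s\n' "$suspects"
```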
&lt;br /&gt;
Linux:&lt;br /&gt;
run [[VPS_Management#vwe|vwe]] to see which VPS’s have high loads. From there run &amp;lt;tt&amp;gt;[[VPS_Management#vp|vp]] &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt; and/or &amp;lt;tt&amp;gt;[[VPS_Management#vt|vt]] &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt; to see what&#039;s going on in that system.&lt;br /&gt;
[[VPS_Management#vzstat|vzstat]] will also give you a nice picture of what’s going on; systems with high numbers in the mlat column are likely culprits.&lt;br /&gt;
&lt;br /&gt;
examples of out of control procs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;12183 nobody    16   0  4916 1348  1340 R    45.5  0.0  4249m httpd&lt;br /&gt;
29266 #502      16   0  1852  796   792 R    22.5  0.0  1104m vim&lt;br /&gt;
23860 #41       16   0  5472 5472  2076 R    98.9  0.2  31:41 python&lt;br /&gt;
19227 bin       19   0  1688  716   652 R    99.9  0.0 321:08 wtrs_ui&lt;br /&gt;
 7762 apache    16   0   268  236   224 R    85.7  0.0  1010m ptrace&lt;br /&gt;
 4624 #501      20   0  4304 2400  2044 R    53.6  0.1 284:32 YoSucker&lt;br /&gt;
20451 #506      20   0  1876  820   816 R    17.2  0.0 169:35 vim&lt;br /&gt;
 8834 #514      20   0   900  724   672 R    77.6  0.0 382:30 neostats&lt;br /&gt;
31815 apache    14   0  3176 3176  1696 R    74.4  0.1   6:15 counter&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Just kill -9 them and be done with it.&lt;br /&gt;
&lt;br /&gt;
Also, anytime you see `kmod` or `ptrace` - kill those immediately no matter how much they are using - they are attempts to exploit the linux ptrace bug.  They won&#039;t work, but they suck a lot of CPU...&lt;br /&gt;
&lt;br /&gt;
Also, any other processes that are at 90-100% cpu usage and have been running for a long period of time should be killed, except for mysqld processes.&lt;br /&gt;
&lt;br /&gt;
However, there is an exception:&lt;br /&gt;
&lt;br /&gt;
if it is a mysqld, we don&#039;t want to kill their database.  What you want to do is &amp;lt;tt&amp;gt;[[VPS_Management#jpid|jpid]] &amp;lt;pid&amp;gt;&amp;lt;/tt&amp;gt; to see who owns it, and then email them the paste containing the instructions for the nanny. Or you can simply do a &amp;lt;tt&amp;gt;kill -1 PID&amp;lt;/tt&amp;gt; on the process to restart it.&lt;br /&gt;
&lt;br /&gt;
=== Load averages jump at night ===&lt;br /&gt;
&lt;br /&gt;
The load averages on the FreeBSD systems may jump up at night between 1 and 4 am - this is because the backups are running - if this is what is causing a jump in load, you will see processes like `rsync` in top eating a lot of CPU time.&lt;br /&gt;
&lt;br /&gt;
== check backups ==&lt;br /&gt;
&lt;br /&gt;
mgmt -&amp;gt; Monitoring -&amp;gt; Backups and make sure every machine was backed up the previous night.&lt;br /&gt;
Also look at df on backup1 and backup2 to make sure no disk is approaching full, though bb should warn us in advance. Please note: errors encountered when a backup script runs on any of the systems will generate an email to support@johncompanies.com, so you can know immediately the next day if a directory to be backed up has been moved or no longer exists. A paste exists for this to notify the customer of a non-existent file/dir.&lt;br /&gt;
&lt;br /&gt;
== check bb for warnings  ==&lt;br /&gt;
mgmt -&amp;gt; BigBrother&lt;br /&gt;
&lt;br /&gt;
Some events don&#039;t generate pages (on purpose). You will only see them by going to the bb main page.&lt;br /&gt;
&lt;br /&gt;
== check jail5 for crashed VPSs  ==&lt;br /&gt;
&lt;br /&gt;
On jail5&lt;br /&gt;
  &lt;br /&gt;
  notrunning&lt;br /&gt;
&lt;br /&gt;
To restart a VPS &lt;br /&gt;
&lt;br /&gt;
  vm restart col0XXXX&lt;br /&gt;
&lt;br /&gt;
== Check NetHere  ==&lt;br /&gt;
&lt;br /&gt;
Check the NetHere servers.  To get into the servers, login to&lt;br /&gt;
admin-1.nethere.net and su - to root.&lt;br /&gt;
&lt;br /&gt;
=== Mail systems ===&lt;br /&gt;
Check for possible SPAMMERs.&lt;br /&gt;
&lt;br /&gt;
==== Incoming ====&lt;br /&gt;
Check mta-1 and mta-2 count of customer logins for possible&lt;br /&gt;
customer SPAM compromises.&lt;br /&gt;
&lt;br /&gt;
  login_count /logs/maillog | tail -30&lt;br /&gt;
  &lt;br /&gt;
==== Outgoing ====&lt;br /&gt;
&lt;br /&gt;
Check outgoing queues on relay-1 and relay-2&lt;br /&gt;
  &lt;br /&gt;
  mail_count | tail -30&lt;br /&gt;
&lt;br /&gt;
To clean up outgoing queues of unwanted SPAM on relay-1 and relay-2.&lt;br /&gt;
&lt;br /&gt;
  mail_cleanup [ &amp;lt;sender&#039;s domain/username/message id&amp;gt; ... ]&lt;br /&gt;
&lt;br /&gt;
To just remove emails from some senders.&lt;br /&gt;
&lt;br /&gt;
  rmmails &amp;lt;sender&#039;s domain/username/message id&amp;gt; ...&lt;br /&gt;
&lt;br /&gt;
=== Nagios ===&lt;br /&gt;
&lt;br /&gt;
Check for other problems on NetHere servers&lt;br /&gt;
&lt;br /&gt;
  https://nagios.nethere.net&lt;br /&gt;
&lt;br /&gt;
=== Cacti ===&lt;br /&gt;
&lt;br /&gt;
Check bandwidth usage on servers&lt;br /&gt;
&lt;br /&gt;
  https://cacti.nethere.net&lt;br /&gt;
&lt;br /&gt;
= Monthly Tasks =&lt;br /&gt;
== rotate pine sent mail (1st of month) ==&lt;br /&gt;
On the 1st of the month, before any emails are sent out, quit out of pine, then log back in. Sent mail from last month will be archived.&lt;br /&gt;
If you mess up and do it on the 3rd (for example), you can go into the previous month&#039;s saved email and save emails from the current month into the &amp;lt;tt&amp;gt;sent-mail&amp;lt;/tt&amp;gt; (current month) mailbox.&lt;br /&gt;
&lt;br /&gt;
== b/w caps ==&lt;br /&gt;
On the 1st: remove any bwcaps put into the firewall (this only really applies if a bwcap was added because someone went over on b/w):&lt;br /&gt;
&lt;br /&gt;
 ipfw list|grep pipe&lt;br /&gt;
 ipfw del [each rule listed]&lt;br /&gt;
&lt;br /&gt;
NOTE: this cronjob on newgateway will do some of that for you, provided you used one of the following pipe #s:&lt;br /&gt;
 0 0 1 * * /sbin/ipfw del 3  4 5 17331&lt;br /&gt;
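For non-standard pipe numbers, the manual cleanup can be scripted by pulling rule numbers out of the pipe listing; a sketch over sample ipfw output (the rule lines are illustrative, and the del commands are only echoed):&lt;br /&gt;

```shell
# Sample stands in for: ipfw list | grep pipe
sample='00300 pipe 3 ip from any to 10.0.0.5
00400 pipe 4 ip from 10.0.0.6 to any'
rules=$(printf '%s\n' "$sample" | awk '$2 == "pipe" { print $1 }')
for r in $rules; do
    echo "would run: ipfw del $r"
done
```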
&lt;br /&gt;
-------------------&lt;br /&gt;
&lt;br /&gt;
We really don’t do this anymore since we have centralized traffic accounting with netflow, but for posterity:&lt;br /&gt;
&lt;br /&gt;
Make sure all machines reset counters to 0 after midnight on the 1st&lt;br /&gt;
Make sure they dumped a counter&lt;br /&gt;
&lt;br /&gt;
On each jail run:&lt;br /&gt;
 trafficgather.pl&lt;br /&gt;
&lt;br /&gt;
And on each virt:&lt;br /&gt;
 linuxtrafficgather.pl&lt;br /&gt;
&lt;br /&gt;
== Monthly RAID checks ==&lt;br /&gt;
&lt;br /&gt;
Every month we check the health of and verify the parity on all our RAID-based systems.&lt;br /&gt;
To facilitate this, we&#039;ve created a simple script to start the process:&lt;br /&gt;
&lt;br /&gt;
 sh /root/verify.sh&lt;br /&gt;
&lt;br /&gt;
=== Adaptec controllers ===&lt;br /&gt;
Here&#039;s some sample output:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /usr/local/www/scripts# sh /root/verify.sh&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; container list /f&lt;br /&gt;
Executing: container list /full=TRUE&lt;br /&gt;
Num          Total  Oth Chunk          Scsi   Partition&lt;br /&gt;
Creation        System&lt;br /&gt;
Label Type   Size   Ctr Size   Usage   B:ID:L Offset:Size   State   RO Lk Task    Done%  Ent&lt;br /&gt;
Date   Time      Files&lt;br /&gt;
----- ------ ------ --- ------ ------- ------ ------------- ------- -- -- ------- ------ ---&lt;br /&gt;
------ -------- ------&lt;br /&gt;
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
 /dev/aacd0           mirror0          0:00:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
&lt;br /&gt;
 1    Mirror 33.9GB            Open    0:02:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
 /dev/aacd1           mirror1          0:03:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; disk list /f&lt;br /&gt;
Executing: disk list /full=TRUE&lt;br /&gt;
&lt;br /&gt;
B:ID:L  Device Type     Removable media  Vendor-ID Product-ID        Rev   Blocks    Bytes/Bl&lt;br /&gt;
ock Usage            Shared Rate&lt;br /&gt;
------  --------------  ---------------  --------- ----------------  ----- --------- --------&lt;br /&gt;
--- ---------------- ------ ----&lt;br /&gt;
0:00:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:01:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:02:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:03:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; disk show smart&lt;br /&gt;
Executing: disk show smart&lt;br /&gt;
&lt;br /&gt;
        Smart    Method of         Enable&lt;br /&gt;
        Capable  Informational     Exception  Performance  Error&lt;br /&gt;
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count&lt;br /&gt;
------  -------  ----------------  ---------  -----------  ------&lt;br /&gt;
0:00:0     Y            6             Y           N             0&lt;br /&gt;
0:01:0     Y            6             Y           N             0&lt;br /&gt;
0:02:0     Y            6             Y           N             0&lt;br /&gt;
0:03:0     Y            6             Y           N             0&lt;br /&gt;
0:06:0     N&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; task list&lt;br /&gt;
Executing: task list&lt;br /&gt;
&lt;br /&gt;
Controller Tasks&lt;br /&gt;
&lt;br /&gt;
TaskId Function  Done%  Container State Specific1 Specific2&lt;br /&gt;
------ -------- ------- --------- ----- --------- ---------&lt;br /&gt;
&lt;br /&gt;
No tasks currently running on controller&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; dia sh hi&lt;br /&gt;
Executing: diagnostic show history&lt;br /&gt;
No switches specified, defaulting to &amp;quot;/current&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 *** HISTORY BUFFER FROM CURRENT CONTROLLER RUN ***&lt;br /&gt;
&lt;br /&gt;
[00]: GetDiskLogEntry: container - 1, entry return 0&lt;br /&gt;
[01]: Container 1 started SCRUB task&lt;br /&gt;
[02]: Starting Mirror:1 scrub&lt;br /&gt;
[03]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[04]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[05]: UpdateDiskLogIndex - Set   - container 0, index 1&lt;br /&gt;
[06]: GetDiskLogEntry: container - 0, entry return 1&lt;br /&gt;
[07]: Container 0 started SCRUB task&lt;br /&gt;
[08]: Starting Mirror:0 scrub&lt;br /&gt;
[09]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[10]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[11]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[12]: Clear disk log: sector - 80, driveno 2&lt;br /&gt;
[13]: Clear disk log: sector - 80, driveno 3&lt;br /&gt;
[14]: Container 1 completed SCRUB task:&lt;br /&gt;
[15]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[16]: Clear disk log: sector - 81, driveno 1&lt;br /&gt;
[17]: Clear disk log: sector - 81, driveno 0&lt;br /&gt;
[18]: Container 0 completed SCRUB task:&lt;br /&gt;
[19]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[20]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[21]: Container 0 started SCRUB task&lt;br /&gt;
[22]: Starting Mirror:0 scrub&lt;br /&gt;
[23]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[24]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[25]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[26]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[27]: Container 1 started SCRUB task&lt;br /&gt;
[28]: Starting Mirror:1 scrub&lt;br /&gt;
[29]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[30]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[31]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[32]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[33]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[34]: Container 1 completed SCRUB task:&lt;br /&gt;
[35]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[36]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[37]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[38]: Container 0 completed SCRUB task:&lt;br /&gt;
[39]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[40]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[41]: Container 0 started SCRUB task&lt;br /&gt;
[42]: Starting Mirror:0 scrub&lt;br /&gt;
[43]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[44]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[45]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[46]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[47]: Container 1 started SCRUB task&lt;br /&gt;
[48]: Starting Mirror:1 scrub&lt;br /&gt;
[49]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[50]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[51]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[52]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[53]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[54]: Container 1 completed SCRUB task:&lt;br /&gt;
[55]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[56]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[57]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[58]: Container 0 completed SCRUB task:&lt;br /&gt;
[59]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[60]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[61]: Container 0 started SCRUB task&lt;br /&gt;
[62]: Starting Mirror:0 scrub&lt;br /&gt;
[63]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[64]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[65]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[66]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[67]: Container 1 started SCRUB task&lt;br /&gt;
[68]: Starting Mirror:1 scrub&lt;br /&gt;
[69]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[70]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[71]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[72]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[73]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[74]: Container 1 completed SCRUB task:&lt;br /&gt;
[75]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[76]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[77]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[78]: Container 0 completed SCRUB task:&lt;br /&gt;
[79]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[80]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[81]: Container 0 started SCRUB task&lt;br /&gt;
[82]: Starting Mirror:0 scrub&lt;br /&gt;
[83]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[84]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[85]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[86]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[87]: Container 1 started SCRUB task&lt;br /&gt;
[88]: Starting Mirror:1 scrub&lt;br /&gt;
[89]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[90]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[91]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[92]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[93]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[94]: Container 1 completed SCRUB task:&lt;br /&gt;
[95]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[96]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[97]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[98]: Container 0 completed SCRUB task:&lt;br /&gt;
[99]:&lt;br /&gt;
&lt;br /&gt;
========================&lt;br /&gt;
History Output Complete.&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt;&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
press enter when ready to run verify                                                 &amp;lt;INS&amp;gt;&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 0&lt;br /&gt;
Executing: container scrub 0&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 1&lt;br /&gt;
Executing: container scrub 1&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
when done run:                                                                       &lt;br /&gt;
&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nov  1 10:32:46 mail /kernel: aac0: **Monitor** Container 0 started SCRUB task&lt;br /&gt;
Nov  1 10:32:47 mail /kernel: aac0: **Monitor** Container 1 started SCRUB task&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here&#039;s an analysis of what we&#039;re seeing and what we&#039;re looking for:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; container list /f&lt;br /&gt;
Executing: container list /full=TRUE&lt;br /&gt;
Num          Total  Oth Chunk          Scsi   Partition&lt;br /&gt;
Creation        System&lt;br /&gt;
Label Type   Size   Ctr Size   Usage   B:ID:L Offset:Size   State   RO Lk Task    Done%  Ent&lt;br /&gt;
Date   Time      Files&lt;br /&gt;
----- ------ ------ --- ------ ------- ------ ------------- ------- -- -- ------- ------ ---&lt;br /&gt;
------ -------- ------&lt;br /&gt;
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
 /dev/aacd0           mirror0          0:00:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
&lt;br /&gt;
 1    Mirror 33.9GB            Open    0:02:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
 /dev/aacd1           mirror1          0:03:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows the health of the arrays. You&#039;re looking for &#039;&#039;Normal&#039;&#039; in the State column, and the absence of a ! in the Offset:Size field - sometimes you&#039;ll see this:&lt;br /&gt;
 64.0KB!33.9GB &lt;br /&gt;
&lt;br /&gt;
That indicates a problem.&lt;br /&gt;
&lt;br /&gt;
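This check is easy to script. Here&#039;s a minimal sketch - the helper name and the idea of capturing the CLI output to a file are assumptions, not part of our tooling - that scans saved &amp;lt;tt&amp;gt;container list /f&amp;lt;/tt&amp;gt; output for either symptom:&lt;br /&gt;

```shell
# Sketch (hypothetical helper): scan saved "container list /f" output
# for a "!" in the Offset:Size field or a State other than Normal.
# $1 is a file holding the captured aaccli output.
scan_containers() {
    if grep -q '!' "$1"; then
        echo "WARNING: found ! in Offset:Size - mirror member problem"
    fi
    if grep 'Mirror' "$1" | grep -qv 'Normal'; then
        echo "WARNING: a container state is not Normal"
    fi
}
```

Run it against a capture, e.g. &amp;lt;tt&amp;gt;scan_containers /tmp/containers.txt&amp;lt;/tt&amp;gt;; no output means both checks passed.&lt;br /&gt;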
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; disk show smart&lt;br /&gt;
Executing: disk show smart&lt;br /&gt;
&lt;br /&gt;
        Smart    Method of         Enable&lt;br /&gt;
        Capable  Informational     Exception  Performance  Error&lt;br /&gt;
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count&lt;br /&gt;
------  -------  ----------------  ---------  -----------  ------&lt;br /&gt;
0:00:0     Y            6             Y           N             0&lt;br /&gt;
0:01:0     Y            6             Y           N             0&lt;br /&gt;
0:02:0     Y            6             Y           N             0&lt;br /&gt;
0:03:0     Y            6             Y           N             0&lt;br /&gt;
0:06:0     N&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the SMART report. You&#039;re looking for nonzero values in the Error Count column - any drive with a nonzero count deserves a closer look.&lt;br /&gt;
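The same check can be scripted - a sketch (helper name assumed) that flags any nonzero Error Count row in saved &amp;lt;tt&amp;gt;disk show smart&amp;lt;/tt&amp;gt; output:&lt;br /&gt;

```shell
# Sketch (hypothetical helper): flag drives whose SMART Error Count
# (last column of each B:ID:L data row) is nonzero. $1 is a file
# holding the captured "disk show smart" output.
smart_errors() {
    awk '/^[0-9]+:[0-9]+:[0-9]+/ { if ($NF + 0 > 0) print "WARNING: " $1 " SMART error count " $NF }' "$1"
}
```

Rows like the &amp;lt;tt&amp;gt;0:06:0&amp;lt;/tt&amp;gt; non-SMART device end in a letter and are skipped by the numeric test.&lt;br /&gt;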
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; task list&lt;br /&gt;
Executing: task list&lt;br /&gt;
&lt;br /&gt;
Controller Tasks&lt;br /&gt;
&lt;br /&gt;
TaskId Function  Done%  Container State Specific1 Specific2&lt;br /&gt;
------ -------- ------- --------- ----- --------- ---------&lt;br /&gt;
&lt;br /&gt;
No tasks currently running on controller&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for the absence of running tasks - it would be a bad sign to see a rebuild or verify running that you didn&#039;t initiate.&lt;br /&gt;
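A scripted version of this check (helper name assumed), keying on the literal &amp;quot;No tasks&amp;quot; line shown above:&lt;br /&gt;

```shell
# Sketch (hypothetical helper): confirm the controller is idle before
# starting a verify. $1 is a file holding captured "task list" output.
controller_idle() {
    if grep -q 'No tasks currently running' "$1"; then
        echo "controller idle"
    else
        echo "WARNING: a task is already running"
    fi
}
```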
&lt;br /&gt;
With the history output, you&#039;re looking for any anomalies or events since the last time a verify was run. If you see a drive with lots of problems, you may want to take backups before allowing the verify to run since it could replicate errors onto the good drive.&lt;br /&gt;
&lt;br /&gt;
After the history output, the script prompts you to press enter to run the verify. If you&#039;re happy with everything you&#039;re seeing - the mirror is healthy and the history looks clean - it&#039;s safe to proceed; otherwise, ^C to exit. After you hit enter, the script starts the verify and begins tailing the messages log file (so you can easily see when the verify completes). Here&#039;s what that&#039;ll look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Nov  1 14:38:08 mail /kernel: aac0: **Monitor** Container 1 completed SCRUB task:&lt;br /&gt;
Nov  1 14:46:45 mail /kernel: aac0: **Monitor** Container 0 completed SCRUB task:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
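Since the completion lines land in the messages log, the wait can be scripted rather than watched by hand. A sketch - the log path and the two-container count are assumptions for this particular server:&lt;br /&gt;

```shell
# Sketch: poll the system log until both containers have logged scrub
# completion. The default log path and the count of 2 containers are
# assumptions for this particular server.
wait_for_scrub() {
    log=${1:-/var/log/messages}
    while true; do
        n=$(grep -c 'completed SCRUB task' "$log")
        if [ "$n" -ge 2 ]; then
            echo "both scrubs complete"
            return 0
        fi
        sleep 60
    done
}
```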
So, putting it all together, after hitting enter to start the verify, you&#039;ll see:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 0&lt;br /&gt;
Executing: container scrub 0&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 1&lt;br /&gt;
Executing: container scrub 1&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
when done run:                                                                       &lt;br /&gt;
&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nov  1 10:32:46 mail /kernel: aac0: **Monitor** Container 0 started SCRUB task&lt;br /&gt;
Nov  1 10:32:47 mail /kernel: aac0: **Monitor** Container 1 started SCRUB task&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the scrub(s) (verify) are complete - if the server has multiple logical drives, it runs them in parallel - you should exit the tail of the log file (^C) and run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows the diagnostic history; you&#039;re looking for the results of the most recent scrub:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[100]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[101]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[102]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[103]: Container 1 completed SCRUB task:&lt;br /&gt;
[104]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[105]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[106]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[107]: Container 0 completed SCRUB task:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
^C to exit the RAID CLI.&lt;br /&gt;
&lt;br /&gt;
If you see:&lt;br /&gt;
&lt;br /&gt;
 [104]: Mirror Scrub Container:0   ErrorsFound:5&lt;br /&gt;
&lt;br /&gt;
You&#039;ll want to rerun the verify on that array until it shows 0, or perhaps replace the drive - you should be able to tell from the output which drive had the problem.&lt;br /&gt;
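Scanning the history for nonzero ErrorsFound values can also be scripted (helper name assumed):&lt;br /&gt;

```shell
# Sketch (hypothetical helper): print any scrub result in saved
# "dia sh hi" output ($1) whose ErrorsFound value is nonzero.
scrub_errors() {
    awk -F 'ErrorsFound:' '/ErrorsFound/ { if ($2 + 0 > 0) print "WARNING: " $0 }' "$1"
}
```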
&lt;br /&gt;
Depending on the size of the array and how busy it is, the verify can take anywhere from an hour to the better part of a day.&lt;br /&gt;
&lt;br /&gt;
You will notice that the diagnostic history is not shown on our modern Adaptec cards (i.e. any Adaptec card not in a Dell 2450). The reason is that the history is never cleared, so there&#039;s simply too much data to show and displaying it crashes the CLI - don&#039;t bother trying. That does make it hard to see whether there are problems, so you just need to watch the scrub and confirm it reaches 100%. You will also notice that on some servers there&#039;s no tail of messages; again, that&#039;s because no data about the completion of the scrub is logged there. In that case, go into the CLI and periodically run &amp;lt;tt&amp;gt;show tasks&amp;lt;/tt&amp;gt; to monitor scrub progress.&lt;br /&gt;
&lt;br /&gt;
See [[RAIC_CLI#Adaptec|Adaptec RAID CLI Reference]] for more details on how to use the CLI.&lt;br /&gt;
&lt;br /&gt;
=== DELL (LSI-based) SAS controllers ===&lt;br /&gt;
&lt;br /&gt;
Here&#039;s what the output looks like when running verify.sh on a LSI-based card:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jail2 /mnt/data2# sh /root/verify.sh&lt;br /&gt;
&lt;br /&gt;
Adapter #0&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 0&lt;br /&gt;
Device Id: 0&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018396142&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 0(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DESN&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 1&lt;br /&gt;
Device Id: 1&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018395db2&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 1(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DERV&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 2&lt;br /&gt;
Device Id: 2&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50006eece89&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 2(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300555SS     T2113LM4BFBZ&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 3&lt;br /&gt;
Device Id: 3&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50006eee035&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 3(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300555SS     T2113LM4BGF7&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 4&lt;br /&gt;
Device Id: 4&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50004bd7ea5&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 4(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300656SS     HS093QP0G8SW&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 5&lt;br /&gt;
Device Id: 5&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e01f1c4112&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 5(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MBA3300RC       D306BJ15P9201W06&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 0 (Target Id: 0)&lt;br /&gt;
Name:&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:139392MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 1 (Target Id: 1)&lt;br /&gt;
Name:MIRROR1&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 2 (Target Id: 2)&lt;br /&gt;
Name:MIRROR2&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
Battery FRU     : N/A&lt;br /&gt;
Battery Warning                  : Enabled&lt;br /&gt;
Memory Correctable Errors   : 0&lt;br /&gt;
Memory Uncorrectable Errors : 0&lt;br /&gt;
BBU             : Present&lt;br /&gt;
BBU                             : Yes&lt;br /&gt;
Cache When BBU Bad               : Disabled&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before pressing enter, here&#039;s what we&#039;re looking for:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 0&lt;br /&gt;
Device Id: 0&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018396142&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 0(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DESN&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the output shown for each physical drive in the system. We&#039;re looking to confirm its Firmware state is &#039;&#039;Online&#039;&#039;, and that Media Error Count, Other Error Count, and Predictive Failure Count are all zero (or near zero).&lt;br /&gt;
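These per-drive fields are easy to check mechanically. A sketch (helper name assumed) over a saved copy of the physical-drive output:&lt;br /&gt;

```shell
# Sketch (hypothetical helper): walk saved MegaCli physical-drive output
# ($1) and flag nonzero error counters or a non-Online firmware state.
pd_health() {
    awk -F ': ' '
        /^Slot Number/ { slot = $2 }
        /^Media Error Count|^Other Error Count|^Predictive Failure Count/ {
            if ($2 + 0 > 0) print "WARNING: slot " slot ": " $0
        }
        /^Firmware state/ {
            if ($2 != "Online") print "WARNING: slot " slot ": " $0
        }
    ' "$1"
}
```

No output means every drive reported Online with zero counts.&lt;br /&gt;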
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 1 (Target Id: 1)&lt;br /&gt;
Name:MIRROR1&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the output for each logical drive. We&#039;re looking for State &#039;&#039;Optimal&#039;&#039;. Also confirm &amp;lt;tt&amp;gt;Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&amp;lt;/tt&amp;gt;.&lt;br /&gt;
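The same idea works for the logical-drive output (helper name assumed):&lt;br /&gt;

```shell
# Sketch (hypothetical helper): flag any logical drive in saved MegaCli
# virtual-drive output ($1) whose State is not Optimal.
ld_health() {
    awk -F ': ' '
        /^Virtual Disk/ { vd = $2; sub(/ .*/, "", vd) }
        /^State/ { if ($2 != "Optimal") print "WARNING: " vd " state " $2 }
    ' "$1"
}
```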
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
Battery FRU     : N/A&lt;br /&gt;
Battery Warning                  : Enabled&lt;br /&gt;
Memory Correctable Errors   : 0&lt;br /&gt;
Memory Uncorrectable Errors : 0&lt;br /&gt;
BBU             : Present&lt;br /&gt;
BBU                             : Yes&lt;br /&gt;
Cache When BBU Bad               : Disabled&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the battery is present and error-free.&lt;br /&gt;
&lt;br /&gt;
If all that checks out, you&#039;re ready to proceed with the verify. After pressing enter, the verify is started and here&#039;s what you see:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Start Check Consistency on Virtual Drive 0 (target id: 0) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
Start Check Consistency on Virtual Drive 1 (target id: 1) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
Start Check Consistency on Virtual Drive 2 (target id: 2) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
  Check Consistency&lt;br /&gt;
&lt;br /&gt;
 Progress of Virtual Drives...&lt;br /&gt;
&lt;br /&gt;
  Virtual Drive #              Percent Complete                       Time Elps&lt;br /&gt;
          0         ░░░░░░░░░░░░░░░░░░░░░░░00 %░░░░░░░░░░░░░░░░░░░░░░░ 00:00:03&lt;br /&gt;
          1         ░░░░░░░░░░░░░░░░░░░░░░░00 %░░░░░░░░░░░░░░░░░░░░░░░ 00:00:02&lt;br /&gt;
          2         ░░░░░░░░░░░░░░░░░░░░░░░00 %░░░░░░░░░░░░░░░░░░░░░░░ 00:00:01&lt;br /&gt;
&lt;br /&gt;
    Press &amp;lt;ESC&amp;gt; key to quit...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The progress for each drive is displayed until all drives have completed the verify. We just want to make sure each drive runs to completion. No follow-up is needed, though there is probably a log or history where more detail could be found.&lt;br /&gt;
&lt;br /&gt;
You will notice that jail7 does not run a verify - that&#039;s intentional: the last time we tried, it crashed the system. On that server, the verify must be run from the BIOS (taking the system offline for a couple of hours).&lt;br /&gt;
&lt;br /&gt;
See [[RAIC_CLI#LSI|LSI RAID CLI Reference]] for more details on how to use the CLI.&lt;br /&gt;
&lt;br /&gt;
=== LSI-based controllers (megaraid) ===&lt;br /&gt;
&lt;br /&gt;
There is a CLI for these cards; however, it&#039;s easier to do this with the curses GUI app, &amp;lt;tt&amp;gt;megamgr&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Currently only on these servers: virt15, virt16, and firewall2&lt;br /&gt;
&lt;br /&gt;
To run:&lt;br /&gt;
&lt;br /&gt;
 # cd /usr/local/sbin/; megamgr&lt;br /&gt;
&lt;br /&gt;
Main menu:&lt;br /&gt;
&amp;lt;pre&amp;gt;▓┌──Management Menu──┐▓&lt;br /&gt;
▓│ Configure         │▓&lt;br /&gt;
▓│ Initialize        │▓&lt;br /&gt;
▓│ Objects           │▓&lt;br /&gt;
▓│ Rebuild           │▓&lt;br /&gt;
▓│ Check Consistency │▓&lt;br /&gt;
▓│ Advanced Menu     │▓&lt;br /&gt;
▓└───────────────────┘▓&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before you check consistency, make sure the arrays are healthy.&lt;br /&gt;
&lt;br /&gt;
Objects -&amp;gt; Physical Drive&lt;br /&gt;
&lt;br /&gt;
Then look to make sure they&#039;re all &amp;lt;tt&amp;gt;ONLIN&amp;lt;/tt&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓┌────────────Objects - PHYSICAL DRIVE SELECTION MENU─────────────┐▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                                                                │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                         Channel-1                              │▓▓▓▓▓▓&lt;br /&gt;
▓┌──Mana│                     ID ╔══════════════╗x                       │▓▓▓▓▓▓&lt;br /&gt;
▓│ Confi│                       0║* ONLIN A01-01║                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Initi│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Objec│                       1║* ONLIN A01-02║                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Rebui│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Check│                       2║* ONLIN A02-01║                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Advan│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓└──────│                       3║* ONLIN A02-02║                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                       4║* ONLIN A03-01║                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                       5║* ONLIN A03-02║■                       │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                       6║*             ║                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╚══════════════╝x                       │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓└────────────────────────────────────────────────────────────────┘▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓┌──────────────────────────────────────────────────────────────┐▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│Ch-1 ID-5  DISK      140013MB  SEAGATE  ST3146707LC      0003 │▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓└──────────────────────────────────────────────────────────────┘▓▓▓▓▓▓▓▓&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once that&#039;s done, hit Escape once, then the back arrow to move back to the Objects menu.&lt;br /&gt;
Then select Objects -&amp;gt; Logical Drive -&amp;gt; Logical Drive 1 -&amp;gt; Check Consistency -&amp;gt; YES.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²ÚÄLogical Drives(02)Ä¿²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²³ Logical Drive 1    ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²ÚÄÄÄÄObjectsÄÄÄ³ Logical Drive 2    ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²ÚÄÄManagemen³ Adapter      ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Configure ³ Logical Drive  ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Initialize³ Physical Drive ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Objects   ³ Channel        ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Rebuild   ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Check Consistency ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Advanced Menu     ³²²²²²²²ÚÄÄÄÄLogical Drive 1ÄÄÄÄÄ¿²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²³ Initialize    ÚÄCheck Consistency-1  ?Ä¿²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²³ Check Consiste³   YES                  ³²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²³ View/Update Pa³   NO                   ³²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²ÚÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄ¿²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²³Select YES Or NO³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then watch the progress. When it finishes, escape back to Logical Drive and repeat for Logical Drive 2. If you ^C or accidentally escape out, you can come back in by running the same commands and watch the progress again (the check won&#039;t restart).&lt;br /&gt;
&lt;br /&gt;
You can exit megamgr by escaping out or with ^C.&lt;br /&gt;
&lt;br /&gt;
=== 3ware ===&lt;br /&gt;
&lt;br /&gt;
We are using 3ware controllers on backup1 &amp;amp; backup2. Running the verify script will give you different output based on the type of controller:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
backup2 /d2# sh /root/verify.sh&lt;br /&gt;
Controller: c0&lt;br /&gt;
-------------&lt;br /&gt;
Driver:   1.50.01.002&lt;br /&gt;
Model:    7500-8&lt;br /&gt;
FW:       FE7X 1.05.00.068&lt;br /&gt;
BIOS:     BE7X 1.08.00.048&lt;br /&gt;
Monitor:  ME7X 1.01.00.040&lt;br /&gt;
Serial #: F11605A3180172&lt;br /&gt;
PCB:      Rev3&lt;br /&gt;
PCHIP:    1.30-33&lt;br /&gt;
ACHIP:    3.20&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# of units: 3&lt;br /&gt;
        Unit 0: JBOD 186.31 GB ( 390721968 blocks): OK&lt;br /&gt;
        Unit 1: RAID 5 465.77 GB ( 976790016 blocks): DEGRADED&lt;br /&gt;
        Unit 5: RAID 5 698.65 GB ( 1465185024 blocks): DEGRADED&lt;br /&gt;
&lt;br /&gt;
# of ports: 8&lt;br /&gt;
        Port 0: WDC WD2000JB-00KFA0 WD-WCAMT1451690 186.31 GB (390721968 blocks): OK(unit 0)&lt;br /&gt;
        Port 1: WDC WD2500JB-00GVC0 WD-WCAL78219488 232.88 GB (488397168 blocks): OK(unit 1)&lt;br /&gt;
        Port 2: WDC WD2000  0.00 MB (0 blocks): OK(NO UNIT)&lt;br /&gt;
        Port 3: WDC WD2500JB-00GVC0 WD-WMAL73882417 232.88 GB (488397168 blocks): OK(unit 1)&lt;br /&gt;
        Port 4: WDC WD2000  0.00 MB (0 blocks): OK(NO UNIT)&lt;br /&gt;
        Port 5: WDC WD2500JB-00GVA0 WD-WMAL71338097 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
        Port 6: WDC WD2500JB-32EVA0 WD-WMAEH1301595 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
        Port 7: WDC WD2500JB-00GVC0 WD-WCAL78165566 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
Controller: c1&lt;br /&gt;
-------------&lt;br /&gt;
Driver:   1.50.01.002&lt;br /&gt;
Model:    7500-8&lt;br /&gt;
FW:       FE7X 1.05.00.068&lt;br /&gt;
BIOS:     BE7X 1.08.00.048&lt;br /&gt;
Monitor:  ME7X 1.01.00.040&lt;br /&gt;
Serial #: F11605A3180167&lt;br /&gt;
PCB:      Rev3&lt;br /&gt;
PCHIP:    1.30-33&lt;br /&gt;
ACHIP:    3.20&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# of units: 2&lt;br /&gt;
        Unit 0: RAID 5 698.65 GB ( 1465185024 blocks): OK&lt;br /&gt;
        Unit 4: RAID 5 698.65 GB ( 1465185024 blocks): OK&lt;br /&gt;
&lt;br /&gt;
# of ports: 8&lt;br /&gt;
        Port 0: WDC WD2500JB-00GVA0 WD-WMAL71301258 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 1: WDC WD2500JB-00GVA0 WD-WMAL71322705 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 2: WDC WD2500JB-00GVA0 WD-WMAL71945050 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 3: WDC WD2500JB-00GVA0 WD-WMAL71316201 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 4: WDC WD2500JB-00GVC0 WD-WCAL78323749 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
        Port 5: WDC WD3200AAJB-00J3A0 WD-WCAV2V689068 298.09 GB (625142448 blocks): OK(unit 4)&lt;br /&gt;
        Port 6: WDC WD2500JB-00GVC0 WD-WCAL78234420 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
        Port 7: WDC WD2500JB-00GVC0 WD-WCAL78592213 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
backup2 /d2#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On backup2, just check that every unit and port reports OK; the script does not start a verify on this controller. (In the sample above, two units are DEGRADED and would need attention.)&lt;br /&gt;
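&lt;br /&gt;
If the wrapper script isn&#039;t handy, the same health check can be done directly with &amp;lt;tt&amp;gt;tw_cli&amp;lt;/tt&amp;gt;. This is just a sketch: exact syntax varies by tw_cli version, and the controller number here is an example.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# List controllers, then show unit/port status on one&lt;br /&gt;
tw_cli show&lt;br /&gt;
tw_cli /c0 show&lt;br /&gt;
&lt;br /&gt;
# Flag anything that isn&#039;t healthy&lt;br /&gt;
tw_cli /c0 show | egrep &#039;DEGRADED|REBUILDING|DEVICE-ERROR&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;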
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@backup3 ~]# sh /root/verify.sh&lt;br /&gt;
/c2 Driver Version = 1.26.02.002&lt;br /&gt;
/c2 Model = 8006-2LP&lt;br /&gt;
/c2 Available Memory = 512KB&lt;br /&gt;
/c2 Firmware Version = FE8S 1.05.00.068&lt;br /&gt;
/c2 Bios Version = BE7X 1.08.00.048&lt;br /&gt;
/c2 Boot Loader Version = ME7X 1.01.00.040&lt;br /&gt;
/c2 Serial Number = L018501C6481395&lt;br /&gt;
/c2 PCB Version = Rev5&lt;br /&gt;
/c2 PCHIP Version = 1.30-66&lt;br /&gt;
/c2 ACHIP Version = 3.20&lt;br /&gt;
/c2 Total Optimal Units = 1&lt;br /&gt;
/c2 Not Optimal Units = 0&lt;br /&gt;
&lt;br /&gt;
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u1    RAID-1    OK             -       -       -       931.512   ON     -&lt;br /&gt;
&lt;br /&gt;
Port   Status           Unit   Size        Blocks        Serial&lt;br /&gt;
---------------------------------------------------------------&lt;br /&gt;
p0     OK               u1     931.51 GB   1953525168    WD-WMAW31148820&lt;br /&gt;
p1     OK               u1     931.51 GB   1953525168    WD-WCATR0277515&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ctl  Date                        Severity  Alarm Message&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Sending start verify message to /c2/u1 ... Done.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when done run:&lt;br /&gt;
tw_cli /c2 show alarms&lt;br /&gt;
&lt;br /&gt;
[root@backup3 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On backup3 the script starts the verify automatically; just run &amp;lt;tt&amp;gt;tw_cli /c2 show alarms&amp;lt;/tt&amp;gt; as instructed to see the results of the verify.&lt;br /&gt;
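&lt;br /&gt;
A verify can also be started and checked by hand. A sketch (the controller and unit numbers are examples; match them to the output above):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Start a verify on unit 1 of controller 2&lt;br /&gt;
tw_cli /c2/u1 start verify&lt;br /&gt;
&lt;br /&gt;
# Check progress (the %V/I/M column) and any resulting alarms&lt;br /&gt;
tw_cli /c2/u1 show&lt;br /&gt;
tw_cli /c2 show alarms&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;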
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@backup1 /data/deprecated]# sh /root/verify.sh&lt;br /&gt;
/c0 Driver Version = 2.26.02.010&lt;br /&gt;
/c0 Model = 9650SE-8LPML&lt;br /&gt;
/c0 Available Memory = 224MB&lt;br /&gt;
/c0 Firmware Version = FE9X 4.06.00.004&lt;br /&gt;
/c0 Bios Version = BE9X 4.05.00.015&lt;br /&gt;
/c0 Boot Loader Version = BL9X 3.08.00.001&lt;br /&gt;
/c0 Serial Number = L326025A8270177&lt;br /&gt;
/c0 PCB Version = Rev 032&lt;br /&gt;
/c0 PCHIP Version = 2.00&lt;br /&gt;
/c0 ACHIP Version = 1.90&lt;br /&gt;
/c0 Number of Ports = 8&lt;br /&gt;
/c0 Number of Drives = 6&lt;br /&gt;
/c0 Number of Units = 1&lt;br /&gt;
/c0 Total Optimal Units = 1&lt;br /&gt;
/c0 Not Optimal Units = 0&lt;br /&gt;
/c0 JBOD Export Policy = off&lt;br /&gt;
/c0 Disk Spinup Policy = 1&lt;br /&gt;
/c0 Spinup Stagger Time Policy (sec) = 1&lt;br /&gt;
/c0 Auto-Carving Policy = off&lt;br /&gt;
/c0 Auto-Carving Size = 2048 GB&lt;br /&gt;
/c0 Auto-Rebuild Policy = on&lt;br /&gt;
/c0 Controller Bus Type = PCIe&lt;br /&gt;
/c0 Controller Bus Width = 1 lane&lt;br /&gt;
/c0 Controller Bus Speed = 2.5 Gbps/lane&lt;br /&gt;
&lt;br /&gt;
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u0    RAID-5    OK             -       -       64K     4656.56   ON     ON&lt;br /&gt;
&lt;br /&gt;
Port   Status           Unit   Size        Blocks        Serial&lt;br /&gt;
---------------------------------------------------------------&lt;br /&gt;
p0     OK               u0     931.51 GB   1953525168    9QJ1Y017&lt;br /&gt;
p1     DEVICE-ERROR     u0     931.51 GB   1953525168    9QJ1ZN07&lt;br /&gt;
p2     OK               u0     931.51 GB   1953525168    9QJ2XK1R&lt;br /&gt;
p3     OK               u0     931.51 GB   1953525168    9QJ2010B&lt;br /&gt;
p4     OK               u0     1.36 TB     2930277168    6XW0L36T&lt;br /&gt;
p5     OK               u0     931.51 GB   1953525168    WD-WMATV2444836&lt;br /&gt;
p6     NOT-PRESENT      -      -           -             -&lt;br /&gt;
p7     NOT-PRESENT      -      -           -             -&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ctl  Date                        Severity  Alarm Message&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
c0   [Sat May 12 11:27:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x6AE571C&lt;br /&gt;
c0   [Sat May 12 19:16:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x40E62A23&lt;br /&gt;
c0   [Sat May 12 21:40:56 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon May 14 00:53:53 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFA7&lt;br /&gt;
c0   [Mon May 14 00:58:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAA&lt;br /&gt;
c0   [Mon May 14 04:35:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x8FEF2CF&lt;br /&gt;
c0   [Mon May 14 04:38:22 2012]  WARNING   Sector repair completed: port=0, LBA=0x8FEF2D1&lt;br /&gt;
c0   [Tue May 15 22:53:46 2012]  WARNING   Sector repair completed: port=0, LBA=0x13C2622&lt;br /&gt;
c0   [Wed May 16 00:39:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A67F&lt;br /&gt;
c0   [Wed May 16 00:39:37 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A685&lt;br /&gt;
c0   [Wed May 16 00:47:18 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A687&lt;br /&gt;
c0   [Sat May 19 00:01:44 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat May 19 04:46:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A68E&lt;br /&gt;
c0   [Sat May 19 13:37:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAC&lt;br /&gt;
c0   [Sat May 19 13:37:28 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAE&lt;br /&gt;
c0   [Sat May 19 13:37:47 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFB1&lt;br /&gt;
c0   [Sat May 19 13:38:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFB3&lt;br /&gt;
c0   [Sat May 19 21:47:45 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed May 23 12:21:41 2012]  INFO      Cache synchronization completed: unit=0&lt;br /&gt;
c0   [Fri May 25 00:08:19 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA76C&lt;br /&gt;
c0   [Fri May 25 00:08:34 2012]  WARNING   Sector repair completed: port=0, LBA=0x12E4901&lt;br /&gt;
c0   [Fri May 25 00:09:33 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA773&lt;br /&gt;
c0   [Fri May 25 00:39:12 2012]  WARNING   Sector repair completed: port=0, LBA=0x42C597B&lt;br /&gt;
c0   [Sat May 26 00:01:45 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat May 26 00:42:05 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1AC&lt;br /&gt;
c0   [Sat May 26 00:51:43 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1AE&lt;br /&gt;
c0   [Sat May 26 01:54:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x2F0D302&lt;br /&gt;
c0   [Sat May 26 02:06:38 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA777&lt;br /&gt;
c0   [Sat May 26 02:07:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x12E48FE&lt;br /&gt;
c0   [Sat May 26 04:20:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x2F0D306&lt;br /&gt;
c0   [Sat May 26 04:32:58 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1B1&lt;br /&gt;
c0   [Sat May 26 04:33:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1B3&lt;br /&gt;
c0   [Sat May 26 04:33:44 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1BA&lt;br /&gt;
c0   [Sat May 26 05:24:07 2012]  WARNING   Sector repair completed: port=1, LBA=0x3F83862&lt;br /&gt;
c0   [Sat May 26 05:25:09 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 06:08:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x4CDC6A2&lt;br /&gt;
c0   [Sat May 26 09:49:35 2012]  WARNING   Sector repair completed: port=1, LBA=0x6CACD4A&lt;br /&gt;
c0   [Sat May 26 18:10:44 2012]  WARNING   Sector repair completed: port=1, LBA=0x18F425EA&lt;br /&gt;
c0   [Sat May 26 19:45:40 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:22:52 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:15 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:22 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:35 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:41 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:49 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:57 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:24:02 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:54:41 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 22:00:30 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Jun  2 00:01:43 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun  2 00:30:17 2012]  WARNING   Sector repair completed: port=0, LBA=0x2B911E4&lt;br /&gt;
c0   [Sat Jun  2 00:50:57 2012]  WARNING   Sector repair completed: port=0, LBA=0x5A807CA6&lt;br /&gt;
c0   [Sat Jun  2 04:13:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x2D18291&lt;br /&gt;
c0   [Sat Jun  2 04:13:35 2012]  WARNING   Sector repair completed: port=0, LBA=0x2D1829F&lt;br /&gt;
c0   [Sat Jun  2 21:48:02 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Jun  4 04:40:34 2012]  WARNING   Sector repair completed: port=1, LBA=0x4AF8098F&lt;br /&gt;
c0   [Tue Jun  5 00:28:19 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5CD&lt;br /&gt;
c0   [Tue Jun  5 00:33:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5CF&lt;br /&gt;
c0   [Thu Jun  7 00:34:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B5F&lt;br /&gt;
c0   [Thu Jun  7 00:38:50 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B61&lt;br /&gt;
c0   [Fri Jun  8 00:07:13 2012]  WARNING   Sector repair completed: port=0, LBA=0xC131F6B&lt;br /&gt;
c0   [Sat Jun  9 00:01:41 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun  9 00:29:11 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C7614D&lt;br /&gt;
c0   [Sat Jun  9 00:38:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C76152&lt;br /&gt;
c0   [Sat Jun  9 04:02:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5D1&lt;br /&gt;
c0   [Sat Jun  9 04:02:52 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5D3&lt;br /&gt;
c0   [Sat Jun  9 04:07:32 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E12&lt;br /&gt;
c0   [Sat Jun  9 04:07:57 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E15&lt;br /&gt;
c0   [Sat Jun  9 04:08:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E17&lt;br /&gt;
c0   [Sat Jun  9 04:08:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E19&lt;br /&gt;
c0   [Sat Jun  9 04:15:04 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B64&lt;br /&gt;
c0   [Sat Jun  9 04:15:26 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B66&lt;br /&gt;
c0   [Sat Jun  9 04:15:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B68&lt;br /&gt;
c0   [Sat Jun  9 04:15:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B6C&lt;br /&gt;
c0   [Sat Jun  9 04:16:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B6E&lt;br /&gt;
c0   [Sat Jun  9 21:48:52 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Jun 14 00:40:10 2012]  WARNING   Sector repair completed: port=0, LBA=0x334F14B&lt;br /&gt;
c0   [Sat Jun 16 00:01:38 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 16 21:16:19 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Jun 19 02:03:43 2012]  WARNING   Sector repair completed: port=1, LBA=0xFE41EAD&lt;br /&gt;
c0   [Wed Jun 20 02:30:02 2012]  WARNING   Sector repair completed: port=1, LBA=0xD99145C&lt;br /&gt;
c0   [Sat Jun 23 00:01:36 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 23 04:27:04 2012]  WARNING   Sector repair completed: port=1, LBA=0x2FAD311&lt;br /&gt;
c0   [Sat Jun 23 06:52:38 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC8D&lt;br /&gt;
c0   [Sat Jun 23 06:53:03 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC91&lt;br /&gt;
c0   [Sat Jun 23 06:53:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC94&lt;br /&gt;
c0   [Sat Jun 23 17:00:22 2012]  WARNING   Sector repair completed: port=1, LBA=0xF9AC7C9&lt;br /&gt;
c0   [Sat Jun 23 21:15:19 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Jun 30 00:01:34 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 30 05:24:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FAA9E7&lt;br /&gt;
c0   [Sat Jun 30 14:49:39 2012]  WARNING   Sector repair completed: port=1, LBA=0x869931C&lt;br /&gt;
c0   [Sat Jun 30 21:31:05 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Jul  3 03:40:25 2012]  WARNING   Sector repair completed: port=1, LBA=0xD36C7F7&lt;br /&gt;
c0   [Fri Jul  6 02:50:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x3562470&lt;br /&gt;
c0   [Fri Jul  6 22:18:26 2012]  WARNING   Sector repair completed: port=1, LBA=0x3563173&lt;br /&gt;
c0   [Sat Jul  7 00:01:31 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul  7 00:50:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x76EE88&lt;br /&gt;
c0   [Sat Jul  7 00:50:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x76EE8F&lt;br /&gt;
c0   [Sat Jul  7 21:39:36 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sun Jul  8 02:51:05 2012]  WARNING   Sector repair completed: port=0, LBA=0x67759D&lt;br /&gt;
c0   [Sun Jul  8 02:53:55 2012]  WARNING   Sector repair completed: port=0, LBA=0x67759B&lt;br /&gt;
c0   [Tue Jul 10 16:17:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x15C8C695&lt;br /&gt;
c0   [Wed Jul 11 22:51:22 2012]  WARNING   Sector repair completed: port=1, LBA=0x355BBD0&lt;br /&gt;
c0   [Sat Jul 14 00:01:28 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 14 01:33:40 2012]  WARNING   Sector repair completed: port=1, LBA=0x1333BCF4&lt;br /&gt;
c0   [Sat Jul 14 03:36:23 2012]  WARNING   Sector repair completed: port=1, LBA=0x2174773&lt;br /&gt;
c0   [Sat Jul 14 11:26:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x7429AB7&lt;br /&gt;
c0   [Sat Jul 14 16:53:50 2012]  WARNING   Sector repair completed: port=1, LBA=0xA17EB3F&lt;br /&gt;
c0   [Sat Jul 14 21:19:25 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Jul 18 05:08:47 2012]  WARNING   Sector repair completed: port=1, LBA=0x17D62EDC&lt;br /&gt;
c0   [Wed Jul 18 05:14:15 2012]  WARNING   Sector repair completed: port=1, LBA=0x17D62EE1&lt;br /&gt;
c0   [Thu Jul 19 03:24:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x7733C3D&lt;br /&gt;
c0   [Thu Jul 19 03:25:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x773CEA5&lt;br /&gt;
c0   [Thu Jul 19 03:28:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x7733C42&lt;br /&gt;
c0   [Thu Jul 19 03:28:41 2012]  WARNING   Sector repair completed: port=0, LBA=0x773CEAF&lt;br /&gt;
c0   [Sat Jul 21 00:01:26 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 21 03:07:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC6936&lt;br /&gt;
c0   [Sat Jul 21 03:07:52 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC6938&lt;br /&gt;
c0   [Sat Jul 21 03:08:11 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC693A&lt;br /&gt;
c0   [Sat Jul 21 16:43:56 2012]  WARNING   Sector repair completed: port=0, LBA=0xD04C914&lt;br /&gt;
c0   [Sat Jul 21 16:45:31 2012]  WARNING   Sector repair completed: port=1, LBA=0xD456973&lt;br /&gt;
c0   [Sat Jul 21 21:14:29 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Jul 25 03:37:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x1F8E6C43&lt;br /&gt;
c0   [Sat Jul 28 00:01:24 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 28 01:45:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x11584AD&lt;br /&gt;
c0   [Sat Jul 28 18:54:25 2012]  WARNING   Sector repair completed: port=1, LBA=0x447C3E6C&lt;br /&gt;
c0   [Sat Jul 28 21:13:46 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug  1 03:20:11 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FEF&lt;br /&gt;
c0   [Fri Aug  3 00:50:03 2012]  WARNING   Sector repair completed: port=0, LBA=0xCED0ACA&lt;br /&gt;
c0   [Sat Aug  4 00:01:22 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug  4 00:52:51 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF3&lt;br /&gt;
c0   [Sat Aug  4 00:53:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF5&lt;br /&gt;
c0   [Sat Aug  4 00:53:33 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF7&lt;br /&gt;
c0   [Sat Aug  4 00:53:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF9&lt;br /&gt;
c0   [Sat Aug  4 00:54:00 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFB&lt;br /&gt;
c0   [Sat Aug  4 00:54:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFD&lt;br /&gt;
c0   [Sat Aug  4 00:54:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFF&lt;br /&gt;
c0   [Sat Aug  4 04:43:12 2012]  WARNING   Sector repair completed: port=1, LBA=0x16974289&lt;br /&gt;
c0   [Sat Aug  4 04:58:17 2012]  WARNING   Sector repair completed: port=1, LBA=0x1697428E&lt;br /&gt;
c0   [Sat Aug  4 20:54:53 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug  8 03:21:55 2012]  ERROR     Drive timeout detected: port=1&lt;br /&gt;
c0   [Wed Aug  8 15:31:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x1A366CD3&lt;br /&gt;
c0   [Sat Aug 11 00:01:21 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 11 20:40:51 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Aug 16 05:10:55 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C22593&lt;br /&gt;
c0   [Sat Aug 18 00:01:18 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 18 03:00:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C225A5&lt;br /&gt;
c0   [Sat Aug 18 03:43:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x23EE91E&lt;br /&gt;
c0   [Sat Aug 18 03:43:23 2012]  WARNING   Sector repair completed: port=1, LBA=0x23EE920&lt;br /&gt;
c0   [Sat Aug 18 17:00:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x137D066A&lt;br /&gt;
c0   [Sat Aug 18 17:00:29 2012]  WARNING   Sector repair completed: port=1, LBA=0x137D066D&lt;br /&gt;
c0   [Sat Aug 18 21:13:01 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug 22 01:36:08 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A0F&lt;br /&gt;
c0   [Wed Aug 22 01:37:42 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A13&lt;br /&gt;
c0   [Fri Aug 24 04:01:36 2012]  WARNING   Sector repair completed: port=1, LBA=0x55C1A5DF&lt;br /&gt;
c0   [Fri Aug 24 05:02:06 2012]  WARNING   Sector repair completed: port=1, LBA=0xCE3378A&lt;br /&gt;
c0   [Sat Aug 25 00:01:17 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 25 00:31:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x50F65D&lt;br /&gt;
c0   [Sat Aug 25 00:39:52 2012]  WARNING   Sector repair completed: port=0, LBA=0x678FF4&lt;br /&gt;
c0   [Sat Aug 25 03:43:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A15&lt;br /&gt;
c0   [Sat Aug 25 03:43:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A19&lt;br /&gt;
c0   [Sat Aug 25 03:43:58 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A1B&lt;br /&gt;
c0   [Sat Aug 25 03:44:30 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A21&lt;br /&gt;
c0   [Sat Aug 25 20:58:14 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug 29 04:57:15 2012]  WARNING   Sector repair completed: port=1, LBA=0xF3957EB&lt;br /&gt;
c0   [Sat Sep  1 00:01:15 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep  1 03:21:52 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC86&lt;br /&gt;
c0   [Sat Sep  1 03:22:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC88&lt;br /&gt;
c0   [Sat Sep  1 03:22:34 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC8A&lt;br /&gt;
c0   [Sat Sep  1 03:22:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC8C&lt;br /&gt;
c0   [Sat Sep  1 17:17:22 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD1&lt;br /&gt;
c0   [Sat Sep  1 17:17:45 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD3&lt;br /&gt;
c0   [Sat Sep  1 17:18:04 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD5&lt;br /&gt;
c0   [Sat Sep  1 21:36:56 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Sep  6 00:07:30 2012]  WARNING   Sector repair completed: port=0, LBA=0xDA3C64B&lt;br /&gt;
c0   [Thu Sep  6 00:32:56 2012]  WARNING   Sector repair completed: port=1, LBA=0x6BBA816&lt;br /&gt;
c0   [Sat Sep  8 00:01:13 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep  8 00:09:56 2012]  WARNING   Sector repair completed: port=0, LBA=0xDEBC958&lt;br /&gt;
c0   [Sat Sep  8 04:38:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x38D254F&lt;br /&gt;
c0   [Sat Sep  8 20:44:50 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Sep 10 01:26:34 2012]  WARNING   Sector repair completed: port=1, LBA=0xFFD8D5E&lt;br /&gt;
c0   [Wed Sep 12 00:33:48 2012]  WARNING   Sector repair completed: port=1, LBA=0xE8DB928&lt;br /&gt;
c0   [Wed Sep 12 00:36:33 2012]  WARNING   Sector repair completed: port=1, LBA=0x6D49411&lt;br /&gt;
c0   [Fri Sep 14 01:59:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x1467F1C&lt;br /&gt;
c0   [Fri Sep 14 02:08:27 2012]  WARNING   Sector repair completed: port=1, LBA=0x14C8ABD&lt;br /&gt;
c0   [Fri Sep 14 03:54:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x1580C915&lt;br /&gt;
c0   [Sat Sep 15 00:01:11 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 15 02:38:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178973&lt;br /&gt;
c0   [Sat Sep 15 02:59:02 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178975&lt;br /&gt;
c0   [Sat Sep 15 04:47:08 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA0356&lt;br /&gt;
c0   [Sat Sep 15 04:47:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA0359&lt;br /&gt;
c0   [Sat Sep 15 10:41:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x6DFD1EC&lt;br /&gt;
c0   [Sat Sep 15 13:25:23 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD100&lt;br /&gt;
c0   [Sat Sep 15 13:25:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD104&lt;br /&gt;
c0   [Sat Sep 15 13:25:54 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD106&lt;br /&gt;
c0   [Sat Sep 15 17:10:50 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178977&lt;br /&gt;
c0   [Sat Sep 15 20:59:57 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Sep 18 01:17:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x803B05B&lt;br /&gt;
c0   [Sat Sep 22 00:01:10 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 22 20:54:31 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Sep 25 01:56:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x26E3909&lt;br /&gt;
c0   [Sat Sep 29 00:01:08 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 29 02:04:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x146AC03&lt;br /&gt;
c0   [Sat Sep 29 10:58:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x6D4EB0E&lt;br /&gt;
c0   [Sat Sep 29 10:59:02 2012]  WARNING   Sector repair completed: port=0, LBA=0x6D4EB14&lt;br /&gt;
c0   [Sat Sep 29 11:22:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x6F79623&lt;br /&gt;
c0   [Sat Sep 29 13:50:48 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D65E&lt;br /&gt;
c0   [Sat Sep 29 13:51:11 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D661&lt;br /&gt;
c0   [Sat Sep 29 13:51:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D663&lt;br /&gt;
c0   [Sat Sep 29 20:57:34 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Oct  1 04:47:24 2012]  WARNING   Sector repair completed: port=0, LBA=0xC5BC6F2&lt;br /&gt;
c0   [Tue Oct  2 02:00:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x1547667&lt;br /&gt;
c0   [Tue Oct  2 02:01:56 2012]  WARNING   Sector repair completed: port=0, LBA=0x154766F&lt;br /&gt;
c0   [Tue Oct  2 05:02:31 2012]  WARNING   Sector repair completed: port=1, LBA=0xD67D054&lt;br /&gt;
c0   [Tue Oct  2 05:04:14 2012]  WARNING   Sector repair completed: port=1, LBA=0xD67D056&lt;br /&gt;
c0   [Wed Oct  3 01:22:12 2012]  WARNING   Sector repair completed: port=1, LBA=0x12AAF8CA&lt;br /&gt;
c0   [Thu Oct  4 04:29:22 2012]  WARNING   Sector repair completed: port=0, LBA=0x13E6F992&lt;br /&gt;
c0   [Thu Oct  4 05:10:51 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C252A4&lt;br /&gt;
c0   [Sat Oct  6 00:01:07 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct  6 19:41:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x5A5C3AE8&lt;br /&gt;
c0   [Sat Oct  6 21:01:05 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Oct  8 00:32:06 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C60D3E&lt;br /&gt;
c0   [Tue Oct  9 03:51:03 2012]  WARNING   Sector repair completed: port=1, LBA=0x89B5EC9&lt;br /&gt;
c0   [Thu Oct 11 04:21:17 2012]  WARNING   Sector repair completed: port=1, LBA=0x13F85833&lt;br /&gt;
c0   [Sat Oct 13 00:01:05 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 13 05:12:40 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA5134&lt;br /&gt;
c0   [Sat Oct 13 21:08:35 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Oct 16 03:53:50 2012]  WARNING   Sector repair completed: port=1, LBA=0x148AA1BD&lt;br /&gt;
c0   [Thu Oct 18 03:20:30 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C8DABCB&lt;br /&gt;
c0   [Thu Oct 18 04:52:50 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879057&lt;br /&gt;
c0   [Sat Oct 20 00:01:04 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 20 02:19:25 2012]  WARNING   Sector repair completed: port=1, LBA=0x174B012&lt;br /&gt;
c0   [Sat Oct 20 03:41:38 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D93B&lt;br /&gt;
c0   [Sat Oct 20 03:42:01 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D93D&lt;br /&gt;
c0   [Sat Oct 20 03:42:40 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D940&lt;br /&gt;
c0   [Sat Oct 20 03:42:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D942&lt;br /&gt;
c0   [Sat Oct 20 03:43:12 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D944&lt;br /&gt;
c0   [Sat Oct 20 03:43:26 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D948&lt;br /&gt;
c0   [Sat Oct 20 16:37:52 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879060&lt;br /&gt;
c0   [Sat Oct 20 16:38:15 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879062&lt;br /&gt;
c0   [Sat Oct 20 21:00:18 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Oct 20 23:49:01 2012]  WARNING   Sector repair completed: port=1, LBA=0x4473E908&lt;br /&gt;
c0   [Sun Oct 21 03:42:26 2012]  WARNING   Sector repair completed: port=1, LBA=0x175BADD5&lt;br /&gt;
c0   [Tue Oct 23 01:09:04 2012]  WARNING   Sector repair completed: port=1, LBA=0x6E524860&lt;br /&gt;
c0   [Fri Oct 26 03:21:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x802C61&lt;br /&gt;
c0   [Fri Oct 26 04:22:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x176353CD&lt;br /&gt;
c0   [Sat Oct 27 00:01:03 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 27 00:49:35 2012]  WARNING   Sector repair completed: port=0, LBA=0x802C65&lt;br /&gt;
c0   [Sat Oct 27 17:02:24 2012]  WARNING   Sector repair completed: port=1, LBA=0xC1FF26D&lt;br /&gt;
c0   [Sat Oct 27 17:09:06 2012]  WARNING   Sector repair completed: port=0, LBA=0xDF621AD&lt;br /&gt;
c0   [Sat Oct 27 21:30:57 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Oct 30 00:20:46 2012]  WARNING   Sector repair completed: port=0, LBA=0xE9FE2AB&lt;br /&gt;
c0   [Wed Oct 31 02:02:03 2012]  WARNING   Sector repair completed: port=0, LBA=0x1460C25&lt;br /&gt;
c0   [Wed Oct 31 02:04:05 2012]  WARNING   Sector repair completed: port=0, LBA=0x1460C28&lt;br /&gt;
c0   [Thu Nov  1 00:48:34 2012]  WARNING   Sector repair completed: port=1, LBA=0xA7C92BE&lt;br /&gt;
c0   [Thu Nov  1 05:04:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C252C2&lt;br /&gt;
&lt;br /&gt;
[root@backup1 /data/deprecated]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for failed drives and errors. From the output above, both port 0 and port 1 are racking up sector repairs, so we should probably replace both drives; amazingly, the RAID array still reports healthy. You can also see the automatic verifies (started every Saturday).&lt;br /&gt;
&lt;br /&gt;
Note: when rebuilding a degraded array, the CLI will show no rebuild progress:&lt;br /&gt;
&amp;lt;pre&amp;gt;Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u0    RAID-5    REBUILDING     0       -       64K     4656.56   OFF    ON&amp;lt;/pre&amp;gt;&lt;br /&gt;
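Even with %RCmpl stuck at 0, the unit State column itself is reliable. As a rough sketch (the status line below is a hypothetical sample in the same layout as the CLI unit listing above, not live controller output), awk can flag any unit that is not OK:&lt;br /&gt;

```shell
# Hypothetical sample line, same layout as the CLI unit listing above.
status='u0    RAID-5    REBUILDING     0       -       64K     4656.56   OFF    ON'

# Field 3 is the Status column; print any unit that is not OK.
state=$(printf '%s\n' "$status" | awk '$1 ~ /^u[0-9]+$/ && $3 != "OK" { print $1 ": " $3 }')
echo "$state"
```

The same filter run against a healthy listing prints nothing, which makes it easy to drop into a cron check.&lt;br /&gt;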
&lt;br /&gt;
&lt;br /&gt;
=== areca ===&lt;br /&gt;
&lt;br /&gt;
We are using an areca controller on backup3. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@newbackup3 ~]# sh /root/verify.sh&lt;br /&gt;
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State&lt;br /&gt;
===============================================================================&lt;br /&gt;
  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(19.7%)&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
 #  Name             Disks TotalCap  FreeCap DiskChannels       State&lt;br /&gt;
===============================================================================&lt;br /&gt;
 1  Raid Set # 00        6 6000.0GB    0.0GB 123456             Checking&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
Date-Time            Device           Event Type            Elapsed Time Errors&lt;br /&gt;
===============================================================================&lt;br /&gt;
2012-12-05 20:40:58  ARC-1160-VOL#00  Start Checking&lt;br /&gt;
2012-12-01 05:06:04  ARC-1160-VOL#00  Complete Init         027:30:45&lt;br /&gt;
2012-11-30 01:35:19  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-06 01:34:52  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-30 01:33:36  ARC-1160-VOL#00  Stop Initialization   000:31:48&lt;br /&gt;
2012-11-30 01:01:47  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-06 00:58:13  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-30 00:57:26  ARC-1160-VOL#00  Stop Initialization   000:57:07&lt;br /&gt;
2012-11-30 00:00:19  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 23:56:48  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:52:58  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:50:14  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:43:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 23:10:07  ARC-1160-VOL#00  Stop Initialization   000:00:56&lt;br /&gt;
2012-11-29 23:09:11  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 23:08:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 23:08:10  ARC-1160-VOL#00  Stop Initialization   000:20:41&lt;br /&gt;
2012-11-29 22:47:29  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 22:46:59  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:45:55  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:44:53  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:42:06  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:40:50  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 22:40:04  ARC-1160-VOL#00  Stop Initialization   000:24:25&lt;br /&gt;
2012-11-29 22:15:38  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 22:15:11  000:000001215B00 Restart Init LBA Point&lt;br /&gt;
2026-08-05 22:15:10  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 21:56:38  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 21:56:12  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 21:56:04  IDE Channel #03  Device Inserted&lt;br /&gt;
2012-11-29 21:55:13  IDE Channel #04  Device Inserted&lt;br /&gt;
2012-11-29 21:55:03  IDE Channel #02  Device Inserted&lt;br /&gt;
2026-08-05 21:53:09  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:51:46  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:49:56  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:48:29  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:46:29  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:44:49  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:43:01  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:36:25  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:31:18  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:30:08  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:08:40  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:06:11  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:05:14  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:03:58  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:00:56  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:57:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:56:15  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:55:05  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 17:24:36  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 17:22:43  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:50:42  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:47:33  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:43:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:18:52  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:17:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:13:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:10:26  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:09:23  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:08:09  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:07:12  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:05:51  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:04:27  H/W MONITOR      Raid Powered On&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for failed drives and errors. &lt;br /&gt;
&lt;br /&gt;
When it proceeds to verifying, you can confirm with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@newbackup3 ~]# cli64 vsf info&lt;br /&gt;
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State&lt;br /&gt;
===============================================================================&lt;br /&gt;
  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(22.5%)&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
[root@newbackup3 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
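If you want to watch the verify from a script rather than re-running the command by hand, the percentage can be pulled out of the State column with sed. This is a sketch against a hypothetical sample line in the same format as the output above:&lt;br /&gt;

```shell
# Hypothetical sample line, same format as the `cli64 vsf info` output above.
line='  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(22.5%)'

# Extract the percentage; prints nothing once the state returns to Normal.
pct=$(printf '%s\n' "$line" | sed -n 's/.*Checking(\([0-9.]*\)%).*/\1/p')
echo "$pct"
```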
&lt;br /&gt;
== Update OS list ==&lt;br /&gt;
&lt;br /&gt;
# check for any new VZ templates we want to offer: &amp;lt;tt&amp;gt;vzup2date -z&amp;lt;/tt&amp;gt;&lt;br /&gt;
# see if there are any OSes we want to include in our colo install list. Update 2 places: &amp;lt;tt&amp;gt;signup/html/colo_quote.html&amp;lt;/tt&amp;gt; &amp;amp; &amp;lt;tt&amp;gt;signup/html/step1.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
# update the mgmt database (ref_templates table, ref_systems table).&lt;br /&gt;
&lt;br /&gt;
= Infrequent tasks =&lt;br /&gt;
&lt;br /&gt;
== Free up space on gateway ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;newgateway /var/spool# cd clientmqueue/&lt;br /&gt;
newgateway /var/spool/clientmqueue# sh&lt;br /&gt;
# for f in `ls`; do rm $f; done&lt;br /&gt;
exit&amp;lt;/pre&amp;gt;&lt;br /&gt;
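The backtick loop above forks one rm per file and will choke on odd filenames; a safer sketch (assuming find(1) supports -delete, which both GNU and FreeBSD find do) demonstrated on a scratch directory rather than the real /var/spool/clientmqueue:&lt;br /&gt;

```shell
# Sketch: empty a spool directory in one pass, on a scratch dir for illustration.
dir=$(mktemp -d)
touch "$dir/msg one" "$dir/msg two"

# -delete avoids both the per-file fork and argument-list limits.
find "$dir" -type f -delete

ls "$dir"
```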
&lt;br /&gt;
== Free up space on mail ==&lt;br /&gt;
&lt;br /&gt;
You can clear out root mail:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /var/log# ll -h /var/mail/root&lt;br /&gt;
-rw-------  1 root  mail    543K Dec 19 13:05 /var/mail/root&lt;br /&gt;
mail /var/log# rm /var/mail/root&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Or you can archive mail logs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /var/log# ls -l htt*&lt;br /&gt;
-rw-r--r--  1 root  wheel  297436931 Dec 19 13:26 httpd-access.log&lt;br /&gt;
-rw-r--r--  1 root  wheel    9824324 Jul  4 11:34 httpd-access.log.old.0.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    6884137 Mar 17  2012 httpd-access.log.old.1.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   18557444 Dec  3  2009 httpd-access.log.old.10.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14740263 Jan  9  2007 httpd-access.log.old.11.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14209465 Nov 28  2007 httpd-access.log.old.12.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   16874396 Feb 19  2012 httpd-access.log.old.3.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14554859 Jul 22  2011 httpd-access.log.old.4.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10513227 Feb 18  2011 httpd-access.log.old.5.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    7201946 Oct 29  2010 httpd-access.log.old.6.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10062537 May  6  2010 httpd-access.log.old.7.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10157042 Aug 12  2010 httpd-access.log.old.8.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   11909534 Mar  4  2010 httpd-access.log.old.9.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   59030930 Dec 19 13:01 httpd-error.log&lt;br /&gt;
-rw-r--r--  1 root  wheel    3413134 Mar  4  2010 httpd-error.log.0.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel     795515 May  1  2007 httpd-error.log.1.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    1142153 Nov 30  2007 httpd-error.log.2.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    2325801 Feb 18  2011 httpd-error.log.gz&lt;br /&gt;
&lt;br /&gt;
mail /var/log# sh&lt;br /&gt;
# for f in 12 11 10 9 8 7 6 5 4 3 2 1 0; do g=`echo $f+1|bc`; mv httpd-access.log.old.$f.gz httpd-access.log.old.$g.gz; done&lt;br /&gt;
# mv httpd-access.log httpd-access.log.old.0&lt;br /&gt;
# touch httpd-access.log&lt;br /&gt;
# apachectl restart&lt;br /&gt;
# gzip httpd-access.log.old.0&lt;br /&gt;
&lt;br /&gt;
# for f in 2 1 0; do g=`echo $f+1|bc`; mv httpd-error.log.$f.gz httpd-error.log.$g.gz; done&lt;br /&gt;
# mv httpd-error.log httpd-error.log.0&lt;br /&gt;
# touch httpd-error.log&lt;br /&gt;
# apachectl restart&lt;br /&gt;
# gzip httpd-error.log.0&lt;br /&gt;
# exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
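The renumbering loop above hard-codes the highest index, so it has to be edited whenever a new archive appears. A sketch that discovers the indexes itself, demonstrated on scratch files with a hypothetical demo.log name rather than the real httpd logs:&lt;br /&gt;

```shell
# Rotate numbered .gz archives up by one, highest index first so
# nothing is overwritten. Scratch files stand in for the real logs.
rotate() {
    base=$1
    for i in $(ls "$base".*.gz 2>/dev/null | sed 's/^.*\.\([0-9][0-9]*\)\.gz$/\1/' | sort -rn); do
        mv "$base.$i.gz" "$base.$((i + 1)).gz"
    done
}

cd "$(mktemp -d)" || exit 1
touch demo.log.0.gz demo.log.1.gz demo.log.2.gz
rotate demo.log
ls
```

After rotating you would mv the live log to .0, touch a fresh one, restart apache, and gzip, exactly as in the steps above.&lt;br /&gt;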
&lt;br /&gt;
== Free up space on bwdb2 ==&lt;br /&gt;
&lt;br /&gt;
You can either remove items from &amp;lt;tt&amp;gt;/usr/home/archive&amp;lt;/tt&amp;gt; or you can scp them to backup3:/data/bwdb2/archive .&lt;br /&gt;
&lt;br /&gt;
== Free up space on backup1 ==&lt;br /&gt;
&lt;br /&gt;
backup1 is our primary customer backup system. As usage grows, it needs to be purged of old files regularly, and the easiest way to do this is by removing deprecated files. These mostly consist of cancelled customers or temporary dump/storage files (created during dumps/restores). Our standard policy is to hang onto cancelled customers&#039; data for 6 months, after which we remove their files (as far as customers know their data is purged immediately, but we hang onto it just in case; in some cases we cancel a server for non-payment, and this makes it easy to restore their system). To find files to remove:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 ~]# cd /data/deprecated/&lt;br /&gt;
[root@backup1 /data/deprecated]# ls&lt;br /&gt;
2101-migrated-20120317.tgz                old-683-cxld-20121021.tgz&lt;br /&gt;
69.55.230.2-wwwbackup                     old-744-cxld-20120708.tgz&lt;br /&gt;
991-DONTDELETE.tgz                        old-809-cxld-20120609.tgz&lt;br /&gt;
archive-col02050-mdfile-cxld-20120409.gz  old-854-cxld-20120621.tgz&lt;br /&gt;
col01371.tgz                              old-931-cxld-20060513.tgz&lt;br /&gt;
deleteme_ubuntu-10.10-x86_20111205        old-col00123-mdfile-noarchive-20120417.gz&lt;br /&gt;
jail10_old                                old-col00147-vnfile-cxld-20120828.gz&lt;br /&gt;
jail14_rsync_old                          old-col00419-dump-cxld-20120224.gz&lt;br /&gt;
jail15_old                                old-col01098-vnfile-cxld-20120827.gz&lt;br /&gt;
jail3_old                                 old-col01278-dump-cxld-20120822&lt;br /&gt;
jail4_old                                 old-col01517-dump-cxld-20120828&lt;br /&gt;
jail5_old                                 old-col01669-dump-cxld-20120203.gz&lt;br /&gt;
old-1009-cxld-20120608.tgz                old-col01687-dump-cxld-20120909&lt;br /&gt;
old-1012-cxld-20120411.tgz                old-col01790-dump-cxld-20120828&lt;br /&gt;
old-1052-cxld-20120721.tgz                old-col01812-dump-cxld-20120820&lt;br /&gt;
old-10631-cxld-20120622.tgz               old-col01938-mdfile-cxld-20120619.gz&lt;br /&gt;
old-10632-cxld-20120622.tgz               old-col02095-mdfile-noarchive-20120523.gz&lt;br /&gt;
old-10633-cxld-20120622.tgz               olddebian-3.0-v15-20110610.tgz&lt;br /&gt;
old-1236-cxld-20120621.tgz                oldmod_frontpage-deb30-v15-20110610.tgz&lt;br /&gt;
old-1381-cxld-20120404.tgz                oldmod_perl-deb30-v15-20110610.tgz&lt;br /&gt;
old-1422-cxld-20120721.tgz                oldmod_ssl-deb30-v15-20110610.tgz&lt;br /&gt;
old-14681-cxld-20120619.tgz               oldmysql-deb30-v15-20110610.tgz&lt;br /&gt;
old-1544-cxld-20120626.tgz                oldproftpd-deb30-v15-20110610.tgz&lt;br /&gt;
old-18351-cxld-20120605.tgz               old_virt14&lt;br /&gt;
old-1853-cxld-20120910.tgz                old_virt18&lt;br /&gt;
old-1963-cxld-20120206.tgz                oldwebmin-deb30-v15-20110610.tgz&lt;br /&gt;
old-1967-cxld-20120605.tgz                suse.virt11.20120421.tgz&lt;br /&gt;
old-1981-noarchive-20120729.tgz           virt11&lt;br /&gt;
old-2030-migrated-noarchive-20120727.tgz  virt12_old&lt;br /&gt;
old-2037-cxld-20120716.tgz                virt13_old&lt;br /&gt;
old-2065-cxld-20120727.tgz                virt16_old&lt;br /&gt;
old-2068-cxld-20120424.tgz                virt4_old&lt;br /&gt;
old-2085-cxld-20120531.tgz                virt5_old&lt;br /&gt;
old-364-cxld-20120904.tgz                 virt6_old&lt;br /&gt;
old-446-cxld-20120512.tgz                 virt7_old&lt;br /&gt;
old-613-cxld-20120601.tgz                 virt8_old&lt;br /&gt;
[root@backup1 /data/deprecated]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
virtX_old and jailX_old are permanently archived, so ignore those, as well as anything else marked not to delete or otherwise suspicious. Likewise, it&#039;s a good idea to hang onto the oldTEMPLATE.gz files as long as we can.&lt;br /&gt;
Most of what we want to delete is dated with when it was deprecated, which makes this easy. So, to remove files from 6 months ago (running this in Oct):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 /data/deprecated]# ls old*201204*&lt;br /&gt;
old-1012-cxld-20120411.tgz  old-2068-cxld-20120424.tgz&lt;br /&gt;
old-1381-cxld-20120404.tgz  old-col00123-mdfile-noarchive-20120417.gz&lt;br /&gt;
[root@backup1 /data/deprecated]# rm old*201204*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
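Rather than typing the year/month by hand, the glob can be computed from today&#039;s date. This sketch assumes GNU date (Linux); the FreeBSD spelling is noted in the comment:&lt;br /&gt;

```shell
# Build the "six months ago" pattern from today's date.
# GNU date syntax; on FreeBSD use: date -v-6m +%Y%m
cutoff=$(date -d '6 months ago' +%Y%m 2>/dev/null || date -v-6m +%Y%m)
echo "old*${cutoff}*"
```

Review the matches with ls before running rm, as shown above.&lt;br /&gt;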
&lt;br /&gt;
Every few months you will also want to remove some of the snapshot archives for mail. We typically save the 1st, 10th, and 20th of each month. To do this, set aside the dates you want to save, remove whole months at a time, then restore the set-aside dates. Here&#039;s how that works:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 /data/www/daily]# ls&lt;br /&gt;
05                     08-10-11  10-04-10  11-10-10  12-07-29  12-09-21  12-11-14&lt;br /&gt;
06                     08-10-21  10-04-20  11-10-20  12-07-30  12-09-22  12-11-15&lt;br /&gt;
06-06-01-usr-home.tgz  08-11-01  10-05-01  11-11-01  12-07-31  12-09-23  12-11-16&lt;br /&gt;
06-07-01-usr-home.tgz  08-11-10  10-05-11  11-11-10  12-08-01  12-09-24  12-11-17&lt;br /&gt;
06-08-01-usr-home.tgz  08-11-20  10-05-20  11-11-20  12-08-02  12-09-25  12-11-18&lt;br /&gt;
06-09-01-usr-home.tgz  08-12-01  10-06-01  11-12-01  12-08-03  12-09-26  12-11-19&lt;br /&gt;
06-11-10               08-12-10  10-06-10  11-12-10  12-08-04  12-09-27  12-11-20&lt;br /&gt;
06-12-21               08-12-20  10-06-20  11-12-20  12-08-05  12-09-28  12-11-21&lt;br /&gt;
07-01-10               09-01-01  10-07-01  12-01-01  12-08-06  12-09-29  12-11-22&lt;br /&gt;
07-01-20               09-01-10  10-07-10  12-01-10  12-08-07  12-09-30  12-11-23&lt;br /&gt;
07-02-10               09-01-20  10-07-20  12-01-20  12-08-08  12-10-01  12-11-24&lt;br /&gt;
07-02-20               09-02-01  10-08-01  12-02-01  12-08-09  12-10-02  12-11-25&lt;br /&gt;
07-03-01               09-02-10  10-08-10  12-02-10  12-08-10  12-10-03  12-11-26&lt;br /&gt;
07-03-20               09-02-20  10-08-20  12-02-20  12-08-11  12-10-04  12-11-27&lt;br /&gt;
07-04-01               09-03-01  10-09-01  12-03-01  12-08-12  12-10-05  12-11-28&lt;br /&gt;
07-04-10               09-03-10  10-09-10  12-03-10  12-08-13  12-10-06  12-11-29&lt;br /&gt;
07-04-20               09-03-20  10-09-20  12-03-20  12-08-14  12-10-07  12-11-30&lt;br /&gt;
07-05-01               09-04-01  10-10-01  12-04-01  12-08-15  12-10-08  12-12-01&lt;br /&gt;
07-05-10               09-04-10  10-10-10  12-04-10  12-08-16  12-10-09  12-12-02&lt;br /&gt;
07-05-20               09-04-20  10-10-20  12-04-20  12-08-17  12-10-10  12-12-03&lt;br /&gt;
07-06-01               09-05-01  10-11-01  12-05-01  12-08-18  12-10-11  12-12-04&lt;br /&gt;
07-06-10               09-05-10  10-11-10  12-05-10  12-08-19  12-10-12  12-12-05&lt;br /&gt;
07-06-20               09-05-20  10-11-20  12-05-20  12-08-20  12-10-13  12-12-06&lt;br /&gt;
07-07-20               09-06-01  10-12-01  12-06-01  12-08-21  12-10-14  12-12-07&lt;br /&gt;
07-08-10               09-06-10  10-12-10  12-06-10  12-08-22  12-10-15  12-12-08&lt;br /&gt;
07-08-20               09-06-20  10-12-20  12-06-20  12-08-23  12-10-16  12-12-09&lt;br /&gt;
07-09-01               09-07-01  11-01-01  12-07-01  12-08-24  12-10-17  12-12-10&lt;br /&gt;
07-10-01               09-07-10  11-01-10  12-07-02  12-08-25  12-10-18  12-12-11&lt;br /&gt;
07-10-10               09-07-20  11-01-21  12-07-03  12-08-26  12-10-19  12-12-12&lt;br /&gt;
07-10-20               09-08-01  11-02-01  12-07-04  12-08-27  12-10-20  12-12-13&lt;br /&gt;
07-12-01               09-08-10  11-02-10  12-07-05  12-08-28  12-10-21  12-12-14&lt;br /&gt;
07-12-10               09-08-20  11-02-20  12-07-06  12-08-29  12-10-22  12-12-15&lt;br /&gt;
08-01-01               09-09-01  11-03-01  12-07-07  12-08-30  12-10-23  12-12-16&lt;br /&gt;
08-01-20               09-09-10  11-03-10  12-07-08  12-08-31  12-10-24  12-12-17&lt;br /&gt;
08-02-20               09-09-20  11-03-20  12-07-09  12-09-01  12-10-25  12-12-18&lt;br /&gt;
08-03-01               09-10-01  11-04-01  12-07-10  12-09-02  12-10-26  12-12-19&lt;br /&gt;
08-03-10               09-10-10  11-04-10  12-07-11  12-09-03  12-10-27  12-12-20&lt;br /&gt;
08-03-20               09-10-20  11-04-20  12-07-12  12-09-04  12-10-28  12-12-21&lt;br /&gt;
08-04-01               09-11-01  11-05-01  12-07-13  12-09-05  12-10-29  12-12-22&lt;br /&gt;
08-04-20               09-11-10  11-05-10  12-07-14  12-09-06  12-10-30  12-12-23&lt;br /&gt;
08-05-01               09-11-20  11-05-20  12-07-15  12-09-07  12-10-31  12-12-24&lt;br /&gt;
08-05-10               09-12-01  11-06-01  12-07-16  12-09-08  12-11-01  12-12-25&lt;br /&gt;
08-06-10               09-12-10  11-06-10  12-07-17  12-09-09  12-11-02  12-12-26&lt;br /&gt;
08-06-20               09-12-20  11-06-20  12-07-18  12-09-10  12-11-03  12-12-27&lt;br /&gt;
08-07-02               10-01-01  11-07-01  12-07-19  12-09-11  12-11-04  12-12-28&lt;br /&gt;
08-07-10               10-01-10  11-07-10  12-07-20  12-09-12  12-11-05  2008-10-23&lt;br /&gt;
08-07-20               10-01-20  11-07-20  12-07-21  12-09-13  12-11-06  bb.tgz&lt;br /&gt;
08-08-01               10-02-01  11-08-01  12-07-22  12-09-14  12-11-07  boot&lt;br /&gt;
08-08-10               10-02-10  11-08-10  12-07-23  12-09-15  12-11-08  current&lt;br /&gt;
08-08-21               10-02-20  11-08-20  12-07-24  12-09-16  12-11-09  hold&lt;br /&gt;
08-09-01               10-03-01  11-09-01  12-07-25  12-09-17  12-11-10&lt;br /&gt;
08-09-10               10-03-10  11-09-10  12-07-26  12-09-18  12-11-11&lt;br /&gt;
08-09-21               10-03-20  11-09-20  12-07-27  12-09-19  12-11-12&lt;br /&gt;
08-10-01               10-04-01  11-10-01  12-07-28  12-09-20  12-11-13&lt;br /&gt;
[root@backup1 /data/www/daily]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So we see that everything up to July 2012 has been pruned. To prune July 2012 we do the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mv 12-07-01 hold&lt;br /&gt;
mv 12-07-10 hold&lt;br /&gt;
mv 12-07-20 hold&lt;br /&gt;
rm -fr 12-07*&lt;br /&gt;
mv hold/* .&amp;lt;/pre&amp;gt;&lt;br /&gt;
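The hold-directory dance can be parameterized on the month. A sketch, demonstrated on scratch directories rather than /data/www/daily:&lt;br /&gt;

```shell
# Prune one month of daily snapshots, keeping the 01/10/20 sets.
# Scratch directories stand in for the real snapshot tree.
cd "$(mktemp -d)" || exit 1
mkdir 12-07-01 12-07-05 12-07-10 12-07-15 12-07-20 12-07-25 hold

m=12-07                       # month to prune, YY-MM
mv "$m-01" "$m-10" "$m-20" hold/
rm -fr "$m"-*
mv hold/* .
ls -d "$m"-*
```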
&lt;br /&gt;
== Free up space on Other Servers ==&lt;br /&gt;
&lt;br /&gt;
Many servers start to run out of disk space over time.  Often it is caused by unread mail for&lt;br /&gt;
root or log files.  &lt;br /&gt;
&lt;br /&gt;
To find the source of the problem, use &amp;quot;du&amp;quot; to see where the disk space is being used.&lt;br /&gt;
You can&#039;t run du on /proc or /dev, so use a command like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@virt11 /]# du -hs [a-c]* deprecated [e-o]* [q-u]* var | tee duhs0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which produces something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
4.0K    backup&lt;br /&gt;
4.0K    backup1&lt;br /&gt;
4.0K    backup2&lt;br /&gt;
4.0K    backup3&lt;br /&gt;
4.0K    backup4&lt;br /&gt;
7.5M    bin&lt;br /&gt;
47M     boot&lt;br /&gt;
4.0K    deprecated&lt;br /&gt;
92M     etc&lt;br /&gt;
30M     home&lt;br /&gt;
8.0K    initrd&lt;br /&gt;
541M    lib&lt;br /&gt;
16K     lost+found&lt;br /&gt;
8.0K    media&lt;br /&gt;
0       misc&lt;br /&gt;
8.0K    mnt&lt;br /&gt;
0       net&lt;br /&gt;
92M     opt&lt;br /&gt;
336K    root&lt;br /&gt;
36M     sbin&lt;br /&gt;
8.0K    selinux&lt;br /&gt;
8.0K    srv&lt;br /&gt;
0       sys&lt;br /&gt;
4.0K    test&lt;br /&gt;
16K     tmp&lt;br /&gt;
1.2G    usr&lt;br /&gt;
583M    var&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case it looks like /var is the problem, so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /var&lt;br /&gt;
du -hs * | tee duhs9&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which produces:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
12K     account&lt;br /&gt;
2.6M    analog-5.32&lt;br /&gt;
63M     cache&lt;br /&gt;
24K     db&lt;br /&gt;
4.0K    duhs&lt;br /&gt;
4.0K    duhs1&lt;br /&gt;
4.0K    duhs2&lt;br /&gt;
4.0K    duhs3&lt;br /&gt;
4.0K    duhs4&lt;br /&gt;
4.0K    duhs5&lt;br /&gt;
4.0K    duhs6&lt;br /&gt;
4.0K    duhs7&lt;br /&gt;
4.0K    duhs8&lt;br /&gt;
32K     empty&lt;br /&gt;
8.0K    games&lt;br /&gt;
16K     kerberos&lt;br /&gt;
42M     lib&lt;br /&gt;
8.0K    local&lt;br /&gt;
36K     lock&lt;br /&gt;
457M    log&lt;br /&gt;
0       mail&lt;br /&gt;
8.0K    nis&lt;br /&gt;
8.0K    opt&lt;br /&gt;
8.0K    preserve&lt;br /&gt;
8.0K    racoon&lt;br /&gt;
240K    run&lt;br /&gt;
18M     spool&lt;br /&gt;
8.0K    tmp&lt;br /&gt;
64K     vz&lt;br /&gt;
0       vzagent&lt;br /&gt;
0       vzagent.tmp&lt;br /&gt;
16K     vzquota&lt;br /&gt;
1.2M    www&lt;br /&gt;
20K     yp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usually the problem is in /var/spool or /var/log, due to unread mail or excessive log files. You can continue to drill down by doing a &amp;quot;cd &amp;lt;subdirectory&amp;gt;&amp;quot; and another &amp;quot;du -hs *&amp;quot;.&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=Routine_Maintenance&amp;diff=2231</id>
		<title>Routine Maintenance</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=Routine_Maintenance&amp;diff=2231"/>
		<updated>2020-06-09T21:59:04Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Free up space on backup1 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Daily Tasks =&lt;br /&gt;
&lt;br /&gt;
== check load graphs == &lt;br /&gt;
&lt;br /&gt;
Click on the Load link in mgmt &lt;br /&gt;
&lt;br /&gt;
This screen shows you load levels on our servers and network traffic for critical machines (firewalls, backup servers).&lt;br /&gt;
&lt;br /&gt;
If you see load high or increasing &lt;br /&gt;
&lt;br /&gt;
FreeBSD: &lt;br /&gt;
run [[VPS_Management#jtop|jtop]] (or [[VPS_Management#jt|jt]] &amp;gt; 7.x) and see if there are any runaway processes.  Here are some examples of entries in top that are&lt;br /&gt;
definitely runaway processes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;79481 root      64   0  2256K  1056K CPU1   1  58:16 87.40% 87.40% nano&lt;br /&gt;
50650   1000    64   0  1852K  1112K RUN    0 207.9H 84.08% 84.08% screen&lt;br /&gt;
14829 www        2   0 39100K 31736K accept 0  104:24  46.54%  6.54% httpd&lt;br /&gt;
42065 root      61   0  1300K   844K RUN    1  47.8H 91.36% 91.36% ee&lt;br /&gt;
1328 www       56   0 18440K 10796K CPU1   0  64.4H 97.71% 97.71% httpd&lt;br /&gt;
26251 user      57   0  6124K  1160K CPU1   1  82.9H 98.44% 98.44% screen&lt;br /&gt;
89874 root      60   0  1352K   892K RUN    1  33.8H 65.82% 65.82% dialog&lt;br /&gt;
38656   1000    64   0  3088K  2136K CPU0   0 806:13 97.95% 97.95% StutBot&lt;br /&gt;
27630 root      64   0  1396K   972K RUN    1  76.8H 86.47% 86.47% ee&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Linux:&lt;br /&gt;
run [[VPS_Management#vwe|vwe]] to see which VPS’s have high loads. From there run &amp;lt;tt&amp;gt;[[VPS_Management#vp|vp]] &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt; and/or &amp;lt;tt&amp;gt;[[VPS_Management#vt|vt]] &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt; to see what&#039;s going on in that system.&lt;br /&gt;
[[VPS_Management#vzstat|vzstat]] will also give you a nice picture of what&#039;s going on; systems with high numbers in the mlat column are likely culprits.&lt;br /&gt;
&lt;br /&gt;
examples of out of control procs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;12183 nobody    16   0  4916 1348  1340 R    45.5  0.0  4249m httpd&lt;br /&gt;
29266 #502      16   0  1852  796   792 R    22.5  0.0  1104m vim&lt;br /&gt;
23860 #41       16   0  5472 5472  2076 R    98.9  0.2  31:41 python&lt;br /&gt;
19227 bin       19   0  1688  716   652 R    99.9  0.0 321:08 wtrs_ui&lt;br /&gt;
 7762 apache    16   0   268  236   224 R    85.7  0.0  1010m ptrace&lt;br /&gt;
 4624 #501      20   0  4304 2400  2044 R    53.6  0.1 284:32 YoSucker&lt;br /&gt;
20451 #506      20   0  1876  820   816 R    17.2  0.0 169:35 vim&lt;br /&gt;
 8834 #514      20   0   900  724   672 R    77.6  0.0 382:30 neostats&lt;br /&gt;
31815 apache    14   0  3176 3176  1696 R    74.4  0.1   6:15 counter&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Just kill -9 them and be done with it.&lt;br /&gt;
&lt;br /&gt;
Also, anytime you see `kmod` or `ptrace` - kill those immediately, no matter how much CPU they are using - they are attempts to exploit the Linux ptrace bug. They won&#039;t work, but they suck a lot of CPU...&lt;br /&gt;
&lt;br /&gt;
Also, any other processes that are at 90-100% CPU usage and have been running for a long period of time should be killed.&lt;br /&gt;
&lt;br /&gt;
However, there is an exception:&lt;br /&gt;
&lt;br /&gt;
if it is a mysqld, we don&#039;t want to kill their database.  What you want to do is &amp;lt;tt&amp;gt;[[VPS_Management#jpid|jpid]] &amp;lt;pid&amp;gt;&amp;lt;/tt&amp;gt; to see who owns it, and then email them the paste containing the instructions for the nanny. Or you can simply do a &amp;lt;tt&amp;gt;kill -1 PID&amp;lt;/tt&amp;gt; on the process to restart it.&lt;br /&gt;
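As a rough sketch of the same triage done non-interactively, an awk filter can pull the high-CPU lines out of a top-style listing. The sample lines here are hypothetical, in the same column layout as the Linux listing above (CPU% is field 9):&lt;br /&gt;

```shell
# Two hypothetical lines in the layout of the top output above.
sample='12183 nobody    16   0  4916 1348  1340 R    45.5  0.0  4249m httpd
19227 bin       19   0  1688  716   652 R    99.9  0.0 321:08 wtrs_ui'

# Print PID and command for anything over 90% CPU.
hogs=$(printf '%s\n' "$sample" | awk '$9 > 90 { print $1, $NF }')
echo "$hogs"
```

From there, check the running-time column by hand (and whether it&#039;s a mysqld) before reaching for kill -9.&lt;br /&gt;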
&lt;br /&gt;
=== Load averages jump at night ===&lt;br /&gt;
&lt;br /&gt;
The load averages on the FreeBSD systems may jump up at night between 1 and 4 am because the backups are running; if this is what is causing the jump in load, you will see processes like `rsync` in top eating a lot of CPU time.&lt;br /&gt;
&lt;br /&gt;
== check backups ==&lt;br /&gt;
&lt;br /&gt;
mgmt -&amp;gt; Monitoring -&amp;gt; Backups and make sure every machine was backed up the previous night.&lt;br /&gt;
Also look at df on backup1 and backup2 to make sure no disk is approaching full, though bb should warn us in advance. Please note: errors encountered when a backup script runs on any particular system will generate an email to support@johncompanies.com, so you will know immediately the next day if the directory to be backed up has been moved or no longer exists. A paste exists for this to notify the customer of a non-existent file/dir.&lt;br /&gt;
&lt;br /&gt;
== check bb for warnings  ==&lt;br /&gt;
mgmt -&amp;gt; BigBrother&lt;br /&gt;
&lt;br /&gt;
Some events don&#039;t generate pages (on purpose). You will only see them by going to the bb main page.&lt;br /&gt;
&lt;br /&gt;
== check jail5 for crashed VPSs  ==&lt;br /&gt;
&lt;br /&gt;
On jail5&lt;br /&gt;
  &lt;br /&gt;
  notrunning&lt;br /&gt;
&lt;br /&gt;
To restart a VPS &lt;br /&gt;
&lt;br /&gt;
  vm restart col0XXXX&lt;br /&gt;
&lt;br /&gt;
== Check NetHere  ==&lt;br /&gt;
&lt;br /&gt;
Check the NetHere servers.  To get into the servers, login to&lt;br /&gt;
admin-1.nethere.net and su - to root.&lt;br /&gt;
&lt;br /&gt;
=== Mail systems ===&lt;br /&gt;
Check for possible SPAMMERs.&lt;br /&gt;
&lt;br /&gt;
==== Incoming ====&lt;br /&gt;
Check mta-1 and mta-2 count of customer logins for possible&lt;br /&gt;
customer SPAM compromises.&lt;br /&gt;
&lt;br /&gt;
  login_count /logs/maillog | tail -30&lt;br /&gt;
  &lt;br /&gt;
==== Outgoing ====&lt;br /&gt;
&lt;br /&gt;
Check outgoing queues on relay-1 and relay-2&lt;br /&gt;
  &lt;br /&gt;
  mail_count | tail -30&lt;br /&gt;
&lt;br /&gt;
To clean up outgoing queues of unwanted SPAM on relay-1 and relay-2:&lt;br /&gt;
&lt;br /&gt;
  mail_cleanup [ &amp;lt;sender&#039;s domain/username/message id&amp;gt; ... ]&lt;br /&gt;
&lt;br /&gt;
To just remove emails from some senders:&lt;br /&gt;
&lt;br /&gt;
  rmmails &amp;lt;sender&#039;s domain/username/message id&amp;gt; ...&lt;br /&gt;
&lt;br /&gt;
=== Nagios ===&lt;br /&gt;
&lt;br /&gt;
Check for other problems on NetHere servers&lt;br /&gt;
&lt;br /&gt;
  https://nagios.nethere.net&lt;br /&gt;
&lt;br /&gt;
=== Cacti ===&lt;br /&gt;
&lt;br /&gt;
Check bandwidth usage on servers&lt;br /&gt;
&lt;br /&gt;
  https://cacti.nethere.net&lt;br /&gt;
&lt;br /&gt;
= Monthly Tasks =&lt;br /&gt;
== rotate pine sent mail (1st of month) ==&lt;br /&gt;
On the 1st of the month, before any emails are sent out, quit out of pine, then log back in. Sent mail from last month will be archived.&lt;br /&gt;
If you mess up and do it on the 3rd (for example), you can go into the previous month&#039;s saved email and save emails from the current month into the &amp;lt;tt&amp;gt;sent-mail&amp;lt;/tt&amp;gt; (current month) mailbox.&lt;br /&gt;
&lt;br /&gt;
== b/w caps ==&lt;br /&gt;
On the 1st: remove any bwcaps put into the firewall (this only really applies if a bwcap was added because someone went over on b/w):&lt;br /&gt;
&lt;br /&gt;
 ipfw list|grep pipe&lt;br /&gt;
 ipfw del [each rule listed]&lt;br /&gt;
&lt;br /&gt;
NOTE: this cronjob on newgateway will do some of that for you, provided you used one of the following pipe #s:&lt;br /&gt;
 0 0 1 * * /sbin/ipfw del 3  4 5 17331&lt;br /&gt;
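For rules outside that fixed list, the manual cleanup can be scripted. This is a hedged sketch; &amp;lt;tt&amp;gt;pipe_rule_numbers&amp;lt;/tt&amp;gt; is a hypothetical helper that pulls the rule number (first field) out of ipfw list output:&lt;br /&gt;

```shell
# Hypothetical sketch: extract the rule numbers of pipe rules from
# `ipfw list` style output, where the rule number is the first field.
pipe_rule_numbers() {
    awk '/ pipe / { print $1 }'
}

# Live usage (not run here) would be:
#   ipfw list | pipe_rule_numbers | while read -r rule; do ipfw delete "$rule"; done

# Demo against canned `ipfw list` style lines:
pipe_rule_numbers <<'EOF'
00003 pipe 1 ip from 69.55.224.10 to any
00100 allow ip from any to any
17331 pipe 2 ip from any to 69.55.229.5
EOF
```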
&lt;br /&gt;
-------------------&lt;br /&gt;
&lt;br /&gt;
We really don’t do this anymore since we have centralized traffic accounting with netflow, but for posterity:&lt;br /&gt;
&lt;br /&gt;
Make sure all machines reset counters to 0 after midnight on the 1st&lt;br /&gt;
Make sure they dumped a counter&lt;br /&gt;
&lt;br /&gt;
On each jail run:&lt;br /&gt;
 trafficgather.pl&lt;br /&gt;
&lt;br /&gt;
And on each virt:&lt;br /&gt;
 linuxtrafficgather.pl&lt;br /&gt;
&lt;br /&gt;
== Monthly RAID checks ==&lt;br /&gt;
&lt;br /&gt;
Every month we check the health of and verify the parity on all our RAID-based systems.&lt;br /&gt;
To facilitate this, we&#039;ve created a simple script to start the process:&lt;br /&gt;
&lt;br /&gt;
 sh /root/verify.sh&lt;br /&gt;
&lt;br /&gt;
=== Adaptec controllers ===&lt;br /&gt;
Here&#039;s some sample output:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /usr/local/www/scripts# sh /root/verify.sh&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; container list /f&lt;br /&gt;
Executing: container list /full=TRUE&lt;br /&gt;
Num          Total  Oth Chunk          Scsi   Partition&lt;br /&gt;
Creation        System&lt;br /&gt;
Label Type   Size   Ctr Size   Usage   B:ID:L Offset:Size   State   RO Lk Task    Done%  Ent&lt;br /&gt;
Date   Time      Files&lt;br /&gt;
----- ------ ------ --- ------ ------- ------ ------------- ------- -- -- ------- ------ ---&lt;br /&gt;
------ -------- ------&lt;br /&gt;
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
 /dev/aacd0           mirror0          0:00:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
&lt;br /&gt;
 1    Mirror 33.9GB            Open    0:02:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
 /dev/aacd1           mirror1          0:03:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; disk list /f&lt;br /&gt;
Executing: disk list /full=TRUE&lt;br /&gt;
&lt;br /&gt;
B:ID:L  Device Type     Removable media  Vendor-ID Product-ID        Rev   Blocks    Bytes/Bl&lt;br /&gt;
ock Usage            Shared Rate&lt;br /&gt;
------  --------------  ---------------  --------- ----------------  ----- --------- --------&lt;br /&gt;
--- ---------------- ------ ----&lt;br /&gt;
0:00:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:01:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:02:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:03:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; disk show smart&lt;br /&gt;
Executing: disk show smart&lt;br /&gt;
&lt;br /&gt;
        Smart    Method of         Enable&lt;br /&gt;
        Capable  Informational     Exception  Performance  Error&lt;br /&gt;
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count&lt;br /&gt;
------  -------  ----------------  ---------  -----------  ------&lt;br /&gt;
0:00:0     Y            6             Y           N             0&lt;br /&gt;
0:01:0     Y            6             Y           N             0&lt;br /&gt;
0:02:0     Y            6             Y           N             0&lt;br /&gt;
0:03:0     Y            6             Y           N             0&lt;br /&gt;
0:06:0     N&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; task list&lt;br /&gt;
Executing: task list&lt;br /&gt;
&lt;br /&gt;
Controller Tasks&lt;br /&gt;
&lt;br /&gt;
TaskId Function  Done%  Container State Specific1 Specific2&lt;br /&gt;
------ -------- ------- --------- ----- --------- ---------&lt;br /&gt;
&lt;br /&gt;
No tasks currently running on controller&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; dia sh hi&lt;br /&gt;
Executing: diagnostic show history&lt;br /&gt;
No switches specified, defaulting to &amp;quot;/current&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 *** HISTORY BUFFER FROM CURRENT CONTROLLER RUN ***&lt;br /&gt;
&lt;br /&gt;
[00]: GetDiskLogEntry: container - 1, entry return 0&lt;br /&gt;
[01]: Container 1 started SCRUB task&lt;br /&gt;
[02]: Starting Mirror:1 scrub&lt;br /&gt;
[03]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[04]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[05]: UpdateDiskLogIndex - Set   - container 0, index 1&lt;br /&gt;
[06]: GetDiskLogEntry: container - 0, entry return 1&lt;br /&gt;
[07]: Container 0 started SCRUB task&lt;br /&gt;
[08]: Starting Mirror:0 scrub&lt;br /&gt;
[09]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[10]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[11]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[12]: Clear disk log: sector - 80, driveno 2&lt;br /&gt;
[13]: Clear disk log: sector - 80, driveno 3&lt;br /&gt;
[14]: Container 1 completed SCRUB task:&lt;br /&gt;
[15]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[16]: Clear disk log: sector - 81, driveno 1&lt;br /&gt;
[17]: Clear disk log: sector - 81, driveno 0&lt;br /&gt;
[18]: Container 0 completed SCRUB task:&lt;br /&gt;
[19]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[20]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[21]: Container 0 started SCRUB task&lt;br /&gt;
[22]: Starting Mirror:0 scrub&lt;br /&gt;
[23]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[24]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[25]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[26]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[27]: Container 1 started SCRUB task&lt;br /&gt;
[28]: Starting Mirror:1 scrub&lt;br /&gt;
[29]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[30]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[31]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[32]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[33]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[34]: Container 1 completed SCRUB task:&lt;br /&gt;
[35]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[36]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[37]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[38]: Container 0 completed SCRUB task:&lt;br /&gt;
[39]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[40]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[41]: Container 0 started SCRUB task&lt;br /&gt;
[42]: Starting Mirror:0 scrub&lt;br /&gt;
[43]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[44]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[45]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[46]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[47]: Container 1 started SCRUB task&lt;br /&gt;
[48]: Starting Mirror:1 scrub&lt;br /&gt;
[49]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[50]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[51]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[52]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[53]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[54]: Container 1 completed SCRUB task:&lt;br /&gt;
[55]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[56]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[57]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[58]: Container 0 completed SCRUB task:&lt;br /&gt;
[59]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[60]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[61]: Container 0 started SCRUB task&lt;br /&gt;
[62]: Starting Mirror:0 scrub&lt;br /&gt;
[63]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[64]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[65]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[66]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[67]: Container 1 started SCRUB task&lt;br /&gt;
[68]: Starting Mirror:1 scrub&lt;br /&gt;
[69]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[70]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[71]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[72]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[73]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[74]: Container 1 completed SCRUB task:&lt;br /&gt;
[75]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[76]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[77]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[78]: Container 0 completed SCRUB task:&lt;br /&gt;
[79]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[80]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[81]: Container 0 started SCRUB task&lt;br /&gt;
[82]: Starting Mirror:0 scrub&lt;br /&gt;
[83]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[84]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[85]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[86]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[87]: Container 1 started SCRUB task&lt;br /&gt;
[88]: Starting Mirror:1 scrub&lt;br /&gt;
[89]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[90]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[91]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[92]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[93]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[94]: Container 1 completed SCRUB task:&lt;br /&gt;
[95]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[96]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[97]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[98]: Container 0 completed SCRUB task:&lt;br /&gt;
[99]:&lt;br /&gt;
&lt;br /&gt;
========================&lt;br /&gt;
History Output Complete.&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt;&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
press enter when ready to run verify                                                 &amp;lt;INS&amp;gt;&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 0&lt;br /&gt;
Executing: container scrub 0&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 1&lt;br /&gt;
Executing: container scrub 1&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
when done run:                                                                       &lt;br /&gt;
&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nov  1 10:32:46 mail /kernel: aac0: **Monitor** Container 0 started SCRUB task&lt;br /&gt;
Nov  1 10:32:47 mail /kernel: aac0: **Monitor** Container 1 started SCRUB task&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here&#039;s an analysis of what we&#039;re seeing and what we&#039;re looking for:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; container list /f&lt;br /&gt;
Executing: container list /full=TRUE&lt;br /&gt;
Num          Total  Oth Chunk          Scsi   Partition&lt;br /&gt;
Creation        System&lt;br /&gt;
Label Type   Size   Ctr Size   Usage   B:ID:L Offset:Size   State   RO Lk Task    Done%  Ent&lt;br /&gt;
Date   Time      Files&lt;br /&gt;
----- ------ ------ --- ------ ------- ------ ------------- ------- -- -- ------- ------ ---&lt;br /&gt;
------ -------- ------&lt;br /&gt;
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
 /dev/aacd0           mirror0          0:00:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
&lt;br /&gt;
 1    Mirror 33.9GB            Open    0:02:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
 /dev/aacd1           mirror1          0:03:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows you the health of the arrays. You&#039;re looking for &#039;&#039;Normal&#039;&#039; under the State column, and the absence of a ! in the Offset:Size column; sometimes, you&#039;ll see this:&lt;br /&gt;
 64.0KB!33.9GB &lt;br /&gt;
&lt;br /&gt;
That indicates a problem.&lt;br /&gt;
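A quick way to scan a saved copy of the container listing for that marker is a plain grep. The &amp;lt;tt&amp;gt;flag_bad_containers&amp;lt;/tt&amp;gt; wrapper below is just an illustration, fed canned lines rather than live aaccli output:&lt;br /&gt;

```shell
# Illustrative sketch: any '!' in the Offset:Size column means trouble.
flag_bad_containers() {
    grep '!' || echo 'no ! markers found'
}

# Demo on canned `container list /f` style lines:
flag_bad_containers <<'EOF'
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal
 /dev/aacd0           mirror0          0:00:0 64.0KB!33.9GB Normal
EOF
```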
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; disk show smart&lt;br /&gt;
Executing: disk show smart&lt;br /&gt;
&lt;br /&gt;
        Smart    Method of         Enable&lt;br /&gt;
        Capable  Informational     Exception  Performance  Error&lt;br /&gt;
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count&lt;br /&gt;
------  -------  ----------------  ---------  -----------  ------&lt;br /&gt;
0:00:0     Y            6             Y           N             0&lt;br /&gt;
0:01:0     Y            6             Y           N             0&lt;br /&gt;
0:02:0     Y            6             Y           N             0&lt;br /&gt;
0:03:0     Y            6             Y           N             0&lt;br /&gt;
0:06:0     N&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows you the SMART report output. You&#039;re looking for nonzero values in the Error Count column.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; task list&lt;br /&gt;
Executing: task list&lt;br /&gt;
&lt;br /&gt;
Controller Tasks&lt;br /&gt;
&lt;br /&gt;
TaskId Function  Done%  Container State Specific1 Specific2&lt;br /&gt;
------ -------- ------- --------- ----- --------- ---------&lt;br /&gt;
&lt;br /&gt;
No tasks currently running on controller&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for the absence of running tasks; a bad sign would be a rebuild or verify running that you didn&#039;t initiate.&lt;br /&gt;
&lt;br /&gt;
With the history output, you&#039;re looking for any anomalies or events since the last time a verify was run. If you see a drive with lots of problems, you may want to take backups before allowing the verify to run since it could replicate errors onto the good drive.&lt;br /&gt;
&lt;br /&gt;
After you see the history output, it will prompt you to press enter to run the verify. If you&#039;re happy with all the output you&#039;re seeing (mirror is healthy, history looks good), it&#039;s safe to proceed. Otherwise ^C to exit. After hitting enter it will start the verify and begin tailing the messages log file (so you can easily see when the verify is complete). Here&#039;s what that&#039;ll look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Nov  1 14:38:08 mail /kernel: aac0: **Monitor** Container 1 completed SCRUB task:&lt;br /&gt;
Nov  1 14:46:45 mail /kernel: aac0: **Monitor** Container 0 completed SCRUB task:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, putting it all together, after hitting enter to start the verify, you&#039;ll see:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 0&lt;br /&gt;
Executing: container scrub 0&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 1&lt;br /&gt;
Executing: container scrub 1&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
when done run:                                                                       &lt;br /&gt;
&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nov  1 10:32:46 mail /kernel: aac0: **Monitor** Container 0 started SCRUB task&lt;br /&gt;
Nov  1 10:32:47 mail /kernel: aac0: **Monitor** Container 1 started SCRUB task&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the scrubs (verify) are complete (if the server has multiple logical drives, they will run in parallel), you should exit the tail of the log file (^C) and run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will show you the diagnostic history; you&#039;re looking for the results of the most recent scrub:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[100]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[101]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[102]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[103]: Container 1 completed SCRUB task:&lt;br /&gt;
[104]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[105]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[106]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[107]: Container 0 completed SCRUB task:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
^C to exit the RAID CLI.&lt;br /&gt;
&lt;br /&gt;
If you see:&lt;br /&gt;
&lt;br /&gt;
 [104]: Mirror Scrub Container:0   ErrorsFound:5&lt;br /&gt;
&lt;br /&gt;
You&#039;ll want to rerun the verify on that drive until it shows 0, or perhaps replace the drive; you should be able to see from the output which drive had the problem.&lt;br /&gt;
&lt;br /&gt;
Depending on the size and how busy the drive is, the verify can take anywhere from an hour to the better part of a day.&lt;br /&gt;
&lt;br /&gt;
You will notice that the diagnostic history is not shown on our modern Adaptec cards (i.e. any Adaptec card not in a Dell 2450). The reason is that the history is never cleared, so there&#039;s simply too much data to show and it just crashes the CLI; don&#039;t bother trying to see it. This does make it hard to spot problems, so you just need to watch the scrub and confirm it goes to 100%. You will also notice that on some servers there&#039;s no tail of messages; again, this is because no data about the completion of the scrub is logged there. The thing to do here is to go into the CLI and continue to run &amp;lt;tt&amp;gt;show tasks&amp;lt;/tt&amp;gt; to monitor scrub progress.&lt;br /&gt;
&lt;br /&gt;
See [[RAIC_CLI#Adaptec|Adaptec RAID CLI Reference]] for more details on how to use the CLI.&lt;br /&gt;
&lt;br /&gt;
=== DELL (LSI-based) SAS controllers ===&lt;br /&gt;
&lt;br /&gt;
Here&#039;s what the output looks like when running verify.sh on a LSI-based card:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jail2 /mnt/data2# sh /root/verify.sh&lt;br /&gt;
&lt;br /&gt;
Adapter #0&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 0&lt;br /&gt;
Device Id: 0&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018396142&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 0(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DESN&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 1&lt;br /&gt;
Device Id: 1&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018395db2&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 1(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DERV&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 2&lt;br /&gt;
Device Id: 2&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50006eece89&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 2(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300555SS     T2113LM4BFBZ&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 3&lt;br /&gt;
Device Id: 3&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50006eee035&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 3(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300555SS     T2113LM4BGF7&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 4&lt;br /&gt;
Device Id: 4&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50004bd7ea5&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 4(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300656SS     HS093QP0G8SW&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 5&lt;br /&gt;
Device Id: 5&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e01f1c4112&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 5(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MBA3300RC       D306BJ15P9201W06&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 0 (Target Id: 0)&lt;br /&gt;
Name:&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:139392MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 1 (Target Id: 1)&lt;br /&gt;
Name:MIRROR1&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 2 (Target Id: 2)&lt;br /&gt;
Name:MIRROR2&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
Battery FRU     : N/A&lt;br /&gt;
Battery Warning                  : Enabled&lt;br /&gt;
Memory Correctable Errors   : 0&lt;br /&gt;
Memory Uncorrectable Errors : 0&lt;br /&gt;
BBU             : Present&lt;br /&gt;
BBU                             : Yes&lt;br /&gt;
Cache When BBU Bad               : Disabled&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before pressing enter, here&#039;s what we&#039;re looking for:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 0&lt;br /&gt;
Device Id: 0&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018396142&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 0(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DESN&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the output shown for each physical drive in the system. We&#039;re looking to confirm that its Firmware state is &#039;&#039;Online&#039;&#039; and that Media Error Count, Other Error Count, and Predictive Failure Count are all zero (or near zero).&lt;br /&gt;
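The physical-drive checks above can be scripted rather than eyeballed. A minimal sketch, assuming MegaCli-style output on stdin; the &amp;lt;tt&amp;gt;pd_check&amp;lt;/tt&amp;gt; helper name and the sample input are illustrative, not part of the wiki&#039;s tooling:&lt;br /&gt;

```shell
# Hypothetical helper: flag any drive that is not Online or has a
# non-zero error counter. Input format is assumed to match the
# MegaCli physical-drive output shown above.
pd_check() {
  awk -F': *' '
    /^Firmware state/ { if ($2 != "Online") { bad=1; print "not online: " $2 } }
    /^Media Error Count|^Other Error Count|^Predictive Failure Count/ {
      if ($2 + 0 != 0) { bad=1; print "error counter: " $0 }
    }
    END { if (!bad) print "drive looks healthy" }'
}

# Known-good sample run:
printf '%s\n' \
  'Media Error Count: 0' \
  'Other Error Count: 0' \
  'Predictive Failure Count: 0' \
  'Firmware state: Online' | pd_check
```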
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 1 (Target Id: 1)&lt;br /&gt;
Name:MIRROR1&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the output for each logical drive. We&#039;re looking for a State of &#039;&#039;Optimal&#039;&#039;. Also confirm &amp;lt;tt&amp;gt;Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&amp;lt;/tt&amp;gt;.&lt;br /&gt;
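The logical-drive check can be scripted the same way. A minimal sketch; &amp;lt;tt&amp;gt;ld_check&amp;lt;/tt&amp;gt; and the sample input are illustrative, mirroring the Name:/State: lines above:&lt;br /&gt;

```shell
# Print any array whose State is not Optimal; input is assumed to
# match the MegaCli virtual-drive listing above.
ld_check() {
  awk -F': *' '
    /^Name/  { name=$2 }
    /^State/ { if ($2 != "Optimal") { bad=1; print name " is " $2 } }
    END { if (!bad) print "all arrays Optimal" }'
}

printf '%s\n' 'Name:MIRROR1' 'State: Optimal' \
              'Name:MIRROR2' 'State: Optimal' | ld_check
```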
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
Battery FRU     : N/A&lt;br /&gt;
Battery Warning                  : Enabled&lt;br /&gt;
Memory Correctable Errors   : 0&lt;br /&gt;
Memory Uncorrectable Errors : 0&lt;br /&gt;
BBU             : Present&lt;br /&gt;
BBU                             : Yes&lt;br /&gt;
Cache When BBU Bad               : Disabled&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the battery is present and error-free.&lt;br /&gt;
&lt;br /&gt;
If all that checks out, you&#039;re ready to proceed with the verify. After pressing enter, the verify is started and here&#039;s what you see:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Start Check Consistency on Virtual Drive 0 (target id: 0) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
Start Check Consistency on Virtual Drive 1 (target id: 1) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
Start Check Consistency on Virtual Drive 2 (target id: 2) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
  Check Consistency&lt;br /&gt;
&lt;br /&gt;
 Progress of Virtual Drives...&lt;br /&gt;
&lt;br /&gt;
  Virtual Drive #              Percent Complete                       Time Elps&lt;br /&gt;
          0         °°°°°°°°°°°°°°°°°°°°°°°00 %°°°°°°°°°°°°°°°°°°°°°°° 00:00:03&lt;br /&gt;
          1         °°°°°°°°°°°°°°°°°°°°°°°00 %°°°°°°°°°°°°°°°°°°°°°°° 00:00:02&lt;br /&gt;
          2         °°°°°°°°°°°°°°°°°°°°°°°00 %°°°°°°°°°°°°°°°°°°°°°°° 00:00:01&lt;br /&gt;
&lt;br /&gt;
    Press &amp;lt;ESC&amp;gt; key to quit...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The progress for each drive is displayed until all drives have completed the verify. We just want to make sure that each drive runs to completion. No follow-up is needed, though there is probably a log or history where more detail can be found.&lt;br /&gt;
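For reference, the wrapper&#039;s behavior can be approximated from the CLI. The &amp;lt;tt&amp;gt;-LDCC&amp;lt;/tt&amp;gt; option names below are real MegaCli flags, but the drive list and adapter number are illustrative; treat this as a sketch, not the wiki&#039;s actual script:&lt;br /&gt;

```shell
# Dry-run sketch: emit the MegaCli commands a verify wrapper would run --
# one consistency-check start per virtual drive, then a progress poll.
ldcc_cmds() {
  for ld in 0 1 2; do
    echo "MegaCli -LDCC -Start -L$ld -a0"
  done
  echo "MegaCli -LDCC -ShowProg -LALL -a0"
}

ldcc_cmds
# ldcc_cmds | sh   # would actually start the checks
```

Review the emitted commands first; piping them to &amp;lt;tt&amp;gt;sh&amp;lt;/tt&amp;gt; would actually start the checks.&lt;br /&gt;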
&lt;br /&gt;
Note that jail7 does not run a verify; that&#039;s on purpose. The last time we tried, it crashed the system. On jail7 the verify must instead be run from the BIOS, which means taking the system offline for a couple of hours.&lt;br /&gt;
&lt;br /&gt;
See [[RAIC_CLI#LSI|LSI RAID CLI Reference]] for more details on how to use the CLI.&lt;br /&gt;
&lt;br /&gt;
=== LSI-based controllers (megaraid) ===&lt;br /&gt;
&lt;br /&gt;
There is a CLI for this, but it&#039;s easier to use the curses GUI app &amp;lt;tt&amp;gt;megamgr&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Currently this applies only to these servers: virt15, virt16, and firewall2.&lt;br /&gt;
&lt;br /&gt;
To run:&lt;br /&gt;
&lt;br /&gt;
 # cd /usr/local/sbin/; megamgr&lt;br /&gt;
&lt;br /&gt;
Main menu:&lt;br /&gt;
&amp;lt;pre&amp;gt;²ÚÄÄManagement MenuÄÄ¿²&lt;br /&gt;
²³ Configure         ³²&lt;br /&gt;
²³ Initialize        ³²&lt;br /&gt;
²³ Objects           ³²&lt;br /&gt;
²³ Rebuild           ³²&lt;br /&gt;
²³ Check Consistency ³²&lt;br /&gt;
²³ Advanced Menu     ³²&lt;br /&gt;
²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before you check consistency, make sure the arrays are healthy.&lt;br /&gt;
&lt;br /&gt;
Objects -&amp;gt; Physical Drive&lt;br /&gt;
&lt;br /&gt;
Then look to make sure they&#039;re all &amp;lt;tt&amp;gt;ONLIN&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²ÚÄÄÄÄÄÄÄÄÄÄÄÄObjects - PHYSICAL DRIVE SELECTION MENUÄÄÄÄÄÄÄÄÄÄÄÄÄ¿²²²²²²&lt;br /&gt;
²²²²²²²²³                                                                ³²²²²²²&lt;br /&gt;
²²²²²²²²³                         Channel-1                              ³²²²²²²&lt;br /&gt;
²ÚÄÄMana³                     ID ÉÍÍÍÍÍÍÍÍÍÍÍÍÍÍ»x                       ³²²²²²²&lt;br /&gt;
²³ Confi³                       0º* ONLIN A01-01º                        ³²²²²²²&lt;br /&gt;
²³ Initi³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²³ Objec³                       1º* ONLIN A01-02º                        ³²²²²²²&lt;br /&gt;
²³ Rebui³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²³ Check³                       2º* ONLIN A02-01º                        ³²²²²²²&lt;br /&gt;
²³ Advan³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²ÀÄÄÄÄÄÄ³                       3º* ONLIN A02-02º                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                       4º* ONLIN A03-01º                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                       5º* ONLIN A03-02ºþ                       ³²²²²²²&lt;br /&gt;
²²²²²²²²³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                       6º*             º                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                        ÈÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¼x                       ³²²²²²²&lt;br /&gt;
²²²²²²²²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²&lt;br /&gt;
²²²²²²²²ÚÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄ¿²²²²²²²²&lt;br /&gt;
²²²²²²²²³Ch-1 ID-5  DISK      140013MB  SEAGATE  ST3146707LC      0003 ³²²²²²²²²&lt;br /&gt;
²²²²²²²²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once that&#039;s done, hit escape once, then the back arrow to move back to the Objects menu.&lt;br /&gt;
Then select Objects -&amp;gt; Logical Drive -&amp;gt; Logical Drive 1 -&amp;gt; Check Consistency -&amp;gt; YES:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²ÚÄLogical Drives(02)Ä¿²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²³ Logical Drive 1    ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²ÚÄÄÄÄObjectsÄÄÄ³ Logical Drive 2    ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²ÚÄÄManagemen³ Adapter      ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Configure ³ Logical Drive  ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Initialize³ Physical Drive ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Objects   ³ Channel        ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Rebuild   ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Check Consistency ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Advanced Menu     ³²²²²²²²ÚÄÄÄÄLogical Drive 1ÄÄÄÄÄ¿²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²³ Initialize    ÚÄCheck Consistency-1  ?Ä¿²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²³ Check Consiste³   YES                  ³²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²³ View/Update Pa³   NO                   ³²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²ÚÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄ¿²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²³Select YES Or NO³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then watch the progress. When done, escape back to Logical Drive, then repeat for Logical Drive 2. If you press ^C or accidentally escape out, you can re-enter with the same commands and watch the progress again (the check won&#039;t restart).&lt;br /&gt;
&lt;br /&gt;
You can exit megamgr by escaping out or pressing ^C.&lt;br /&gt;
&lt;br /&gt;
=== 3ware ===&lt;br /&gt;
&lt;br /&gt;
We are using 3ware controllers on backup1 &amp;amp; backup2. Running the verify script will give you different output based on the type of controller:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
backup2 /d2# sh /root/verify.sh&lt;br /&gt;
Controller: c0&lt;br /&gt;
-------------&lt;br /&gt;
Driver:   1.50.01.002&lt;br /&gt;
Model:    7500-8&lt;br /&gt;
FW:       FE7X 1.05.00.068&lt;br /&gt;
BIOS:     BE7X 1.08.00.048&lt;br /&gt;
Monitor:  ME7X 1.01.00.040&lt;br /&gt;
Serial #: F11605A3180172&lt;br /&gt;
PCB:      Rev3&lt;br /&gt;
PCHIP:    1.30-33&lt;br /&gt;
ACHIP:    3.20&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# of units: 3&lt;br /&gt;
        Unit 0: JBOD 186.31 GB ( 390721968 blocks): OK&lt;br /&gt;
        Unit 1: RAID 5 465.77 GB ( 976790016 blocks): DEGRADED&lt;br /&gt;
        Unit 5: RAID 5 698.65 GB ( 1465185024 blocks): DEGRADED&lt;br /&gt;
&lt;br /&gt;
# of ports: 8&lt;br /&gt;
        Port 0: WDC WD2000JB-00KFA0 WD-WCAMT1451690 186.31 GB (390721968 blocks): OK(unit 0)&lt;br /&gt;
        Port 1: WDC WD2500JB-00GVC0 WD-WCAL78219488 232.88 GB (488397168 blocks): OK(unit 1)&lt;br /&gt;
        Port 2: WDC WD2000  0.00 MB (0 blocks): OK(NO UNIT)&lt;br /&gt;
        Port 3: WDC WD2500JB-00GVC0 WD-WMAL73882417 232.88 GB (488397168 blocks): OK(unit 1)&lt;br /&gt;
        Port 4: WDC WD2000  0.00 MB (0 blocks): OK(NO UNIT)&lt;br /&gt;
        Port 5: WDC WD2500JB-00GVA0 WD-WMAL71338097 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
        Port 6: WDC WD2500JB-32EVA0 WD-WMAEH1301595 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
        Port 7: WDC WD2500JB-00GVC0 WD-WCAL78165566 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
Controller: c1&lt;br /&gt;
-------------&lt;br /&gt;
Driver:   1.50.01.002&lt;br /&gt;
Model:    7500-8&lt;br /&gt;
FW:       FE7X 1.05.00.068&lt;br /&gt;
BIOS:     BE7X 1.08.00.048&lt;br /&gt;
Monitor:  ME7X 1.01.00.040&lt;br /&gt;
Serial #: F11605A3180167&lt;br /&gt;
PCB:      Rev3&lt;br /&gt;
PCHIP:    1.30-33&lt;br /&gt;
ACHIP:    3.20&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# of units: 2&lt;br /&gt;
        Unit 0: RAID 5 698.65 GB ( 1465185024 blocks): OK&lt;br /&gt;
        Unit 4: RAID 5 698.65 GB ( 1465185024 blocks): OK&lt;br /&gt;
&lt;br /&gt;
# of ports: 8&lt;br /&gt;
        Port 0: WDC WD2500JB-00GVA0 WD-WMAL71301258 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 1: WDC WD2500JB-00GVA0 WD-WMAL71322705 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 2: WDC WD2500JB-00GVA0 WD-WMAL71945050 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 3: WDC WD2500JB-00GVA0 WD-WMAL71316201 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 4: WDC WD2500JB-00GVC0 WD-WCAL78323749 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
        Port 5: WDC WD3200AAJB-00J3A0 WD-WCAV2V689068 298.09 GB (625142448 blocks): OK(unit 4)&lt;br /&gt;
        Port 6: WDC WD2500JB-00GVC0 WD-WCAL78234420 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
        Port 7: WDC WD2500JB-00GVC0 WD-WCAL78592213 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
backup2 /d2#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On backup2, just confirm everything reports OK; the script does not start a verify.&lt;br /&gt;
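That backup2 check can be scripted too. A minimal sketch; &amp;lt;tt&amp;gt;unit_check&amp;lt;/tt&amp;gt; and the sample lines are illustrative, mimicking the unit listing above (tw_cli itself is not invoked):&lt;br /&gt;

```shell
# Warn about any 3ware unit whose status is not OK.
unit_check() {
  awk '/^ *Unit [0-9]/ { if ($0 !~ /: OK$/) { bad=1; print "ATTENTION:" $0 } }
       END { if (!bad) print "all units OK" }'
}

printf '%s\n' \
  '        Unit 0: JBOD 186.31 GB ( 390721968 blocks): OK' \
  '        Unit 1: RAID 5 465.77 GB ( 976790016 blocks): DEGRADED' | unit_check
```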
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@backup3 ~]# sh /root/verify.sh&lt;br /&gt;
/c2 Driver Version = 1.26.02.002&lt;br /&gt;
/c2 Model = 8006-2LP&lt;br /&gt;
/c2 Available Memory = 512KB&lt;br /&gt;
/c2 Firmware Version = FE8S 1.05.00.068&lt;br /&gt;
/c2 Bios Version = BE7X 1.08.00.048&lt;br /&gt;
/c2 Boot Loader Version = ME7X 1.01.00.040&lt;br /&gt;
/c2 Serial Number = L018501C6481395&lt;br /&gt;
/c2 PCB Version = Rev5&lt;br /&gt;
/c2 PCHIP Version = 1.30-66&lt;br /&gt;
/c2 ACHIP Version = 3.20&lt;br /&gt;
/c2 Total Optimal Units = 1&lt;br /&gt;
/c2 Not Optimal Units = 0&lt;br /&gt;
&lt;br /&gt;
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u1    RAID-1    OK             -       -       -       931.512   ON     -&lt;br /&gt;
&lt;br /&gt;
Port   Status           Unit   Size        Blocks        Serial&lt;br /&gt;
---------------------------------------------------------------&lt;br /&gt;
p0     OK               u1     931.51 GB   1953525168    WD-WMAW31148820&lt;br /&gt;
p1     OK               u1     931.51 GB   1953525168    WD-WCATR0277515&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ctl  Date                        Severity  Alarm Message&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Sending start verify message to /c2/u1 ... Done.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when done run:&lt;br /&gt;
tw_cli /c2 show alarms&lt;br /&gt;
&lt;br /&gt;
[root@backup3 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On backup3 the script automatically starts the verify; just run &amp;lt;tt&amp;gt;tw_cli /c2 show alarms&amp;lt;/tt&amp;gt; as instructed to see the results of the verify.&lt;br /&gt;
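When reviewing the alarms, anything above INFO deserves a look. A small filter sketch; &amp;lt;tt&amp;gt;alarm_filter&amp;lt;/tt&amp;gt; is illustrative and tw_cli is not invoked here:&lt;br /&gt;

```shell
# Filter tw_cli alarm output down to WARNING/ERROR entries.
alarm_filter() { grep -E 'WARNING|ERROR'; }

# Sample lines in the tw_cli alarm format used on these controllers:
printf '%s\n' \
  'c0   [Sat May 26 05:25:09 2012]  WARNING   Verify fixed data/parity mismatch: unit=0' \
  'c0   [Sat May 26 22:00:30 2012]  INFO      Verify completed: unit=0' | alarm_filter
```

In practice you&#039;d feed it the live output: &amp;lt;tt&amp;gt;tw_cli /c2 show alarms&amp;lt;/tt&amp;gt; piped into the filter.&lt;br /&gt;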
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@backup1 /data/deprecated]# sh /root/verify.sh&lt;br /&gt;
/c0 Driver Version = 2.26.02.010&lt;br /&gt;
/c0 Model = 9650SE-8LPML&lt;br /&gt;
/c0 Available Memory = 224MB&lt;br /&gt;
/c0 Firmware Version = FE9X 4.06.00.004&lt;br /&gt;
/c0 Bios Version = BE9X 4.05.00.015&lt;br /&gt;
/c0 Boot Loader Version = BL9X 3.08.00.001&lt;br /&gt;
/c0 Serial Number = L326025A8270177&lt;br /&gt;
/c0 PCB Version = Rev 032&lt;br /&gt;
/c0 PCHIP Version = 2.00&lt;br /&gt;
/c0 ACHIP Version = 1.90&lt;br /&gt;
/c0 Number of Ports = 8&lt;br /&gt;
/c0 Number of Drives = 6&lt;br /&gt;
/c0 Number of Units = 1&lt;br /&gt;
/c0 Total Optimal Units = 1&lt;br /&gt;
/c0 Not Optimal Units = 0&lt;br /&gt;
/c0 JBOD Export Policy = off&lt;br /&gt;
/c0 Disk Spinup Policy = 1&lt;br /&gt;
/c0 Spinup Stagger Time Policy (sec) = 1&lt;br /&gt;
/c0 Auto-Carving Policy = off&lt;br /&gt;
/c0 Auto-Carving Size = 2048 GB&lt;br /&gt;
/c0 Auto-Rebuild Policy = on&lt;br /&gt;
/c0 Controller Bus Type = PCIe&lt;br /&gt;
/c0 Controller Bus Width = 1 lane&lt;br /&gt;
/c0 Controller Bus Speed = 2.5 Gbps/lane&lt;br /&gt;
&lt;br /&gt;
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u0    RAID-5    OK             -       -       64K     4656.56   ON     ON&lt;br /&gt;
&lt;br /&gt;
Port   Status           Unit   Size        Blocks        Serial&lt;br /&gt;
---------------------------------------------------------------&lt;br /&gt;
p0     OK               u0     931.51 GB   1953525168    9QJ1Y017&lt;br /&gt;
p1     DEVICE-ERROR     u0     931.51 GB   1953525168    9QJ1ZN07&lt;br /&gt;
p2     OK               u0     931.51 GB   1953525168    9QJ2XK1R&lt;br /&gt;
p3     OK               u0     931.51 GB   1953525168    9QJ2010B&lt;br /&gt;
p4     OK               u0     1.36 TB     2930277168    6XW0L36T&lt;br /&gt;
p5     OK               u0     931.51 GB   1953525168    WD-WMATV2444836&lt;br /&gt;
p6     NOT-PRESENT      -      -           -             -&lt;br /&gt;
p7     NOT-PRESENT      -      -           -             -&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ctl  Date                        Severity  Alarm Message&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
c0   [Sat May 12 11:27:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x6AE571C&lt;br /&gt;
c0   [Sat May 12 19:16:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x40E62A23&lt;br /&gt;
c0   [Sat May 12 21:40:56 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon May 14 00:53:53 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFA7&lt;br /&gt;
c0   [Mon May 14 00:58:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAA&lt;br /&gt;
c0   [Mon May 14 04:35:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x8FEF2CF&lt;br /&gt;
c0   [Mon May 14 04:38:22 2012]  WARNING   Sector repair completed: port=0, LBA=0x8FEF2D1&lt;br /&gt;
c0   [Tue May 15 22:53:46 2012]  WARNING   Sector repair completed: port=0, LBA=0x13C2622&lt;br /&gt;
c0   [Wed May 16 00:39:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A67F&lt;br /&gt;
c0   [Wed May 16 00:39:37 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A685&lt;br /&gt;
c0   [Wed May 16 00:47:18 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A687&lt;br /&gt;
c0   [Sat May 19 00:01:44 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat May 19 04:46:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A68E&lt;br /&gt;
c0   [Sat May 19 13:37:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAC&lt;br /&gt;
c0   [Sat May 19 13:37:28 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAE&lt;br /&gt;
c0   [Sat May 19 13:37:47 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFB1&lt;br /&gt;
c0   [Sat May 19 13:38:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFB3&lt;br /&gt;
c0   [Sat May 19 21:47:45 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed May 23 12:21:41 2012]  INFO      Cache synchronization completed: unit=0&lt;br /&gt;
c0   [Fri May 25 00:08:19 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA76C&lt;br /&gt;
c0   [Fri May 25 00:08:34 2012]  WARNING   Sector repair completed: port=0, LBA=0x12E4901&lt;br /&gt;
c0   [Fri May 25 00:09:33 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA773&lt;br /&gt;
c0   [Fri May 25 00:39:12 2012]  WARNING   Sector repair completed: port=0, LBA=0x42C597B&lt;br /&gt;
c0   [Sat May 26 00:01:45 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat May 26 00:42:05 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1AC&lt;br /&gt;
c0   [Sat May 26 00:51:43 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1AE&lt;br /&gt;
c0   [Sat May 26 01:54:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x2F0D302&lt;br /&gt;
c0   [Sat May 26 02:06:38 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA777&lt;br /&gt;
c0   [Sat May 26 02:07:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x12E48FE&lt;br /&gt;
c0   [Sat May 26 04:20:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x2F0D306&lt;br /&gt;
c0   [Sat May 26 04:32:58 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1B1&lt;br /&gt;
c0   [Sat May 26 04:33:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1B3&lt;br /&gt;
c0   [Sat May 26 04:33:44 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1BA&lt;br /&gt;
c0   [Sat May 26 05:24:07 2012]  WARNING   Sector repair completed: port=1, LBA=0x3F83862&lt;br /&gt;
c0   [Sat May 26 05:25:09 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 06:08:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x4CDC6A2&lt;br /&gt;
c0   [Sat May 26 09:49:35 2012]  WARNING   Sector repair completed: port=1, LBA=0x6CACD4A&lt;br /&gt;
c0   [Sat May 26 18:10:44 2012]  WARNING   Sector repair completed: port=1, LBA=0x18F425EA&lt;br /&gt;
c0   [Sat May 26 19:45:40 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:22:52 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:15 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:22 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:35 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:41 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:49 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:57 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:24:02 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:54:41 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 22:00:30 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Jun  2 00:01:43 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun  2 00:30:17 2012]  WARNING   Sector repair completed: port=0, LBA=0x2B911E4&lt;br /&gt;
c0   [Sat Jun  2 00:50:57 2012]  WARNING   Sector repair completed: port=0, LBA=0x5A807CA6&lt;br /&gt;
c0   [Sat Jun  2 04:13:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x2D18291&lt;br /&gt;
c0   [Sat Jun  2 04:13:35 2012]  WARNING   Sector repair completed: port=0, LBA=0x2D1829F&lt;br /&gt;
c0   [Sat Jun  2 21:48:02 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Jun  4 04:40:34 2012]  WARNING   Sector repair completed: port=1, LBA=0x4AF8098F&lt;br /&gt;
c0   [Tue Jun  5 00:28:19 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5CD&lt;br /&gt;
c0   [Tue Jun  5 00:33:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5CF&lt;br /&gt;
c0   [Thu Jun  7 00:34:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B5F&lt;br /&gt;
c0   [Thu Jun  7 00:38:50 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B61&lt;br /&gt;
c0   [Fri Jun  8 00:07:13 2012]  WARNING   Sector repair completed: port=0, LBA=0xC131F6B&lt;br /&gt;
c0   [Sat Jun  9 00:01:41 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun  9 00:29:11 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C7614D&lt;br /&gt;
c0   [Sat Jun  9 00:38:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C76152&lt;br /&gt;
c0   [Sat Jun  9 04:02:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5D1&lt;br /&gt;
c0   [Sat Jun  9 04:02:52 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5D3&lt;br /&gt;
c0   [Sat Jun  9 04:07:32 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E12&lt;br /&gt;
c0   [Sat Jun  9 04:07:57 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E15&lt;br /&gt;
c0   [Sat Jun  9 04:08:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E17&lt;br /&gt;
c0   [Sat Jun  9 04:08:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E19&lt;br /&gt;
c0   [Sat Jun  9 04:15:04 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B64&lt;br /&gt;
c0   [Sat Jun  9 04:15:26 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B66&lt;br /&gt;
c0   [Sat Jun  9 04:15:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B68&lt;br /&gt;
c0   [Sat Jun  9 04:15:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B6C&lt;br /&gt;
c0   [Sat Jun  9 04:16:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B6E&lt;br /&gt;
c0   [Sat Jun  9 21:48:52 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Jun 14 00:40:10 2012]  WARNING   Sector repair completed: port=0, LBA=0x334F14B&lt;br /&gt;
c0   [Sat Jun 16 00:01:38 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 16 21:16:19 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Jun 19 02:03:43 2012]  WARNING   Sector repair completed: port=1, LBA=0xFE41EAD&lt;br /&gt;
c0   [Wed Jun 20 02:30:02 2012]  WARNING   Sector repair completed: port=1, LBA=0xD99145C&lt;br /&gt;
c0   [Sat Jun 23 00:01:36 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 23 04:27:04 2012]  WARNING   Sector repair completed: port=1, LBA=0x2FAD311&lt;br /&gt;
c0   [Sat Jun 23 06:52:38 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC8D&lt;br /&gt;
c0   [Sat Jun 23 06:53:03 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC91&lt;br /&gt;
c0   [Sat Jun 23 06:53:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC94&lt;br /&gt;
c0   [Sat Jun 23 17:00:22 2012]  WARNING   Sector repair completed: port=1, LBA=0xF9AC7C9&lt;br /&gt;
c0   [Sat Jun 23 21:15:19 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Jun 30 00:01:34 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 30 05:24:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FAA9E7&lt;br /&gt;
c0   [Sat Jun 30 14:49:39 2012]  WARNING   Sector repair completed: port=1, LBA=0x869931C&lt;br /&gt;
c0   [Sat Jun 30 21:31:05 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Jul  3 03:40:25 2012]  WARNING   Sector repair completed: port=1, LBA=0xD36C7F7&lt;br /&gt;
c0   [Fri Jul  6 02:50:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x3562470&lt;br /&gt;
c0   [Fri Jul  6 22:18:26 2012]  WARNING   Sector repair completed: port=1, LBA=0x3563173&lt;br /&gt;
c0   [Sat Jul  7 00:01:31 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul  7 00:50:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x76EE88&lt;br /&gt;
c0   [Sat Jul  7 00:50:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x76EE8F&lt;br /&gt;
c0   [Sat Jul  7 21:39:36 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sun Jul  8 02:51:05 2012]  WARNING   Sector repair completed: port=0, LBA=0x67759D&lt;br /&gt;
c0   [Sun Jul  8 02:53:55 2012]  WARNING   Sector repair completed: port=0, LBA=0x67759B&lt;br /&gt;
c0   [Tue Jul 10 16:17:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x15C8C695&lt;br /&gt;
c0   [Wed Jul 11 22:51:22 2012]  WARNING   Sector repair completed: port=1, LBA=0x355BBD0&lt;br /&gt;
c0   [Sat Jul 14 00:01:28 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 14 01:33:40 2012]  WARNING   Sector repair completed: port=1, LBA=0x1333BCF4&lt;br /&gt;
c0   [Sat Jul 14 03:36:23 2012]  WARNING   Sector repair completed: port=1, LBA=0x2174773&lt;br /&gt;
c0   [Sat Jul 14 11:26:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x7429AB7&lt;br /&gt;
c0   [Sat Jul 14 16:53:50 2012]  WARNING   Sector repair completed: port=1, LBA=0xA17EB3F&lt;br /&gt;
c0   [Sat Jul 14 21:19:25 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Jul 18 05:08:47 2012]  WARNING   Sector repair completed: port=1, LBA=0x17D62EDC&lt;br /&gt;
c0   [Wed Jul 18 05:14:15 2012]  WARNING   Sector repair completed: port=1, LBA=0x17D62EE1&lt;br /&gt;
c0   [Thu Jul 19 03:24:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x7733C3D&lt;br /&gt;
c0   [Thu Jul 19 03:25:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x773CEA5&lt;br /&gt;
c0   [Thu Jul 19 03:28:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x7733C42&lt;br /&gt;
c0   [Thu Jul 19 03:28:41 2012]  WARNING   Sector repair completed: port=0, LBA=0x773CEAF&lt;br /&gt;
c0   [Sat Jul 21 00:01:26 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 21 03:07:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC6936&lt;br /&gt;
c0   [Sat Jul 21 03:07:52 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC6938&lt;br /&gt;
c0   [Sat Jul 21 03:08:11 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC693A&lt;br /&gt;
c0   [Sat Jul 21 16:43:56 2012]  WARNING   Sector repair completed: port=0, LBA=0xD04C914&lt;br /&gt;
c0   [Sat Jul 21 16:45:31 2012]  WARNING   Sector repair completed: port=1, LBA=0xD456973&lt;br /&gt;
c0   [Sat Jul 21 21:14:29 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Jul 25 03:37:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x1F8E6C43&lt;br /&gt;
c0   [Sat Jul 28 00:01:24 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 28 01:45:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x11584AD&lt;br /&gt;
c0   [Sat Jul 28 18:54:25 2012]  WARNING   Sector repair completed: port=1, LBA=0x447C3E6C&lt;br /&gt;
c0   [Sat Jul 28 21:13:46 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug  1 03:20:11 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FEF&lt;br /&gt;
c0   [Fri Aug  3 00:50:03 2012]  WARNING   Sector repair completed: port=0, LBA=0xCED0ACA&lt;br /&gt;
c0   [Sat Aug  4 00:01:22 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug  4 00:52:51 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF3&lt;br /&gt;
c0   [Sat Aug  4 00:53:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF5&lt;br /&gt;
c0   [Sat Aug  4 00:53:33 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF7&lt;br /&gt;
c0   [Sat Aug  4 00:53:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF9&lt;br /&gt;
c0   [Sat Aug  4 00:54:00 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFB&lt;br /&gt;
c0   [Sat Aug  4 00:54:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFD&lt;br /&gt;
c0   [Sat Aug  4 00:54:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFF&lt;br /&gt;
c0   [Sat Aug  4 04:43:12 2012]  WARNING   Sector repair completed: port=1, LBA=0x16974289&lt;br /&gt;
c0   [Sat Aug  4 04:58:17 2012]  WARNING   Sector repair completed: port=1, LBA=0x1697428E&lt;br /&gt;
c0   [Sat Aug  4 20:54:53 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug  8 03:21:55 2012]  ERROR     Drive timeout detected: port=1&lt;br /&gt;
c0   [Wed Aug  8 15:31:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x1A366CD3&lt;br /&gt;
c0   [Sat Aug 11 00:01:21 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 11 20:40:51 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Aug 16 05:10:55 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C22593&lt;br /&gt;
c0   [Sat Aug 18 00:01:18 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 18 03:00:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C225A5&lt;br /&gt;
c0   [Sat Aug 18 03:43:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x23EE91E&lt;br /&gt;
c0   [Sat Aug 18 03:43:23 2012]  WARNING   Sector repair completed: port=1, LBA=0x23EE920&lt;br /&gt;
c0   [Sat Aug 18 17:00:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x137D066A&lt;br /&gt;
c0   [Sat Aug 18 17:00:29 2012]  WARNING   Sector repair completed: port=1, LBA=0x137D066D&lt;br /&gt;
c0   [Sat Aug 18 21:13:01 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug 22 01:36:08 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A0F&lt;br /&gt;
c0   [Wed Aug 22 01:37:42 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A13&lt;br /&gt;
c0   [Fri Aug 24 04:01:36 2012]  WARNING   Sector repair completed: port=1, LBA=0x55C1A5DF&lt;br /&gt;
c0   [Fri Aug 24 05:02:06 2012]  WARNING   Sector repair completed: port=1, LBA=0xCE3378A&lt;br /&gt;
c0   [Sat Aug 25 00:01:17 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 25 00:31:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x50F65D&lt;br /&gt;
c0   [Sat Aug 25 00:39:52 2012]  WARNING   Sector repair completed: port=0, LBA=0x678FF4&lt;br /&gt;
c0   [Sat Aug 25 03:43:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A15&lt;br /&gt;
c0   [Sat Aug 25 03:43:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A19&lt;br /&gt;
c0   [Sat Aug 25 03:43:58 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A1B&lt;br /&gt;
c0   [Sat Aug 25 03:44:30 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A21&lt;br /&gt;
c0   [Sat Aug 25 20:58:14 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug 29 04:57:15 2012]  WARNING   Sector repair completed: port=1, LBA=0xF3957EB&lt;br /&gt;
c0   [Sat Sep  1 00:01:15 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep  1 03:21:52 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC86&lt;br /&gt;
c0   [Sat Sep  1 03:22:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC88&lt;br /&gt;
c0   [Sat Sep  1 03:22:34 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC8A&lt;br /&gt;
c0   [Sat Sep  1 03:22:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC8C&lt;br /&gt;
c0   [Sat Sep  1 17:17:22 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD1&lt;br /&gt;
c0   [Sat Sep  1 17:17:45 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD3&lt;br /&gt;
c0   [Sat Sep  1 17:18:04 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD5&lt;br /&gt;
c0   [Sat Sep  1 21:36:56 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Sep  6 00:07:30 2012]  WARNING   Sector repair completed: port=0, LBA=0xDA3C64B&lt;br /&gt;
c0   [Thu Sep  6 00:32:56 2012]  WARNING   Sector repair completed: port=1, LBA=0x6BBA816&lt;br /&gt;
c0   [Sat Sep  8 00:01:13 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep  8 00:09:56 2012]  WARNING   Sector repair completed: port=0, LBA=0xDEBC958&lt;br /&gt;
c0   [Sat Sep  8 04:38:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x38D254F&lt;br /&gt;
c0   [Sat Sep  8 20:44:50 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Sep 10 01:26:34 2012]  WARNING   Sector repair completed: port=1, LBA=0xFFD8D5E&lt;br /&gt;
c0   [Wed Sep 12 00:33:48 2012]  WARNING   Sector repair completed: port=1, LBA=0xE8DB928&lt;br /&gt;
c0   [Wed Sep 12 00:36:33 2012]  WARNING   Sector repair completed: port=1, LBA=0x6D49411&lt;br /&gt;
c0   [Fri Sep 14 01:59:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x1467F1C&lt;br /&gt;
c0   [Fri Sep 14 02:08:27 2012]  WARNING   Sector repair completed: port=1, LBA=0x14C8ABD&lt;br /&gt;
c0   [Fri Sep 14 03:54:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x1580C915&lt;br /&gt;
c0   [Sat Sep 15 00:01:11 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 15 02:38:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178973&lt;br /&gt;
c0   [Sat Sep 15 02:59:02 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178975&lt;br /&gt;
c0   [Sat Sep 15 04:47:08 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA0356&lt;br /&gt;
c0   [Sat Sep 15 04:47:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA0359&lt;br /&gt;
c0   [Sat Sep 15 10:41:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x6DFD1EC&lt;br /&gt;
c0   [Sat Sep 15 13:25:23 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD100&lt;br /&gt;
c0   [Sat Sep 15 13:25:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD104&lt;br /&gt;
c0   [Sat Sep 15 13:25:54 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD106&lt;br /&gt;
c0   [Sat Sep 15 17:10:50 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178977&lt;br /&gt;
c0   [Sat Sep 15 20:59:57 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Sep 18 01:17:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x803B05B&lt;br /&gt;
c0   [Sat Sep 22 00:01:10 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 22 20:54:31 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Sep 25 01:56:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x26E3909&lt;br /&gt;
c0   [Sat Sep 29 00:01:08 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 29 02:04:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x146AC03&lt;br /&gt;
c0   [Sat Sep 29 10:58:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x6D4EB0E&lt;br /&gt;
c0   [Sat Sep 29 10:59:02 2012]  WARNING   Sector repair completed: port=0, LBA=0x6D4EB14&lt;br /&gt;
c0   [Sat Sep 29 11:22:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x6F79623&lt;br /&gt;
c0   [Sat Sep 29 13:50:48 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D65E&lt;br /&gt;
c0   [Sat Sep 29 13:51:11 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D661&lt;br /&gt;
c0   [Sat Sep 29 13:51:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D663&lt;br /&gt;
c0   [Sat Sep 29 20:57:34 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Oct  1 04:47:24 2012]  WARNING   Sector repair completed: port=0, LBA=0xC5BC6F2&lt;br /&gt;
c0   [Tue Oct  2 02:00:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x1547667&lt;br /&gt;
c0   [Tue Oct  2 02:01:56 2012]  WARNING   Sector repair completed: port=0, LBA=0x154766F&lt;br /&gt;
c0   [Tue Oct  2 05:02:31 2012]  WARNING   Sector repair completed: port=1, LBA=0xD67D054&lt;br /&gt;
c0   [Tue Oct  2 05:04:14 2012]  WARNING   Sector repair completed: port=1, LBA=0xD67D056&lt;br /&gt;
c0   [Wed Oct  3 01:22:12 2012]  WARNING   Sector repair completed: port=1, LBA=0x12AAF8CA&lt;br /&gt;
c0   [Thu Oct  4 04:29:22 2012]  WARNING   Sector repair completed: port=0, LBA=0x13E6F992&lt;br /&gt;
c0   [Thu Oct  4 05:10:51 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C252A4&lt;br /&gt;
c0   [Sat Oct  6 00:01:07 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct  6 19:41:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x5A5C3AE8&lt;br /&gt;
c0   [Sat Oct  6 21:01:05 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Oct  8 00:32:06 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C60D3E&lt;br /&gt;
c0   [Tue Oct  9 03:51:03 2012]  WARNING   Sector repair completed: port=1, LBA=0x89B5EC9&lt;br /&gt;
c0   [Thu Oct 11 04:21:17 2012]  WARNING   Sector repair completed: port=1, LBA=0x13F85833&lt;br /&gt;
c0   [Sat Oct 13 00:01:05 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 13 05:12:40 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA5134&lt;br /&gt;
c0   [Sat Oct 13 21:08:35 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Oct 16 03:53:50 2012]  WARNING   Sector repair completed: port=1, LBA=0x148AA1BD&lt;br /&gt;
c0   [Thu Oct 18 03:20:30 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C8DABCB&lt;br /&gt;
c0   [Thu Oct 18 04:52:50 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879057&lt;br /&gt;
c0   [Sat Oct 20 00:01:04 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 20 02:19:25 2012]  WARNING   Sector repair completed: port=1, LBA=0x174B012&lt;br /&gt;
c0   [Sat Oct 20 03:41:38 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D93B&lt;br /&gt;
c0   [Sat Oct 20 03:42:01 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D93D&lt;br /&gt;
c0   [Sat Oct 20 03:42:40 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D940&lt;br /&gt;
c0   [Sat Oct 20 03:42:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D942&lt;br /&gt;
c0   [Sat Oct 20 03:43:12 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D944&lt;br /&gt;
c0   [Sat Oct 20 03:43:26 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D948&lt;br /&gt;
c0   [Sat Oct 20 16:37:52 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879060&lt;br /&gt;
c0   [Sat Oct 20 16:38:15 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879062&lt;br /&gt;
c0   [Sat Oct 20 21:00:18 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Oct 20 23:49:01 2012]  WARNING   Sector repair completed: port=1, LBA=0x4473E908&lt;br /&gt;
c0   [Sun Oct 21 03:42:26 2012]  WARNING   Sector repair completed: port=1, LBA=0x175BADD5&lt;br /&gt;
c0   [Tue Oct 23 01:09:04 2012]  WARNING   Sector repair completed: port=1, LBA=0x6E524860&lt;br /&gt;
c0   [Fri Oct 26 03:21:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x802C61&lt;br /&gt;
c0   [Fri Oct 26 04:22:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x176353CD&lt;br /&gt;
c0   [Sat Oct 27 00:01:03 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 27 00:49:35 2012]  WARNING   Sector repair completed: port=0, LBA=0x802C65&lt;br /&gt;
c0   [Sat Oct 27 17:02:24 2012]  WARNING   Sector repair completed: port=1, LBA=0xC1FF26D&lt;br /&gt;
c0   [Sat Oct 27 17:09:06 2012]  WARNING   Sector repair completed: port=0, LBA=0xDF621AD&lt;br /&gt;
c0   [Sat Oct 27 21:30:57 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Oct 30 00:20:46 2012]  WARNING   Sector repair completed: port=0, LBA=0xE9FE2AB&lt;br /&gt;
c0   [Wed Oct 31 02:02:03 2012]  WARNING   Sector repair completed: port=0, LBA=0x1460C25&lt;br /&gt;
c0   [Wed Oct 31 02:04:05 2012]  WARNING   Sector repair completed: port=0, LBA=0x1460C28&lt;br /&gt;
c0   [Thu Nov  1 00:48:34 2012]  WARNING   Sector repair completed: port=1, LBA=0xA7C92BE&lt;br /&gt;
c0   [Thu Nov  1 05:04:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C252C2&lt;br /&gt;
&lt;br /&gt;
[root@backup1 /data/deprecated]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for failed drives and errors. From the log above we should probably replace both drives 0 and 1: both are steadily accumulating sector repairs, and drive 1 has even logged a timeout error, yet the RAID array still reports healthy, amazingly. You can also see the automatic weekly verifies.&lt;br /&gt;
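Eyeballing hundreds of repair lines is error-prone; a quick tally per port (count_repairs is just an ad-hoc helper for saved log output, not part of tw_cli) makes it obvious which drive is degrading fastest:&lt;br /&gt;

```shell
# Tally 3ware "Sector repair completed" warnings per port from a saved log file.
# count_repairs is a local helper, not a tw_cli command.
count_repairs() {
    grep -o 'Sector repair completed: port=[0-9]*' "$1" | sort | uniq -c
}
```

Feed it a file of controller output and the count column shows repairs per port.&lt;br /&gt;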
&lt;br /&gt;
Note: when rebuilding a degraded mirror, the CLI will show no rebuild progress:&lt;br /&gt;
&amp;lt;pre&amp;gt;Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u0    RAID-5    REBUILDING     0       -       64K     4656.56   OFF    ON&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== areca ===&lt;br /&gt;
&lt;br /&gt;
We are using an areca controller on backup3. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@newbackup3 ~]# sh /root/verify.sh&lt;br /&gt;
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State&lt;br /&gt;
===============================================================================&lt;br /&gt;
  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(19.7%)&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
 #  Name             Disks TotalCap  FreeCap DiskChannels       State&lt;br /&gt;
===============================================================================&lt;br /&gt;
 1  Raid Set # 00        6 6000.0GB    0.0GB 123456             Checking&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
Date-Time            Device           Event Type            Elapsed Time Errors&lt;br /&gt;
===============================================================================&lt;br /&gt;
2012-12-05 20:40:58  ARC-1160-VOL#00  Start Checking&lt;br /&gt;
2012-12-01 05:06:04  ARC-1160-VOL#00  Complete Init         027:30:45&lt;br /&gt;
2012-11-30 01:35:19  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-06 01:34:52  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-30 01:33:36  ARC-1160-VOL#00  Stop Initialization   000:31:48&lt;br /&gt;
2012-11-30 01:01:47  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-06 00:58:13  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-30 00:57:26  ARC-1160-VOL#00  Stop Initialization   000:57:07&lt;br /&gt;
2012-11-30 00:00:19  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 23:56:48  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:52:58  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:50:14  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:43:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 23:10:07  ARC-1160-VOL#00  Stop Initialization   000:00:56&lt;br /&gt;
2012-11-29 23:09:11  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 23:08:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 23:08:10  ARC-1160-VOL#00  Stop Initialization   000:20:41&lt;br /&gt;
2012-11-29 22:47:29  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 22:46:59  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:45:55  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:44:53  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:42:06  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:40:50  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 22:40:04  ARC-1160-VOL#00  Stop Initialization   000:24:25&lt;br /&gt;
2012-11-29 22:15:38  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 22:15:11  000:000001215B00 Restart Init LBA Point&lt;br /&gt;
2026-08-05 22:15:10  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 21:56:38  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 21:56:12  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 21:56:04  IDE Channel #03  Device Inserted&lt;br /&gt;
2012-11-29 21:55:13  IDE Channel #04  Device Inserted&lt;br /&gt;
2012-11-29 21:55:03  IDE Channel #02  Device Inserted&lt;br /&gt;
2026-08-05 21:53:09  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:51:46  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:49:56  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:48:29  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:46:29  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:44:49  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:43:01  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:36:25  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:31:18  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:30:08  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:08:40  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:06:11  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:05:14  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:03:58  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:00:56  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:57:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:56:15  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:55:05  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 17:24:36  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 17:22:43  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:50:42  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:47:33  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:43:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:18:52  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:17:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:13:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:10:26  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:09:23  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:08:09  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:07:12  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:05:51  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:04:27  H/W MONITOR      Raid Powered On&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for failed drives and errors. &lt;br /&gt;
&lt;br /&gt;
When it proceeds to verifying, you can confirm with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@newbackup3 ~]# cli64 vsf info&lt;br /&gt;
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State&lt;br /&gt;
===============================================================================&lt;br /&gt;
  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(22.5%)&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
[root@newbackup3 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
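If you want to poll the verify from a script, the percentage can be scraped from the cli64 output; check_progress below is an ad-hoc sketch, not a cli64 feature:&lt;br /&gt;

```shell
# Extract the check percentage from `cli64 vsf info` output on stdin.
# check_progress is a local helper, not a cli64 option.
check_progress() {
    sed -n 's/.*Checking(\([0-9.]*\)%).*/\1/p' | head -1
}
# usage: cli64 vsf info | check_progress
```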
&lt;br /&gt;
== Update OS list ==&lt;br /&gt;
&lt;br /&gt;
# check for any new VZ templates we want to offer: &amp;lt;tt&amp;gt;vzup2date -z&amp;lt;/tt&amp;gt;&lt;br /&gt;
# see if there&#039;s any OS&#039;s we want to include in our colo install list. Update 2 places: &amp;lt;tt&amp;gt;signup/html/colo_quote.html&amp;lt;/tt&amp;gt; &amp;amp; &amp;lt;tt&amp;gt;signup/html/step1.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
# update the mgmt database (ref_templates table, ref_systems table).&lt;br /&gt;
&lt;br /&gt;
= Infrequent tasks =&lt;br /&gt;
&lt;br /&gt;
== Free up space on gateway ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;newgateway /var/spool# cd clientmqueue/&lt;br /&gt;
newgateway /var/spool/clientmqueue# sh&lt;br /&gt;
# for f in `ls`; do rm $f; done&lt;br /&gt;
exit&amp;lt;/pre&amp;gt;&lt;br /&gt;
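An equivalent one-pass cleanup can be sketched with find (clean_spool is a hypothetical helper; assumes GNU find for -mindepth/-delete). It avoids spawning one rm per file and handles queues too large for a single command line:&lt;br /&gt;

```shell
# Delete every regular file under the given spool directory in one pass.
# clean_spool is illustrative, not an existing script on the gateway.
clean_spool() {
    # -mindepth 1 protects the directory itself; -delete removes each match
    find "$1" -mindepth 1 -type f -delete
}
# usage: clean_spool /var/spool/clientmqueue
```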
&lt;br /&gt;
== Free up space on mail ==&lt;br /&gt;
&lt;br /&gt;
You can clear out root mail:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /var/log# ll -h /var/mail/root&lt;br /&gt;
-rw-------  1 root  mail    543K Dec 19 13:05 /var/mail/root&lt;br /&gt;
mail /var/log# rm /var/mail/root&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
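A gentler alternative (not what the transcript above does, just an option) is to truncate the mbox rather than remove it, so its ownership and permissions survive:&lt;br /&gt;

```shell
# Truncate root's mbox to zero bytes instead of deleting it; the file,
# its owner, and its mode all stay in place.
: > /var/mail/root
```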
&lt;br /&gt;
Or you can archive mail logs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /var/log# ls -l htt*&lt;br /&gt;
-rw-r--r--  1 root  wheel  297436931 Dec 19 13:26 httpd-access.log&lt;br /&gt;
-rw-r--r--  1 root  wheel    9824324 Jul  4 11:34 httpd-access.log.old.0.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    6884137 Mar 17  2012 httpd-access.log.old.1.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   18557444 Dec  3  2009 httpd-access.log.old.10.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14740263 Jan  9  2007 httpd-access.log.old.11.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14209465 Nov 28  2007 httpd-access.log.old.12.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   16874396 Feb 19  2012 httpd-access.log.old.3.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14554859 Jul 22  2011 httpd-access.log.old.4.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10513227 Feb 18  2011 httpd-access.log.old.5.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    7201946 Oct 29  2010 httpd-access.log.old.6.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10062537 May  6  2010 httpd-access.log.old.7.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10157042 Aug 12  2010 httpd-access.log.old.8.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   11909534 Mar  4  2010 httpd-access.log.old.9.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   59030930 Dec 19 13:01 httpd-error.log&lt;br /&gt;
-rw-r--r--  1 root  wheel    3413134 Mar  4  2010 httpd-error.log.0.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel     795515 May  1  2007 httpd-error.log.1.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    1142153 Nov 30  2007 httpd-error.log.2.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    2325801 Feb 18  2011 httpd-error.log.gz&lt;br /&gt;
&lt;br /&gt;
mail /var/log# sh&lt;br /&gt;
# for f in 12 11 10 9 8 7 6 5 4 3 2 1 0; do g=`echo $f+1|bc`; mv httpd-access.log.old.$f.gz httpd-access.log.old.$g.gz; done&lt;br /&gt;
# mv httpd-access.log httpd-access.log.old.0&lt;br /&gt;
# touch httpd-access.log&lt;br /&gt;
# apachectl restart&lt;br /&gt;
# gzip httpd-access.log.old.0&lt;br /&gt;
&lt;br /&gt;
# for f in 2 1 0; do g=`echo $f+1|bc`; mv httpd-error.log.$f.gz httpd-error.log.$g.gz; done&lt;br /&gt;
# mv httpd-error.log httpd-error.log.0&lt;br /&gt;
# touch httpd-error.log&lt;br /&gt;
# apachectl restart&lt;br /&gt;
# gzip httpd-error.log.0&lt;br /&gt;
# exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
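The bc-based renumbering loops above can be wrapped in a small function; rotate_gz below is a hypothetical sketch of the same idea, moving the highest numbers first so nothing gets overwritten:&lt;br /&gt;

```shell
# Shift numbered gzip archives up by one: base.N.gz -> base.N+1.gz.
# Processing highest numbers first means no file is clobbered.
# rotate_gz is illustrative, not an existing script on mail.
rotate_gz() {
    base=$1
    for f in $(ls "$base".*.gz 2>/dev/null | sed 's/^.*\.\([0-9]*\)\.gz$/\1/' | sort -rn); do
        mv "$base.$f.gz" "$base.$((f + 1)).gz"
    done
}
# usage: rotate_gz httpd-access.log.old
```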
&lt;br /&gt;
== Free up space on bwdb2 ==&lt;br /&gt;
&lt;br /&gt;
You can either remove items from &amp;lt;tt&amp;gt;/usr/home/archive&amp;lt;/tt&amp;gt; or you can scp them to backup3:/data/bwdb2/archive .&lt;br /&gt;
&lt;br /&gt;
== Free up space on backup1 ==&lt;br /&gt;
&lt;br /&gt;
backup1 is our primary customer backup system. As usage grows over time, it needs to be purged regularly of old files. The easiest way to do this is by removing deprecated files, which mostly consist of cancelled customers or temporary dump/storage files (created during dump/restores). Our standard policy is to keep cancelled customers&#039; files for 6 months, after which we remove them (as far as customers know their data is purged immediately, but we hang onto it just in case; sometimes we cancel a server for non-payment, and keeping the files makes it easy to restore the system). To find files to remove:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 ~]# cd /data/deprecated/&lt;br /&gt;
[root@backup1 /data/deprecated]# ls&lt;br /&gt;
2101-migrated-20120317.tgz                old-683-cxld-20121021.tgz&lt;br /&gt;
69.55.230.2-wwwbackup                     old-744-cxld-20120708.tgz&lt;br /&gt;
991-DONTDELETE.tgz                        old-809-cxld-20120609.tgz&lt;br /&gt;
archive-col02050-mdfile-cxld-20120409.gz  old-854-cxld-20120621.tgz&lt;br /&gt;
col01371.tgz                              old-931-cxld-20060513.tgz&lt;br /&gt;
deleteme_ubuntu-10.10-x86_20111205        old-col00123-mdfile-noarchive-20120417.gz&lt;br /&gt;
jail10_old                                old-col00147-vnfile-cxld-20120828.gz&lt;br /&gt;
jail14_rsync_old                          old-col00419-dump-cxld-20120224.gz&lt;br /&gt;
jail15_old                                old-col01098-vnfile-cxld-20120827.gz&lt;br /&gt;
jail3_old                                 old-col01278-dump-cxld-20120822&lt;br /&gt;
jail4_old                                 old-col01517-dump-cxld-20120828&lt;br /&gt;
jail5_old                                 old-col01669-dump-cxld-20120203.gz&lt;br /&gt;
old-1009-cxld-20120608.tgz                old-col01687-dump-cxld-20120909&lt;br /&gt;
old-1012-cxld-20120411.tgz                old-col01790-dump-cxld-20120828&lt;br /&gt;
old-1052-cxld-20120721.tgz                old-col01812-dump-cxld-20120820&lt;br /&gt;
old-10631-cxld-20120622.tgz               old-col01938-mdfile-cxld-20120619.gz&lt;br /&gt;
old-10632-cxld-20120622.tgz               old-col02095-mdfile-noarchive-20120523.gz&lt;br /&gt;
old-10633-cxld-20120622.tgz               olddebian-3.0-v15-20110610.tgz&lt;br /&gt;
old-1236-cxld-20120621.tgz                oldmod_frontpage-deb30-v15-20110610.tgz&lt;br /&gt;
old-1381-cxld-20120404.tgz                oldmod_perl-deb30-v15-20110610.tgz&lt;br /&gt;
old-1422-cxld-20120721.tgz                oldmod_ssl-deb30-v15-20110610.tgz&lt;br /&gt;
old-14681-cxld-20120619.tgz               oldmysql-deb30-v15-20110610.tgz&lt;br /&gt;
old-1544-cxld-20120626.tgz                oldproftpd-deb30-v15-20110610.tgz&lt;br /&gt;
old-18351-cxld-20120605.tgz               old_virt14&lt;br /&gt;
old-1853-cxld-20120910.tgz                old_virt18&lt;br /&gt;
old-1963-cxld-20120206.tgz                oldwebmin-deb30-v15-20110610.tgz&lt;br /&gt;
old-1967-cxld-20120605.tgz                suse.virt11.20120421.tgz&lt;br /&gt;
old-1981-noarchive-20120729.tgz           virt11&lt;br /&gt;
old-2030-migrated-noarchive-20120727.tgz  virt12_old&lt;br /&gt;
old-2037-cxld-20120716.tgz                virt13_old&lt;br /&gt;
old-2065-cxld-20120727.tgz                virt16_old&lt;br /&gt;
old-2068-cxld-20120424.tgz                virt4_old&lt;br /&gt;
old-2085-cxld-20120531.tgz                virt5_old&lt;br /&gt;
old-364-cxld-20120904.tgz                 virt6_old&lt;br /&gt;
old-446-cxld-20120512.tgz                 virt7_old&lt;br /&gt;
old-613-cxld-20120601.tgz                 virt8_old&lt;br /&gt;
[root@backup1 /data/deprecated]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
virtX_old and jailX_old are permanently archived, so ignore those, along with anything else marked not to delete or otherwise suspicious. Likewise, it&#039;s a good idea to hang onto the oldTEMPLATE.gz files for as long as we can.&lt;br /&gt;
Most of what we want to delete is dated with when it was deprecated, which makes this easy. So, to remove files from 6 months ago (running this in October):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 /data/deprecated]# ls old*201204*&lt;br /&gt;
old-1012-cxld-20120411.tgz  old-2068-cxld-20120424.tgz&lt;br /&gt;
old-1381-cxld-20120404.tgz  old-col00123-mdfile-noarchive-20120417.gz&lt;br /&gt;
[root@backup1 /data/deprecated]# rm old*201204*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
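Rather than working out the month by hand, the glob stamp can be computed (this assumes GNU date; always review the ls output before running the rm):&lt;br /&gt;

```shell
# Build the YYYYMM stamp for six months ago (GNU date relative syntax),
# then preview matching deprecated files before deleting anything.
stamp=$(date -d '6 months ago' +%Y%m)
ls old*"$stamp"* 2>/dev/null || true   # review the list first
# rm old*"$stamp"*                     # then remove
```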
&lt;br /&gt;
Every few months you will also want to remove some of the snapshot archives for mail. We typically save the 1st, 10th, and 20th of each month. To do this, set aside the dates you want to keep, remove a month at a time, then restore the set-aside dates. Here&#039;s how that works:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 /data/www/daily]# ls&lt;br /&gt;
05                     08-10-11  10-04-10  11-10-10  12-07-29  12-09-21  12-11-14&lt;br /&gt;
06                     08-10-21  10-04-20  11-10-20  12-07-30  12-09-22  12-11-15&lt;br /&gt;
06-06-01-usr-home.tgz  08-11-01  10-05-01  11-11-01  12-07-31  12-09-23  12-11-16&lt;br /&gt;
06-07-01-usr-home.tgz  08-11-10  10-05-11  11-11-10  12-08-01  12-09-24  12-11-17&lt;br /&gt;
06-08-01-usr-home.tgz  08-11-20  10-05-20  11-11-20  12-08-02  12-09-25  12-11-18&lt;br /&gt;
06-09-01-usr-home.tgz  08-12-01  10-06-01  11-12-01  12-08-03  12-09-26  12-11-19&lt;br /&gt;
06-11-10               08-12-10  10-06-10  11-12-10  12-08-04  12-09-27  12-11-20&lt;br /&gt;
06-12-21               08-12-20  10-06-20  11-12-20  12-08-05  12-09-28  12-11-21&lt;br /&gt;
07-01-10               09-01-01  10-07-01  12-01-01  12-08-06  12-09-29  12-11-22&lt;br /&gt;
07-01-20               09-01-10  10-07-10  12-01-10  12-08-07  12-09-30  12-11-23&lt;br /&gt;
07-02-10               09-01-20  10-07-20  12-01-20  12-08-08  12-10-01  12-11-24&lt;br /&gt;
07-02-20               09-02-01  10-08-01  12-02-01  12-08-09  12-10-02  12-11-25&lt;br /&gt;
07-03-01               09-02-10  10-08-10  12-02-10  12-08-10  12-10-03  12-11-26&lt;br /&gt;
07-03-20               09-02-20  10-08-20  12-02-20  12-08-11  12-10-04  12-11-27&lt;br /&gt;
07-04-01               09-03-01  10-09-01  12-03-01  12-08-12  12-10-05  12-11-28&lt;br /&gt;
07-04-10               09-03-10  10-09-10  12-03-10  12-08-13  12-10-06  12-11-29&lt;br /&gt;
07-04-20               09-03-20  10-09-20  12-03-20  12-08-14  12-10-07  12-11-30&lt;br /&gt;
07-05-01               09-04-01  10-10-01  12-04-01  12-08-15  12-10-08  12-12-01&lt;br /&gt;
07-05-10               09-04-10  10-10-10  12-04-10  12-08-16  12-10-09  12-12-02&lt;br /&gt;
07-05-20               09-04-20  10-10-20  12-04-20  12-08-17  12-10-10  12-12-03&lt;br /&gt;
07-06-01               09-05-01  10-11-01  12-05-01  12-08-18  12-10-11  12-12-04&lt;br /&gt;
07-06-10               09-05-10  10-11-10  12-05-10  12-08-19  12-10-12  12-12-05&lt;br /&gt;
07-06-20               09-05-20  10-11-20  12-05-20  12-08-20  12-10-13  12-12-06&lt;br /&gt;
07-07-20               09-06-01  10-12-01  12-06-01  12-08-21  12-10-14  12-12-07&lt;br /&gt;
07-08-10               09-06-10  10-12-10  12-06-10  12-08-22  12-10-15  12-12-08&lt;br /&gt;
07-08-20               09-06-20  10-12-20  12-06-20  12-08-23  12-10-16  12-12-09&lt;br /&gt;
07-09-01               09-07-01  11-01-01  12-07-01  12-08-24  12-10-17  12-12-10&lt;br /&gt;
07-10-01               09-07-10  11-01-10  12-07-02  12-08-25  12-10-18  12-12-11&lt;br /&gt;
07-10-10               09-07-20  11-01-21  12-07-03  12-08-26  12-10-19  12-12-12&lt;br /&gt;
07-10-20               09-08-01  11-02-01  12-07-04  12-08-27  12-10-20  12-12-13&lt;br /&gt;
07-12-01               09-08-10  11-02-10  12-07-05  12-08-28  12-10-21  12-12-14&lt;br /&gt;
07-12-10               09-08-20  11-02-20  12-07-06  12-08-29  12-10-22  12-12-15&lt;br /&gt;
08-01-01               09-09-01  11-03-01  12-07-07  12-08-30  12-10-23  12-12-16&lt;br /&gt;
08-01-20               09-09-10  11-03-10  12-07-08  12-08-31  12-10-24  12-12-17&lt;br /&gt;
08-02-20               09-09-20  11-03-20  12-07-09  12-09-01  12-10-25  12-12-18&lt;br /&gt;
08-03-01               09-10-01  11-04-01  12-07-10  12-09-02  12-10-26  12-12-19&lt;br /&gt;
08-03-10               09-10-10  11-04-10  12-07-11  12-09-03  12-10-27  12-12-20&lt;br /&gt;
08-03-20               09-10-20  11-04-20  12-07-12  12-09-04  12-10-28  12-12-21&lt;br /&gt;
08-04-01               09-11-01  11-05-01  12-07-13  12-09-05  12-10-29  12-12-22&lt;br /&gt;
08-04-20               09-11-10  11-05-10  12-07-14  12-09-06  12-10-30  12-12-23&lt;br /&gt;
08-05-01               09-11-20  11-05-20  12-07-15  12-09-07  12-10-31  12-12-24&lt;br /&gt;
08-05-10               09-12-01  11-06-01  12-07-16  12-09-08  12-11-01  12-12-25&lt;br /&gt;
08-06-10               09-12-10  11-06-10  12-07-17  12-09-09  12-11-02  12-12-26&lt;br /&gt;
08-06-20               09-12-20  11-06-20  12-07-18  12-09-10  12-11-03  12-12-27&lt;br /&gt;
08-07-02               10-01-01  11-07-01  12-07-19  12-09-11  12-11-04  12-12-28&lt;br /&gt;
08-07-10               10-01-10  11-07-10  12-07-20  12-09-12  12-11-05  2008-10-23&lt;br /&gt;
08-07-20               10-01-20  11-07-20  12-07-21  12-09-13  12-11-06  bb.tgz&lt;br /&gt;
08-08-01               10-02-01  11-08-01  12-07-22  12-09-14  12-11-07  boot&lt;br /&gt;
08-08-10               10-02-10  11-08-10  12-07-23  12-09-15  12-11-08  current&lt;br /&gt;
08-08-21               10-02-20  11-08-20  12-07-24  12-09-16  12-11-09  hold&lt;br /&gt;
08-09-01               10-03-01  11-09-01  12-07-25  12-09-17  12-11-10&lt;br /&gt;
08-09-10               10-03-10  11-09-10  12-07-26  12-09-18  12-11-11&lt;br /&gt;
08-09-21               10-03-20  11-09-20  12-07-27  12-09-19  12-11-12&lt;br /&gt;
08-10-01               10-04-01  11-10-01  12-07-28  12-09-20  12-11-13&lt;br /&gt;
[root@backup1 /data/www/daily]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So we see that everything before July 2012 has already been pruned down to the 1st/10th/20th snapshots. To prune July 2012 we do the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mv 12-07-01 hold&lt;br /&gt;
mv 12-07-10 hold&lt;br /&gt;
mv 12-07-20 hold&lt;br /&gt;
rm -fr 12-07*&lt;br /&gt;
mv hold/* .&amp;lt;/pre&amp;gt;&lt;br /&gt;
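The same dance can be parameterized; prune_month is a hypothetical wrapper around the exact hold-directory steps above:&lt;br /&gt;

```shell
# Keep the 01/10/20 snapshots for a given YY-MM and delete the rest of that
# month, using the hold-directory trick. prune_month is illustrative only.
prune_month() {
    ym=$1                              # e.g. 12-07
    mkdir -p hold
    for d in 01 10 20; do
        [ -e "$ym-$d" ] && mv "$ym-$d" hold/
    done
    rm -rf "$ym"-*                     # wipe the remaining dailies
    mv hold/* . 2>/dev/null            # restore the kept snapshots
}
# usage (from /data/www/daily): prune_month 12-07
```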
&lt;br /&gt;
== Free up space on Other Servers ==&lt;br /&gt;
&lt;br /&gt;
Many servers start to run out of disk space over time.  Often it is caused by unread mail for&lt;br /&gt;
root or log files.  &lt;br /&gt;
&lt;br /&gt;
To find the source of the problem, use &amp;quot;du&amp;quot; to see where the disk space is being used.&lt;br /&gt;
You can&#039;t run du on /proc or /dev, so I use the command&lt;br /&gt;
&lt;br /&gt;
[root@virt11 /]# du -hs [a-c]* deprecated [e-o]* [q-u]* var | tee duhs0&lt;br /&gt;
&lt;br /&gt;
which produces something like this.&lt;br /&gt;
&lt;br /&gt;
4.0K    backup&lt;br /&gt;
4.0K    backup1&lt;br /&gt;
4.0K    backup2&lt;br /&gt;
4.0K    backup3&lt;br /&gt;
4.0K    backup4&lt;br /&gt;
7.5M    bin&lt;br /&gt;
47M     boot&lt;br /&gt;
4.0K    deprecated&lt;br /&gt;
92M     etc&lt;br /&gt;
30M     home&lt;br /&gt;
8.0K    initrd&lt;br /&gt;
541M    lib&lt;br /&gt;
16K     lost+found&lt;br /&gt;
8.0K    media&lt;br /&gt;
0       misc&lt;br /&gt;
8.0K    mnt&lt;br /&gt;
0       net&lt;br /&gt;
92M     opt&lt;br /&gt;
336K    root&lt;br /&gt;
36M     sbin&lt;br /&gt;
8.0K    selinux&lt;br /&gt;
8.0K    srv&lt;br /&gt;
0       sys&lt;br /&gt;
4.0K    test&lt;br /&gt;
16K     tmp&lt;br /&gt;
1.2G    usr&lt;br /&gt;
583M    var&lt;br /&gt;
&lt;br /&gt;
In this case it looks like /var is the problem, so&lt;br /&gt;
&lt;br /&gt;
cd /var&lt;br /&gt;
du -hs * | tee duhs9&lt;br /&gt;
&lt;br /&gt;
Produces&lt;br /&gt;
&lt;br /&gt;
12K     account&lt;br /&gt;
2.6M    analog-5.32&lt;br /&gt;
63M     cache&lt;br /&gt;
24K     db&lt;br /&gt;
4.0K    duhs&lt;br /&gt;
4.0K    duhs1&lt;br /&gt;
4.0K    duhs2&lt;br /&gt;
4.0K    duhs3&lt;br /&gt;
4.0K    duhs4&lt;br /&gt;
4.0K    duhs5&lt;br /&gt;
4.0K    duhs6&lt;br /&gt;
4.0K    duhs7&lt;br /&gt;
4.0K    duhs8&lt;br /&gt;
32K     empty&lt;br /&gt;
8.0K    games&lt;br /&gt;
16K     kerberos&lt;br /&gt;
42M     lib&lt;br /&gt;
8.0K    local&lt;br /&gt;
36K     lock&lt;br /&gt;
457M    log&lt;br /&gt;
0       mail&lt;br /&gt;
8.0K    nis&lt;br /&gt;
8.0K    opt&lt;br /&gt;
8.0K    preserve&lt;br /&gt;
8.0K    racoon&lt;br /&gt;
240K    run&lt;br /&gt;
18M     spool&lt;br /&gt;
8.0K    tmp&lt;br /&gt;
64K     vz&lt;br /&gt;
0       vzagent&lt;br /&gt;
0       vzagent.tmp&lt;br /&gt;
16K     vzquota&lt;br /&gt;
1.2M    www&lt;br /&gt;
20K     yp&lt;br /&gt;
&lt;br /&gt;
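Rather than repeating the du by hand at each level, the drill-down can be scripted as a small loop (a sketch; `biggest` is a hypothetical helper, and the depth of 3 is arbitrary):&lt;br /&gt;

```shell
# Descend into the largest subdirectory a few levels deep, then report it.
# Mirrors the manual "cd into it, run du -hs * again" loop.
biggest() {
    dir="${1:-.}"
    for level in 1 2 3; do
        # largest entry at this level, by size in KB (-s summary, -k KB,
        # -x stay on one filesystem); path is the tab-separated 2nd field
        sub=$(du -skx "$dir"/* 2>/dev/null | sort -n | tail -1 | cut -f2-)
        if [ ! -d "$sub" ]; then break; fi   # stop once we hit a file
        dir="$sub"
    done
    du -sh "$dir"
}
```

On the example above, `biggest /` would pick /var, then /var/log, and report that.&lt;br /&gt;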
Usually, the problem is in /var/spool or /var/log, due to unread mail&lt;br /&gt;
or excessive log files.  You can continue to drill down by doing&lt;br /&gt;
a &amp;quot;cd &amp;lt;subdirectory&amp;gt;&amp;quot; and another &amp;quot;du -hs *&amp;quot;.&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2230</id>
		<title>VPS Management</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2230"/>
		<updated>2020-04-30T20:30:49Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Ganeti Install a FreeBSD VPS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Common Problems =&lt;br /&gt;
== Login to any machine without a password ==&lt;br /&gt;
&lt;br /&gt;
This is possible via the use of ssh keys. The process is thus:&lt;br /&gt;
&lt;br /&gt;
1. place the public key for your user (root@mail) in the /root/.ssh/authorized_keys file on the server you wish to login to&lt;br /&gt;
 cat /root/.ssh/id_dsa.pub&lt;br /&gt;
(paste that into authorized_keys on the target server). If the file doesn&#039;t exist, create it.&lt;br /&gt;
&lt;br /&gt;
2. enable root login (usually only applies to FreeBSD). Edit the /etc/ssh/sshd_config on the target server and change:&lt;br /&gt;
&amp;lt;tt&amp;gt;#PermitRootLogin no&amp;lt;/tt&amp;gt;&lt;br /&gt;
to&lt;br /&gt;
&amp;lt;tt&amp;gt;PermitRootLogin yes&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Restart the sshd on the target machine. First, find the sshd process: &lt;br /&gt;
 jailps &amp;lt;hostname&amp;gt; | grep sshd &lt;br /&gt;
or &lt;br /&gt;
 vp &amp;lt;VEID&amp;gt; | grep sshd&lt;br /&gt;
&lt;br /&gt;
Look for the process resembling:&lt;br /&gt;
 root     17296  0.0  0.0  5280 1036 ?        Ss    2011   4:27 /usr/sbin/sshd &lt;br /&gt;
(this is the sshd)&lt;br /&gt;
&lt;br /&gt;
Not:&lt;br /&gt;
 root      6270  0.5  0.0  6808 2536 ?        Ss   14:33   0:00 sshd: root [priv]&lt;br /&gt;
(this is an sshd child- someone already ssh&#039;d in as root)&lt;br /&gt;
&lt;br /&gt;
Restart the sshd: &lt;br /&gt;
 kill -1 &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex:&lt;br /&gt;
 kill -1 17296&lt;br /&gt;
&lt;br /&gt;
You may now ssh in.&lt;br /&gt;
&lt;br /&gt;
Once you&#039;re done, IF you enabled root login, you should repeat steps 2 and 3 to disable root logins.&lt;br /&gt;
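Step 1 can be made idempotent with a small helper run on the target server (a sketch; `add_key` is a hypothetical name):&lt;br /&gt;

```shell
# Append a public key to authorized_keys, creating the file if missing
# and skipping the append if the key is already present (step 1 above).
add_key() {
    key="$1"
    ak="${2:-/root/.ssh/authorized_keys}"
    mkdir -p "$(dirname "$ak")"
    # -x whole-line, -F fixed-string: only append if not already there
    if ! grep -qxF "$key" "$ak" 2>/dev/null; then
        echo "$key" >> "$ak"
    fi
    chmod 600 "$ak"
}
```

Call it with the output of `cat /root/.ssh/id_dsa.pub` as the first argument.&lt;br /&gt;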
&lt;br /&gt;
== Letting someone in who has locked themselves out (killed sshd, lost pwd) ==&lt;br /&gt;
&lt;br /&gt;
There are two ways people frequently lock themselves out - either they forget a password, or they kill off sshd somehow.&lt;br /&gt;
&lt;br /&gt;
These are actually both fairly easy to solve.  First, let&#039;s say someone kills off their sshd, or somehow mangles /etc/ssh/sshd_config such that it no longer lets them in.&lt;br /&gt;
&lt;br /&gt;
Their email may be very short, or it may have all sorts of details about how you should fix sshd_config to let them in ... just ignore all of this. They can fix their own mangled sshd.  Fixing this is very simple.  First, edit the /etc/inetd.conf on their system and uncomment the telnet line:&lt;br /&gt;
&lt;br /&gt;
 telnet stream  tcp     nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
 #telnet stream  tcp6    nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
&lt;br /&gt;
(just leave the tcp6 version of telnet commented)&lt;br /&gt;
&lt;br /&gt;
Then, use jailps to list the processes on their system, and find their inetd process.  Then simply:&lt;br /&gt;
&lt;br /&gt;
 kill -HUP (pid)&lt;br /&gt;
&lt;br /&gt;
where (pid) is the PID of their inetd process.  Now they have telnet running on their system and they can log in and do whatever they need to do.&lt;br /&gt;
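Picking the inetd PID out of the listing can be scripted (a sketch; `inetd_pid` is a hypothetical filter, assuming ps-style output with the PID in column 2):&lt;br /&gt;

```shell
# Print the PID (column 2) of the first inetd line in a ps-style listing.
inetd_pid() {
    awk '/inetd/ { print $2; exit }'
}
# usage (illustrative):  jailps somehost.example.com | inetd_pid
```

Review the PID before sending the HUP, as always.&lt;br /&gt;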
&lt;br /&gt;
The only complications that could occur are:&lt;br /&gt;
&lt;br /&gt;
a) their firewall config on our firewall has port 23 blocked, in which case you will need to open that - will be covered in a different lesson.&lt;br /&gt;
&lt;br /&gt;
b) they are not running inetd, so you can&#039;t HUP it.  If this happens, edit their /etc/rc.conf, add the inetd_enable=&amp;quot;YES&amp;quot; line, and then kill&lt;br /&gt;
their jail with /tmp/jailkill.pl - then restart their jail with the jail line from their quad/safe file.  Easy.&lt;br /&gt;
&lt;br /&gt;
If they have forgotten a password,&lt;br /&gt;
&lt;br /&gt;
On 6.x+ you can reset their password with:&lt;br /&gt;
 jexec &amp;lt;jailID from jls&amp;gt; passwd root&lt;br /&gt;
&lt;br /&gt;
Note: the default password for 6.x jails is 8ico2987, for 4.x it is p455agfa&lt;br /&gt;
&lt;br /&gt;
On 4.x, you need to cd to their etc directory&lt;br /&gt;
... for instance:&lt;br /&gt;
&lt;br /&gt;
 cd /mnt/data2/198.78.65.136-col00261-DIR/etc&lt;br /&gt;
&lt;br /&gt;
and run:&lt;br /&gt;
&lt;br /&gt;
 vipw -d .&lt;br /&gt;
&lt;br /&gt;
Then paste in these two lines (there&#039;s a paste with these):&lt;br /&gt;
&lt;br /&gt;
 root:$1$krszPxhk$xkCepSnz3mIikT3vCtJCt0:0:0::0:0:Charlie &amp;amp;:/root:/bin/csh&lt;br /&gt;
 user:$1$Mx9p5Npk$QdMU6c8YQqp2FW2M3irEh/:1001:1001::0:0:User &amp;amp;:/home/user:/bin/sh&lt;br /&gt;
&lt;br /&gt;
overwriting the lines they already have for &amp;quot;user&amp;quot; and &amp;quot;root&amp;quot; - then just tell them that both user and root have been reset to the default password of p455agfa.&lt;br /&gt;
&lt;br /&gt;
For linux, just passwd inside shell or &lt;br /&gt;
 vzctl set &amp;lt;veid&amp;gt; --userpasswd root:p455agfa --save&lt;br /&gt;
&lt;br /&gt;
Starting in 2009 we began giving out randomized default passwords for FreeBSD and Linux. That password is stored with each system in Mgmt. In the event of a reset, look up that password, reset the account to it, and refer the customer to the original password from their welcome email; this way we don’t have to send the password again via email (in clear text).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== sendmail can’t be contacted from ext ip (only locally) ==&lt;br /&gt;
&lt;br /&gt;
By default redhat puts this line in sendmail.mc:&lt;br /&gt;
&lt;br /&gt;
 DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
which makes it only answer on localhost.  Comment it out like:&lt;br /&gt;
&lt;br /&gt;
 dnl DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
and then rebuild sendmail.cf with:&lt;br /&gt;
&lt;br /&gt;
 m4 /etc/mail/sendmail.mc &amp;gt; /etc/sendmail.cf&lt;br /&gt;
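Before rebuilding, it is worth confirming the loopback-only line really is commented out with dnl (a sketch; `mc_ok` is a hypothetical check):&lt;br /&gt;

```shell
# Succeed only if no active (un-dnl'd) loopback-only DAEMON_OPTIONS
# line remains in the given sendmail.mc.
mc_ok() {
    ! grep -q '^DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1' "$1"
}
# usage (illustrative):  mc_ok /etc/mail/sendmail.mc
```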
&lt;br /&gt;
== virt doesn’t properly let go of ve’s ip(s) when moved to another system ==&lt;br /&gt;
&lt;br /&gt;
On virtuozzo 2.6 systems, it&#039;s been observed that when moving IPs from one virt to another, sometimes the routing table will not get updated to reflect the removal of the IP addresses.&lt;br /&gt;
&lt;br /&gt;
A recent example was a customer that was moving to a new ve on a new virt and the ip addresses were traded between the two ve&#039;s.  After the trade the two systems were not able to talk to each other.  When looking at the routing table for the old system all the ip addresses were still in the routing table as being local, like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;netstat -rn | grep 69.55.225.149&lt;br /&gt;
69.55.225.149   0.0.0.0         255.255.255.255 UH       40 0          0 venet0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was preventing traffic to the other system from being routed properly.&lt;br /&gt;
The solution is to manually delete the route:&lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.225.149 gw 0.0.0.0&lt;br /&gt;
&lt;br /&gt;
Supposedly, this was fixed in 2.6.1&lt;br /&gt;
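When several addresses moved at once, the routing table can be swept with a filter that only prints the delete commands, so they can be reviewed before running (a sketch; `stale_route_cmds` is a hypothetical name):&lt;br /&gt;

```shell
# Read `netstat -rn` output on stdin and print a `route delete` command
# for each venet0 host route still present for the moved IP.
stale_route_cmds() {
    awk -v ip="$1" '$1 == ip && $NF == "venet0" {
        print "route delete " ip " gw 0.0.0.0"
    }'
}
# usage (illustrative):  netstat -rn | stale_route_cmds 69.55.225.149
```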
&lt;br /&gt;
== sshd on FreeBSD 6.2 segfaults ==&lt;br /&gt;
&lt;br /&gt;
First try to reinstall ssh&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/secure&lt;br /&gt;
cd lib/libssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.sbin/sshd&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.bin/ssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Failing that, find the library that’s messed up:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;ldd /usr/sbin/sshd&lt;br /&gt;
         libssh.so.3 =&amp;gt; /usr/lib/libssh.so.3 (0x280a3000) &lt;br /&gt;
         libutil.so.5 =&amp;gt; /lib/libutil.so.5 (0x280d8000) &lt;br /&gt;
         libz.so.3 =&amp;gt; /lib/libz.so.3 (0x280e4000) &lt;br /&gt;
         libwrap.so.4 =&amp;gt; /usr/lib/libwrap.so.4 (0x280f5000) &lt;br /&gt;
         libpam.so.3 =&amp;gt; /usr/lib/libpam.so.3 (0x280fc000) &lt;br /&gt;
         libbsm.so.1 =&amp;gt; /usr/lib/libbsm.so.1 (0x28103000) &lt;br /&gt;
         libgssapi.so.8 =&amp;gt; /usr/lib/libgssapi.so.8 (0x28112000) &lt;br /&gt;
         libkrb5.so.8 =&amp;gt; /usr/lib/libkrb5.so.8 (0x28120000) &lt;br /&gt;
         libasn1.so.8 =&amp;gt; /usr/lib/libasn1.so.8 (0x28154000) &lt;br /&gt;
         libcom_err.so.3 =&amp;gt; /usr/lib/libcom_err.so.3 (0x28175000) &lt;br /&gt;
         libroken.so.8 =&amp;gt; /usr/lib/libroken.so.8 (0x28177000) &lt;br /&gt;
         libcrypto.so.4 =&amp;gt; /lib/libcrypto.so.4 (0x28183000) &lt;br /&gt;
         libcrypt.so.3 =&amp;gt; /lib/libcrypt.so.3 (0x28276000) &lt;br /&gt;
         libc.so.6 =&amp;gt; /lib/libc.so.6 (0x2828e000) &lt;br /&gt;
         libmd.so.3 =&amp;gt; /lib/libmd.so.3 (0x28373000)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
md5 them and compare to other jail hosts or jails running on host&lt;br /&gt;
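Extracting the library paths for checksumming can be scripted (a sketch; `lib_paths` is a hypothetical filter over `ldd` output; FreeBSD&#039;s checksum tool is `md5`, Linux&#039;s is `md5sum`):&lt;br /&gt;

```shell
# Print the resolved library paths (third field) from `ldd` output so
# they can be checksummed and diffed against a known-good host.
lib_paths() {
    awk '$2 == "=>" { print $3 }'
}
# usage (illustrative):  ldd /usr/sbin/sshd | lib_paths | xargs md5
```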
&lt;br /&gt;
for libcrypto reinstall:&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/src/crypto&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Moving a server to a different account in the management system ==&lt;br /&gt;
&lt;br /&gt;
This is a tough one. You have to touch several tables in the mysql database.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  systems&lt;br /&gt;
  subs&lt;br /&gt;
  ipmap&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Basically, get the sysid and key the updates off that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
update systems set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update subs set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update ipmap set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You then need to rename the server disk space to change the old ID to the new one.&lt;br /&gt;
&lt;br /&gt;
For a jail -&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stopjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;     69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;-DIR 69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;-DIR&lt;br /&gt;
vi /usr/local/jail/bin/quad1       [Update the ID in the quad file]&lt;br /&gt;
startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a virt -&lt;br /&gt;
You will have to &lt;br /&gt;
* stop the virt &lt;br /&gt;
* rename the directories &lt;br /&gt;
* modify the config file /vzconf/&amp;lt;veid&amp;gt;.conf &lt;br /&gt;
* rename the virt conf file &lt;br /&gt;
* restart the new virt.&lt;br /&gt;
&lt;br /&gt;
That ought to do it.&lt;br /&gt;
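The config-file step from the list above can be sketched as a helper (an assumption-laden sketch; `move_conf` is a hypothetical name, and it assumes the conf lives under /vzconf as stated above; stop the virt and rename its directories first, then restart under the new ID):&lt;br /&gt;

```shell
# Rewrite a virt config for a new VEID and rename the conf file.
move_conf() {
    old="$1"; new="$2"; dir="${3:-/vzconf}"
    # substitute the old ID everywhere it appears in the config
    sed "s/$old/$new/g" "$dir/$old.conf" > "$dir/$new.conf"
    rm "$dir/$old.conf"
}
```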
&lt;br /&gt;
&lt;br /&gt;
== Re-enable a customer account in the management system ==&lt;br /&gt;
&lt;br /&gt;
For example to re-instate col03438, In mysql:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
show tables ;&lt;br /&gt;
select * from customers where cid=&#039;col03438&#039; ;&lt;br /&gt;
update customers set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from  systems where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set cancel_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from subs where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set status=1 where cid=&#039;col03438&#039; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ganeti VPS =&lt;br /&gt;
&lt;br /&gt;
This is how to create a VPS on the Lamphost ganeti cluster.&lt;br /&gt;
I have created scripts for creating a FreeBSD VPS and one for&lt;br /&gt;
creating a Linux VPS.&lt;br /&gt;
&lt;br /&gt;
== Ganeti Install a Linux VPS ==&lt;br /&gt;
&lt;br /&gt;
To create a linux VPS use&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/new &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
( for example /root/new vps.example.com 30 2 69.55.238.10 centos-8 )&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to customize the install by adding a &amp;quot;jci&amp;quot; account and a customer&lt;br /&gt;
account (col0XXXX) as well.  Make sure the customer acct (col0XXXX) is&lt;br /&gt;
in a group to allow sudo.&lt;br /&gt;
&lt;br /&gt;
== Ganeti Install a FreeBSD VPS ==&lt;br /&gt;
&lt;br /&gt;
To create a FreeBSD VPS, we have to do a full install. The following&lt;br /&gt;
command creates the new VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/newfreebsd12 &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will then have to do the install via VNC.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You need to customize the install by adding a &amp;quot;jci&amp;quot; account and a customer &lt;br /&gt;
account (col0XXXX) as well. Make sure the customer acct (col0XXXX) is in a &lt;br /&gt;
group to allow sudo. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
You will need to install sudo.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
pkg install sudo    # install sudo package&lt;br /&gt;
visudo              # to add wheel account to sudoers&lt;br /&gt;
vi /etc/group       # add jci and user acct to wheel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You will also have to set up a console in /etc/ttys.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vi /etc/ttys&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Ganeti restart a VPS ==&lt;br /&gt;
&lt;br /&gt;
To restart (reboot) a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance reboot &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Ganeti shutdown a VPS ==&lt;br /&gt;
&lt;br /&gt;
To shutdown a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance shutdown &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To permanently delete a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance remove &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Ganeti Re-Install a VPS ==&lt;br /&gt;
&lt;br /&gt;
Re-installing the OS will destroy all data on the VPS.&lt;br /&gt;
Login to the Ganeti master node (currently gn1).&lt;br /&gt;
&lt;br /&gt;
To reinstall the OS on a VPS the VPS must be shutdown.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance shutdown &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Do the reinstall.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance reinstall &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance startup &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to do the customization for the VPS.  This is&lt;br /&gt;
basically the same as the Dedicated server customization.&lt;br /&gt;
&lt;br /&gt;
 https://69.55.229.8/index.php/New_Signups#Dedicated_Server_Install_Checklist&lt;br /&gt;
&lt;br /&gt;
= FreeBSD 10.1 VPS (jail5 and jail6) =&lt;br /&gt;
&lt;br /&gt;
This is information on how to manage the FreeBSD 10.1 VPS servers on Jail5.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 virtmake IP[,IP] &amp;lt;CID&amp;gt; hostname ipfw# email {FSP|FB|FS|FM|FP} [size in GB]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
For now, the last 2 digits of the CID must be unique.&lt;br /&gt;
&lt;br /&gt;
Must do a &amp;quot;vm start &amp;lt;CID&amp;gt;&amp;quot; after creating a VPS.&lt;br /&gt;
&lt;br /&gt;
== Starting VPS ==&lt;br /&gt;
To start all vms on the system&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To start a single VPS &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== List running VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm status&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Attach to a user VPS console ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm attach &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To detach from a console, type tilde control-d (~ ctl-d).&lt;br /&gt;
&lt;br /&gt;
== Stop a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Restart a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm restart &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Modify a VPS ==&lt;br /&gt;
&lt;br /&gt;
To modify a VPS config, stop the VPS&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Mount the VPS data file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm mount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
cd /data/vm/private/&amp;lt;CID&amp;gt;/mount&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Modify the user files necessary.&lt;br /&gt;
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm umount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
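The stop/mount/edit/umount/start cycle above can be wrapped in a small guard (a sketch; `vm_edit` is a hypothetical helper around the `vm` tool, and it stops before the edit step so the files can be changed by hand):&lt;br /&gt;

```shell
# Stop and mount a VPS for editing, then print the follow-up commands.
vm_edit() {
    cid="$1"
    vm stop "$cid"  || return 1
    vm mount "$cid" || return 1
    echo "files are under /data/vm/private/$cid/mount - edit, then run:"
    echo "  vm umount $cid; vm start $cid"
}
```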
&lt;br /&gt;
== Notes on a new FBSD 10.x VPS (jail6) ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#! /bin/sh&lt;br /&gt;
#&lt;br /&gt;
#  new  &amp;lt;colo#&amp;gt;  &amp;lt;disk&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
memory=1024&lt;br /&gt;
&lt;br /&gt;
x=&amp;quot;$1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
disk=&amp;quot;$2&amp;quot;&lt;br /&gt;
if [ &amp;quot;y&amp;quot; = &amp;quot;y$disk&amp;quot; ] ; then&lt;br /&gt;
    disk=60&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
disk2=`expr $disk - 4`&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vm image provision 04dcdf96-d843-11e6-b131-001e67fcc64f col0$x&lt;br /&gt;
# vm configure col0$x&lt;br /&gt;
sed -e &amp;quot;s/memory=256M/memory=${memory}M/&amp;quot; /vms/col01114/col01114.conf &amp;gt;/vms/col0$x/col0$x.conf&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
# vm start col0$x&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
exit&lt;br /&gt;
 &lt;br /&gt;
vm console col0$x&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
# to increase the customer disk&lt;br /&gt;
#&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
&lt;br /&gt;
##  on the customer VPS&lt;br /&gt;
&lt;br /&gt;
gpart recover vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapoff /dev/vtbd0p3&lt;br /&gt;
gpart delete -i 3 vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
sysctl kern.geom.debugflags=16&lt;br /&gt;
gpart resize -i 2 -a 4k -s $disk2\G vtbd0&lt;br /&gt;
&lt;br /&gt;
growfs /dev/vtbd0p2&lt;br /&gt;
&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
gpart add -t freebsd-swap -a 4k vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapon /dev/vtbd0p3&lt;br /&gt;
&lt;br /&gt;
df -h&lt;br /&gt;
reboot&lt;br /&gt;
&lt;br /&gt;
vi /etc/rc.conf&lt;br /&gt;
  hostname&lt;br /&gt;
  IP Addr&lt;br /&gt;
passwd root&lt;br /&gt;
passwd user&lt;br /&gt;
clear history&lt;br /&gt;
  cp /dev/null /var/log/utx.log&lt;br /&gt;
  cp /dev/null ~root/.history&lt;br /&gt;
  history -c&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= FreeBSD Jails =&lt;br /&gt;
&lt;br /&gt;
== Starting jails: Quad/Safe Files ==&lt;br /&gt;
&lt;br /&gt;
FreeBSD customer systems do not start up automatically at boot time.  When one of our FreeBSD machines boots, it does nothing else. To start jails, we put the commands to start each jail into shell scripts and run them. Jail startup is something that needs to be actively monitored, which is why we don’t just run the scripts automatically. More on monitoring later.&lt;br /&gt;
&lt;br /&gt;
NOTE: &amp;gt;=7.x we have moved to 1 quad file: &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. Startups are not done by running each quad, but rather [[#startalljails|startalljails]] which relies on the contents of &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. The specifics of this are lower in this article. What follows here applies for pre 7.x systems.&lt;br /&gt;
&lt;br /&gt;
There are eight files in &amp;lt;tt&amp;gt;/usr/local/jail/rc.d&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail3# ls /usr/local/jail/rc.d/&lt;br /&gt;
quad1   quad2   quad3   quad4   safe1   safe2   safe3   safe4&lt;br /&gt;
jail3#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
four quad files and four safe files.&lt;br /&gt;
&lt;br /&gt;
Each file contains roughly an equal number of system startup blocks (the total number of jails divided by 4).&lt;br /&gt;
 &lt;br /&gt;
The reason for this is, if we make one large script to startup all the systems at boot time, it will take too long - the first system in the script will start up right after system boot, which is great, but the last system may not start for another 20 minutes.&lt;br /&gt;
&lt;br /&gt;
Since there is no way to parallelize this during the startup procedure, we simply open four terminals (in screen window 9) and run each script, one in each terminal. This way they all run simultaneously, and the very last system in each startup script gets started in a quarter of the time it would take if there were one large file.&lt;br /&gt;
&lt;br /&gt;
The files are generally organized so that quad/safe 1&amp;amp;2 have only jails from disk 1, and quad/safe 3&amp;amp;4 have jails from disk 2. This helps ensure that only 2 fscks on any disk are going on at once. Further, they are balanced so that all quad/safe’s finish executing around the same time. We do this by making sure each quad/safe has a similar number of jails  and represents a similar number of inodes (see js).&lt;br /&gt;
&lt;br /&gt;
The other, very important reason we do it this way, and this is the reason there are quad files and safe files, is that in the event of a system crash, every single vn-backed filesystem that was mounted at the time of system crash needs to be fsck&#039;d.  However, fsck&#039;ing takes time, so if we shut the system down gracefully, we don&#039;t want to fsck.&lt;br /&gt;
&lt;br /&gt;
Therefore, we have two sets of scripts - the four quad scripts are identical to the four safe scripts except for the fact that the quad scripts contain fsck commands for each filesystem.&lt;br /&gt;
&lt;br /&gt;
So, if you shut a system down gracefully, start four terminals and run safe1 in window one, and safe2 in window 2, and so on.&lt;br /&gt;
 &lt;br /&gt;
If you crash, start four terminals (or go to screen window 9) and run quad1 in window one, and quad2 in window 2, and so on.&lt;br /&gt;
&lt;br /&gt;
Here is a snip of (a 4.x version) quad2 from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
fsck -y /dev/vn16&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#fsck -y /dev/vn28&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
fsck -y /dev/vn22&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#fsck -y /dev/vn15&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, two of the systems specified are commented out - presumably those customers cancelled, or were moved to new servers.&lt;br /&gt;
&lt;br /&gt;
Note that the vnconfig line is the simpler form of the command, not the longer one that was used when the filesystem was first configured. All that happens is: vnconfig the filesystem, then fsck it, then mount it. The fourth command is the `jail` command used to start the system, but that will be covered later.&lt;br /&gt;
&lt;br /&gt;
Here is the safe2 file from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, it is exactly the same, but it does not have the fsck lines.&lt;br /&gt;
&lt;br /&gt;
Take a look at the last entry - note that the file is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
&lt;br /&gt;
and the mount point is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
&lt;br /&gt;
This is the general format on all the FreeBSD systems.  The file is always named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber&lt;br /&gt;
&lt;br /&gt;
and the directory is named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber-DIR&lt;br /&gt;
&lt;br /&gt;
If you run a safe file when a fsck is needed, the mount will fail and the jail will fail to start:&lt;br /&gt;
&lt;br /&gt;
 # mount /dev/vn1c /mnt/data2/jails/65.248.2.131-ns1.kozubik.com-DIR&lt;br /&gt;
 mount: /dev/vn1c: Operation not permitted&lt;br /&gt;
&lt;br /&gt;
No reboot is needed; just run the quad script.&lt;br /&gt;
&lt;br /&gt;
Starting with 6.x jails, we added block delimiters to the quad/safe files, the block looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: nuie.solaris.mu&#039;&lt;br /&gt;
fsck -y /dev/concat/v30v31a&lt;br /&gt;
mount /dev/concat/v30v31a /mnt/data1/69.55.228.218-col01441-DIR&lt;br /&gt;
mount_devfs devfs /mnt/data1/69.55.228.218-col01441-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.228.218-col01441-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.228.218-col01441-DIR nuie.solaris.mu 69.55.228.218 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: nuie.solaris.mu&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These are more than just informative when running quad/safe’s: the echo lines MUST be present for certain tools to work properly. So it’s important that any change to the hostname is also made on the 2 echo lines. For example, if you try to startjail a jail whose hostname is on the jail line but not the echo lines, the command will fail with host not found.&lt;br /&gt;
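Since the markers are machine-parsed, one jail&#039;s entire startup block can be pulled out of a quad file by hostname (a sketch; `jail_block` is a hypothetical helper):&lt;br /&gt;

```shell
# Print everything between the begin/end echo markers for one hostname.
jail_block() {
    host="$1"; quad="$2"
    # the trailing quote in each pattern anchors the exact hostname
    sed -n "/## begin ##: $host'/,/## end ##: $host'/p" "$quad"
}
# usage (illustrative):  jail_block nuie.solaris.mu /usr/local/jail/rc.d/quad1
```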
&lt;br /&gt;
=== FreeBSD 7.x+ notes ===&lt;br /&gt;
&lt;br /&gt;
Starting with the release of FreeBSD 7.x, we are doing jail startups in a slightly different way. First, thereis only 1 file: &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/quad1&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are no other quads or corresponding safe files. The reason for this is twofold: 1. we can pass -C to fsck, which tells it to skip the check if the filesystem is clean (no more need for safe files); 2. we have a new startup script which can be launched multiple times, running in parallel to start jails, where quad1 is the master jail file. &lt;br /&gt;
Quad1 could still be run as a shell script, but it would take a very long time to run to completion, so that&#039;s not advisable; instead, break it down into smaller chunks (like quad1, quad2, quad3, etc.)&lt;br /&gt;
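Because every jail sits between its own begin/end markers, splitting a master quad file into chunks is mechanical. Here is a rough sketch (the round-robin split into numbered chunk files is an assumption for illustration, not a documented tool; hostnames and file names are made up):&lt;br /&gt;

```shell
# Sketch: deal out whole '## begin ##'..'## end ##' blocks from a master
# quad file into chunk files, round-robin, so each chunk can be run in
# its own window. File names here are illustrative only.
printf '%s\n' \
  "echo '## begin ##: a.example'" "jail-a-commands" "echo '## end ##: a.example'" \
  "echo '## begin ##: b.example'" "jail-b-commands" "echo '## end ##: b.example'" \
  "echo '## begin ##: c.example'" "jail-c-commands" "echo '## end ##: c.example'" > /tmp/quad1.sample

awk '/## begin ##/ { n++ }                       # a new block starts: advance
     { print > ("/tmp/quadchunk" (n % 4)) }' /tmp/quad1.sample
```

Each chunk file keeps whole blocks intact, which matters because the tools key off the marker lines.&lt;br /&gt;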
&lt;br /&gt;
Here is a snip of (a 7.x version) quad1 from jail2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: projects.tw.com&#039;&lt;br /&gt;
mdconfig -a -t vnode -f /mnt/data1/69.55.230.46-col01213 -u 50&lt;br /&gt;
fsck -Cy /dev/md50c&lt;br /&gt;
mount /dev/md50c /mnt/data1/69.55.230.46-col01213-DIR&lt;br /&gt;
mount -t devfs devfs /mnt/data1/69.55.230.46-col01213-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.230.46-col01213-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.230.46-col01213-DIR projects.tw.com 69.55.230.46 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: projects.tw.com&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cancelled jails are no longer commented out and stored in quad1; rather, they&#039;re moved to &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/deprecated&amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
To start these jails, open the 4 ssh sessions as you would for a normal crash, and then, instead of running quad1-4, run startalljails in each window. IMPORTANT: before running startalljails, make sure you have run preboot once, as it clears out all the lockfiles and enables startalljails to work properly.&lt;br /&gt;
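The parallel-runner idea can be sketched with an atomic per-jail lock (this is an assumption about how such lockfiles could work, not the actual startalljails implementation; the lock directory and hostnames are made up):&lt;br /&gt;

```shell
# Sketch of lockfile-based claiming: mkdir is atomic, so when several
# runners walk the same jail list, only one wins each jail and no jail
# is ever started twice. LOCKDIR and hostnames are illustrative.
LOCKDIR=/tmp/jail-locks.sample
rm -rf "$LOCKDIR"
mkdir -p "$LOCKDIR"
started=""
# Simulate two runners walking the same list (note the repeats):
for host in alpha.example beta.example alpha.example beta.example; do
    if mkdir "$LOCKDIR/$host" 2>/dev/null; then
        # We won the lock; a real runner would execute this jail's quad block.
        started="$started $host"
    fi
done
echo "started:$started"
```

Each hostname is started exactly once even though it appears twice in the loop, which is also why preboot must clear stale locks before a fresh run.&lt;br /&gt;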
&lt;br /&gt;
== Problems with the quad/safe files ==&lt;br /&gt;
&lt;br /&gt;
When you run the quad/safe files, there are two problems that can occur - either a particular system will hang during initialization, OR a system will flood the screen with output, impeding your ability to do anything.  Or both.&lt;br /&gt;
&lt;br /&gt;
First off, when you start a jail, you see output like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;Skipping disk checks ...&lt;br /&gt;
adjkerntz[25285]: sysctl(put_wallclock): Operation not permitted&lt;br /&gt;
Doing initial network setup:.&lt;br /&gt;
ifconfig: ioctl (SIOCDIFADDR): permission denied&lt;br /&gt;
lo0: flags=8049&amp;lt;UP,LOOPBACK,RUNNING,MULTICAST&amp;gt; mtu 16384&lt;br /&gt;
Additional routing options: TCP keepalive=YESsysctl:&lt;br /&gt;
net.inet.tcp.always_keepalive: Operation not permitted.&lt;br /&gt;
Routing daemons:.&lt;br /&gt;
Additional daemons: syslogd.&lt;br /&gt;
Doing additional network setup:.&lt;br /&gt;
Starting final network daemons:.&lt;br /&gt;
ELF ldconfig path: /usr/lib /usr/lib/compat /usr/X11R6/lib /usr/local/lib&lt;br /&gt;
a.out ldconfig path: /usr/lib/aout /usr/lib/compat/aout /usr/X11R6/lib/aout&lt;br /&gt;
Starting standard daemons: inetd cron sshd sendmail sendmail-clientmqueue.&lt;br /&gt;
Initial rc.i386 initialization:.&lt;br /&gt;
Configuring syscons: blanktime.&lt;br /&gt;
Additional ABI support:.&lt;br /&gt;
Local package initialization:.&lt;br /&gt;
Additional TCP options:.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, let&#039;s look at this line, near the end:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization:.&lt;br /&gt;
&lt;br /&gt;
This is where a list of daemons that are set to start at boot time will show up.  You might see something like:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache sendmail sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
Or something like this:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: postgres postfix apache&lt;br /&gt;
&lt;br /&gt;
The problem is that many systems (about 4-5 per machine) will hang on that line.  Basically it will get part of the way through the daemons to be started:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache&lt;br /&gt;
&lt;br /&gt;
and will just sit there.  Forever.&lt;br /&gt;
&lt;br /&gt;
Fortunately, pressing ctrl-c will break out of it.  Not only will it break out of it, but it will also continue on that same line and start the other daemons:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache ^c sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
and then continue on to finish the startup, and then move to the next system to be started.&lt;br /&gt;
&lt;br /&gt;
So what does this mean?  It means that if a machine crashes, and you start four screen-windows to run four quads or four safes, you need to periodically cycle between them and see if any systems are stuck at that point, causing their quad/safe file to hang.  A good rule of thumb: if you see a system at that point in the startup, give it another 100 seconds; if it is still at the exact same spot, hit ctrl-c. It&#039;s also a good idea to go back into the quad file (just before the first command in the jail&#039;s startup block) and note that this jail tends to need a ctrl-c or more time, as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;### NOTE ### slow sendmail&#039;&lt;br /&gt;
echo &#039;### NOTE ###: ^C @ Starting sendmail.&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NEVER&#039;&#039;&#039; hit ctrl-c repeatedly if you don&#039;t get an immediate response - that will cause the following jail’s startup commands to be aborted.&lt;br /&gt;
&lt;br /&gt;
A second problem that can occur is that a jail - maybe the first one in that particular quad/safe, maybe the last one, or maybe one in the middle, will start spitting out status or error messages from one of its init scripts.  This is not a problem - basically, hit enter a few times and see if you get a prompt - if you do get a prompt, that means that the quad/safe script has already completed.  Therefore it is safe to log out (and log out of the user that you su&#039;d from) and then log back in (if necessary).&lt;br /&gt;
&lt;br /&gt;
The tricky thing is when a system in the middle starts flooding with messages, and you hit enter a few times and don&#039;t get a prompt.  Are you not getting a prompt because some subsequent system is hanging at initialization, as discussed above?  Or are you not getting a prompt because that quad file is currently running an fsck?  Usually you can tell by scrolling back in screen&#039;s history to see what it was doing before you started getting the messages.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t get clues from history, you have to use your judgement: instead of giving it 100 seconds to respond, perhaps give it 2-3 mins ... if you still get no response (no prompt) when you hit enter, hit ctrl-c.  However, be aware that you might still be hitting ctrl-c in the middle of an fsck.  This means you will get an error like &amp;quot;filesystem still marked dirty&amp;quot;, the vnconfig for it will fail, the jail command will fail, and the next system in the quad file will then begin starting up.&lt;br /&gt;
&lt;br /&gt;
If this happens, just wait until the end of all the quad files have finished, and start that system manually.&lt;br /&gt;
&lt;br /&gt;
If things really get weird - a screen flooded with errors, no prompt, and ctrl-c does nothing - then eventually (give it ten mins or so) kill that window with ctrl-p, then k, log in again, manually check which systems are running and which aren&#039;t, and manually start up any that are not.&lt;br /&gt;
&lt;br /&gt;
Don&#039;t EVER risk running a particular quad/safe file a second time.&lt;br /&gt;
If the quad/safe script gets executed twice, reboot the machine immediately.&lt;br /&gt;
&lt;br /&gt;
So, for all the above reasons, anytime a machine crashes and you run all the quads or all the safes, &#039;&#039;&#039;always&#039;&#039;&#039; check every jail afterwards to make sure it is running - even if you have no hangs or complications at all.&lt;br /&gt;
Run this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailpsall|jailpsall]]&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Note: [[#postboot|postboot]] also populates ipfw counts, so it &#039;&#039;&#039;should not be run multiple times&#039;&#039;&#039;; use &amp;lt;tt&amp;gt;jailpsall&amp;lt;/tt&amp;gt; for subsequent extensive ps&#039;ing&lt;br /&gt;
&lt;br /&gt;
And make sure they all show as running.  If one does not show as running, check its /etc/rc.conf file to see if it is using a different hostname before starting it manually.&lt;br /&gt;
&lt;br /&gt;
One thing we have implemented to alleviate these startup hangs and noisy jails is to put jail start blocks that are slow or prone to hanging at the bottom of the safe/quad file. Further, for each bad jail we note in each quad/safe, just before the start block, something like:&lt;br /&gt;
&lt;br /&gt;
 echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
&lt;br /&gt;
That way we’ll be prepared to ^C when we see that message appear during the quad/safe startup process. If you observe a new, undocumented hang, &#039;&#039;&#039;after&#039;&#039;&#039; the quad/safe has finished, place a line similar to the above in the quad file, move the jail start block to the end of the file, then run [[#buildsafe|buildsafe]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 4.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which partition, IP to put it on, find an unused vn, and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vnX hostname shorthost quad/safe# ipfw# email [gb disk, default 4]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail14# /tmp/jailmake 69.55.226.152 /mnt/data2/69.55.226.152-col00182 vn23 negev.cerastes.org negev 4 182 cerastes@cerastes.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 6.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which gvinum vols are free (and which mnt (disk) each gvinum goes with/to), IP to put it on and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vN[,vN] hostname shorthost quad/safe# ipfw# email&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail19# jailmake 69.55.236.17 /mnt/data1/69.55.236.17-col01574 v10 uncle-leo.commadev.com uncle-leo 1 1574 lee@commadev.com,lee@gmail.com&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Changing an ip for freebsd VPS ==&lt;br /&gt;
&lt;br /&gt;
*stopjail (hostname)&lt;br /&gt;
*on systems using vnfile or mdfile: rename vnfile/mdfile, using new ip&lt;br /&gt;
*rename dir using new ip&lt;br /&gt;
*Edit quad (make sure to update all lines: directory, vn/mdfile)&lt;br /&gt;
*buildsafe (not necessary on systems having only quad1)&lt;br /&gt;
*if you&#039;re adding an ip not already on the host machine: &amp;lt;tt&amp;gt;ipadd x.x.x.x&amp;lt;/tt&amp;gt;&lt;br /&gt;
*startjail (hostname)&lt;br /&gt;
*if backups: &amp;lt;tt&amp;gt;mvbackups&amp;lt;/tt&amp;gt; ...&lt;br /&gt;
*Edit dir/ip in mgmt&lt;br /&gt;
*if any rules, update any firewall rules to use the new ip (be careful: make sure there aren&#039;t multiple rules per ipfw# - search by colo#)&lt;br /&gt;
*(if customer asks and has domains) update their domains on ns1c&lt;br /&gt;
*optional: update ipfw counters on the host server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Rename a gconcat vol ==&lt;br /&gt;
&lt;br /&gt;
*stop jail, unmount everything&lt;br /&gt;
*gconcat stop vol&lt;br /&gt;
*gconcat clear /dev/gvinum/vN&lt;br /&gt;
*gconcat clear /dev/gvinum/vN+1&lt;br /&gt;
*gconcat label -v newname /dev/gvinum/vN /dev/gvinum/vN+1&lt;br /&gt;
*bsdlabel /dev/concat/newname&lt;br /&gt;
(make sure no partition has fstype &amp;quot;unused&amp;quot;; if one does, change it to 4.2BSD)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Remaking a VPS (on same jail) ==&lt;br /&gt;
&lt;br /&gt;
# take him down (stopjail)&lt;br /&gt;
# make note of host, vn/gvin/md, ip&lt;br /&gt;
# remove from quad/safe&lt;br /&gt;
# move vnfile (rename)&lt;br /&gt;
# remove dir (jailmake will make a new one)&lt;br /&gt;
# remove ipfw counts&lt;br /&gt;
# jailmake&lt;br /&gt;
# update backup script&lt;br /&gt;
# update db with new dir/ip (use the “move” link, mark old as stopped today)&lt;br /&gt;
# update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (FreeBSD) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all jails back up as soon as possible. Note the time- you&#039;ll need it to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log. If there are no messages, the machine may just be really busy- wait a bit (5-10min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. Note: if you see messages about swap space exhausted, the server is obviously out of memory; however, it may recover briefly enough for you to get a jtop in to see who&#039;s launched a ton of procs (most likely) and then issue a quick jailkill to get it back under control.&lt;br /&gt;
&lt;br /&gt;
If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] - i.e. if you can&#039;t ssh into the DRAC card (as root, using the standard root pass) and issue &lt;br /&gt;
 racadm serveraction hardreset&lt;br /&gt;
then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console:&lt;br /&gt;
 tip jailX&lt;br /&gt;
immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts- look out for the RAID card output- does everything look healthy? The output may be scrambled, look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple times during the boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly then you won’t get any console output, 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
IMPORTANT NOTE: on some older FreeBSD systems, there will be no output to the video (KVM) console as it boots up. The console output is redirected to the serial port ... so if a jail crashes, and you attach a kvm, the output during the bootup procedure will not be shown on the screen. However, when the bootup is done, you will get a login prompt on the screen and will be able to log in as normal.  &amp;lt;tt&amp;gt;/boot/loader.conf&amp;lt;/tt&amp;gt; is where serial console redirect output lives, so comment that if you want to catch output on kvm.&lt;br /&gt;
On newer systems it sends most output to both locations. &lt;br /&gt;
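The serial redirect in &amp;lt;tt&amp;gt;/boot/loader.conf&amp;lt;/tt&amp;gt; is typically a line like the following (shown as a config fragment; the exact variables vary by system):&lt;br /&gt;

```shell
# /boot/loader.conf - send boot/console output to the serial port.
# Comment this out if you want to catch boot output on the KVM instead:
console="comconsole"
```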
&lt;br /&gt;
=== Assess the health of the server ===&lt;br /&gt;
Once the server boots up fully, you should be able to ssh in. Look around- make sure all the mounts are there and reporting the correct size/usage (i.e. /mnt/data1 /mnt/data2 /mnt/data3 - look in /etc/fstab to determine which mount points should be there), check to see if RAID mirrors are healthy. See [[RAID_Cards#Common_CLI_commands_.28megacli.29|megacli]], [[#aaccheck|aaccheck]]&lt;br /&gt;
&lt;br /&gt;
Before you start the jails, you need to run [[#preboot|preboot]]. This will do some assurance checks to make sure things are prepped to start the jails. Any issues that come out of preboot need to be addressed before starting jails.&lt;br /&gt;
&lt;br /&gt;
=== Start jails ===&lt;br /&gt;
[[#Starting_jails:_Quad.2FSafe_Files|More on starting jails]]&lt;br /&gt;
Customer jails (the VPSs) do not start up automatically at boot time. When a FreeBSD machine boots up, it does nothing else. To start jails, we put the commands to start each jail into a shell script(s) and run the script(s). Jail startup is something that needs to be actively monitored, which is why we don&#039;t just run the script automatically. &lt;br /&gt;
&lt;br /&gt;
In order to start jails, we run the quad files: quad1 quad2 quad3 and quad4 (on new systems there is only quad1). If the machine was cleanly rebooted (which wouldn&#039;t be the case if this was a crash), you may run the safe files (safe1 safe2 safe3 safe4) in lieu of quads. &lt;br /&gt;
&lt;br /&gt;
Open up 4 logins to the server (use the windows in [[Screen#Screen_Organization|a9]])&lt;br /&gt;
In each of the 4 windows you will do one of the following:&lt;br /&gt;
&lt;br /&gt;
If there is a [[#startalljails|startalljails]] script (and only quad1), run that command in each of the 4 windows. It will parse through the quad1 file and start each jail. Follow the instructions [[#Problems_with_the_quad.2Fsafe_files|here]] for monitoring startup. Note that you can be a little more lenient with jails that take a while to start- startalljails will work around the slow jails and start the rest. As long as there aren&#039;t 4 jails which are &amp;quot;hung&amp;quot; during startup, the rest will get started eventually.&lt;br /&gt;
	-or-&lt;br /&gt;
If there is no startalljails script, there will be multiple quad files. In each of the 4 windows, start each of the quads. i.e. start quad1 in window1, quad2 in window2 and so on. DO NOT start any quad twice. It will crash the server. If you accidentally do this, just jailkill all the jails which are in the quad and run the quad again. Follow the instructions here for monitoring quad startup.&lt;br /&gt;
&lt;br /&gt;
Note the time the last jail boots- this is what you will enter in the crash log.&lt;br /&gt;
&lt;br /&gt;
Save the crash log.&lt;br /&gt;
&lt;br /&gt;
=== Check to make sure all jails have started ===&lt;br /&gt;
There&#039;s a simple script which will make sure all jails have started, and enter the ipfw counter rules: [[#postboot|postboot]] &lt;br /&gt;
Run postboot, which will do a jailps on each jail it finds (excluding commented out jails) in the quad file(s). We&#039;re looking for 2 things:&lt;br /&gt;
# systems spawning out of control or too many procs&lt;br /&gt;
# jails which haven&#039;t started&lt;br /&gt;
On 7.x and newer systems it will print out the problems (which jails haven&#039;t started) at the conclusion of postboot. &lt;br /&gt;
On older systems you will need to watch closely to see if/when there&#039;s a problem, namely:&lt;br /&gt;
 &lt;br /&gt;
 [hostname] doesnt exist on this server&lt;br /&gt;
&lt;br /&gt;
When you get this message, it means one of 2 things:&lt;br /&gt;
1. the jail really didn&#039;t start:&lt;br /&gt;
When a jail doesn&#039;t start it usually boils down to a problem in the quad file. Perhaps the path name is wrong (data1 vs data2) or the name of the vn/mdfile is wrong. Once this is corrected, you will need to run the commands from the quad file manually, or you may use &amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. the customer has changed their hostname (and not told us) so their jail &#039;&#039;is&#039;&#039; running, just under a different hostname:&lt;br /&gt;
On systems with jls, this is easy to rectify. First, get the customer info: &amp;lt;tt&amp;gt;g &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
Then look for the customer in jls: &amp;lt;tt&amp;gt;jls | grep &amp;lt;col0XXXX&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
From there you will see their new hostname- you should update that hostname in the quad file: don&#039;t forget to edit it on the &amp;lt;tt&amp;gt;## begin ##&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;## end ##&amp;lt;/tt&amp;gt; lines, and in mgmt. &lt;br /&gt;
On older systems without jls, this will be harder; you will need to look further to find their hostname- perhaps it&#039;s in their /etc/rc.conf&lt;br /&gt;
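One quick way to check rc.conf is to pull the hostname= value out with sed. This is a sketch under assumptions: the -DIR path below is a stand-in (on a real host it would be something like the jail&#039;s /mnt/dataN/IP-colXXXXX-DIR mount), and the sample rc.conf is created just for illustration:&lt;br /&gt;

```shell
# Hypothetical example: recover a renamed jail's hostname from its rc.conf.
dir=/tmp/col0xxxx-DIR.sample        # stand-in for the jail's -DIR mount
mkdir -p "$dir/etc"
echo 'hostname="newname.customer.com"' > "$dir/etc/rc.conf"

# Strip the value out of the hostname="..." line:
jailhost=$(sed -n 's/^hostname="\(.*\)"/\1/p' "$dir/etc/rc.conf")
echo "$jailhost"
```

Once found, update that hostname in the quad file (including both echo lines) and in mgmt.&lt;br /&gt;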
&lt;br /&gt;
&lt;br /&gt;
Once all jails are started, do some spot checks- try to ssh or browse to some customers, just to make sure things are really ok.&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 7.x/8.x jail ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (md) ==&lt;br /&gt;
&lt;br /&gt;
NOTE: this doesn’t apply to mx2 which uses gvinum. Use same procedure as 6.x&lt;br /&gt;
NOTE: if you unmount before mdconfig, re-mdconfig (attach) then unmount then mdconfig -u again &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
(parts to change/customize are &amp;lt;tt&amp;gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from &amp;lt;tt&amp;gt;js&amp;lt;/tt&amp;gt;. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new md on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. Umount it (including their devfs) but leave the md config’d (so if you use stopjail, you will have to re-mdconfig it)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mdfile and mount name, and to see the current md device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. When there&#039;s enough room to place new system on an alternate, or the same drive:&lt;br /&gt;
USE CAUTION not to overwrite (with touch or mdconfig) an existing md!!&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s 10g -f /mnt/data3/69.55.234.66-col01334 -u 97&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md97&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new space is on a different drive, move the mount point directory AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
md-unconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
archive old mdfile. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/old-col00241-mdfile-noarchive-20091211&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (vq1) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
(run &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, &amp;lt;tt&amp;gt;du&amp;lt;/tt&amp;gt; the dump file to confirm it matches size, roughly, with usage)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure and remove old mdfile&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
rm /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(there should now be enough space to recreate your bigger system. If not, run sync a couple times)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new system (ok to reuse old mdfile and md#):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;10&amp;lt;/span&amp;gt;g -f /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt; -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
umount nfs:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbu&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If md# changed (or mount point), edit quad (&amp;lt;tt&amp;gt;vq1&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. Optional: archive old mdfile&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;br&amp;gt;&lt;br /&gt;
gzip -c old-col01588-mdfile-noarchive-20120329 &amp;gt; /deprecated/old-col01588-mdfile-noarchive-20120329.gz&amp;lt;br&amp;gt;&lt;br /&gt;
mbu&amp;lt;br&amp;gt;&lt;br /&gt;
rm  old-col01588-mdfile-noarchive-20120329&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 6.x jail (gvinum/gconcat) ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (gvinum/gconcat) ==&lt;br /&gt;
&lt;br /&gt;
(parts to change are &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;highlighted&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new volume on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;[[#stopjail|stopjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;[[#g|g]] &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mount name and existing volume/device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5a. When there&#039;s enough room to place new system on an alternate, or the same drive (using only UNUSED - including if it&#039;s in use by the system in question - gvinum volumes):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure the new device:&amp;lt;br&amp;gt;&lt;br /&gt;
A. for a 2G system (single gvinum volume):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;bsdlabel -r -w /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
-or- &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
B. for a &amp;gt;2G system (create a gconcat volume):&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Other valid gconcat examples:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label v82-v84v109v112 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v109 /dev/gvinum/v112&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat label v82v83 /dev/gvinum/v82 /dev/gvinum/v83&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Note, long names will truncate: v144v145v148-v115 will truncate to v144v145v148-v1 (so you will refer to it as v144v145v148-v1 thereafter)&lt;br /&gt;
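The truncation behavior noted above can be sketched as follows (a minimal illustration only; the 15-character limit is inferred from the example in this doc, not from a spec):

```python
# Sketch of the gconcat label-name truncation described above.
# The 15-character limit is an assumption inferred from the example
# (v144v145v148-v115 becomes v144v145v148-v1).
GEOM_NAME_LIMIT = 15  # assumed from the example above

def effective_label(name: str) -> str:
    """Return the name you must use to refer to the volume afterwards."""
    return name[:GEOM_NAME_LIMIT]

print(effective_label("v144v145v148-v115"))  # v144v145v148-v1
print(effective_label("v82-v84"))            # short names pass through unchanged
```

The practical point: pick label names short enough that the truncated name is still unambiguous.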
&lt;br /&gt;
&lt;br /&gt;
Optional: if the new volume is on a different drive, move the mount point directory (get the drive from the js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; OR &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
A. &amp;lt;tt&amp;gt;mount /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
-or-&amp;lt;br&amp;gt;&lt;br /&gt;
B. &amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to the new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND the new volume (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;), then run &amp;lt;tt&amp;gt;buildsafe&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space (i.e. if you want/need to reuse the existing gvinum volumes and add on more):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
(run df to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;gconcat/v106-v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when it completes WITHOUT errors, du the dump file to confirm its size roughly matches the customer’s usage)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure the old gconcat volume&amp;lt;br&amp;gt;&lt;br /&gt;
list member gvinum volumes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat list &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Output will resemble:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;Geom name: v106v107&lt;br /&gt;
State: UP&lt;br /&gt;
Status: Total=2, Online=2&lt;br /&gt;
Type: AUTOMATIC&lt;br /&gt;
ID: 3530663882&lt;br /&gt;
Providers:&lt;br /&gt;
1. Name: concat/v106v107&lt;br /&gt;
   Mediasize: 4294966272 (4.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e2&lt;br /&gt;
Consumers:&lt;br /&gt;
1. Name: gvinum/sd/v106.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 0&lt;br /&gt;
   End: 2147483136&lt;br /&gt;
2. Name: gvinum/sd/v107.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 2147483136&lt;br /&gt;
   End: 4294966272&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
stop volume and clear members&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat stop &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat clear &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;gvinum/sd/v106.p0.s0 gvinum/sd/v107.p0.s0&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new device (it’s OK to reuse the old/former members)&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v106 /dev/gvinum/v107&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional: if the new volume is on a different drive, move the mount point directory (get the drive from the js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;), run buildsafe&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
TODO: clean up/clear the old gvinum/gconcat volume&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
DEPRECATED: steps to tack a new gvinum volume onto an existing gconcat volume. This leads to a corrupted fs.&lt;br /&gt;
bsdlabel -e /dev/concat/v82-v84&lt;br /&gt;
&lt;br /&gt;
To figure out new size of the c partition, multiply 4194304 by the # of 2G gvinum volumes and subtract the # of 2G volumes:&lt;br /&gt;
10G: 4194304 * 5 - 5 = 20971515&lt;br /&gt;
8G: 4194304 * 4 - 4 = 16777212&lt;br /&gt;
6G: 4194304 * 3 - 3 = 12582909&lt;br /&gt;
4G: 4194304 * 2 - 2 = 8388606&lt;br /&gt;
&lt;br /&gt;
To figure out the new size of the a partition, subtract 16 from the c partition:&lt;br /&gt;
10G: 20971515 - 16 = 20971499&lt;br /&gt;
8G: 16777212 - 16 = 16777196&lt;br /&gt;
6G: 12582909 - 16 = 12582893&lt;br /&gt;
4G: 8388606 - 16 = 8388590&lt;br /&gt;
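The arithmetic above can be sketched as a quick helper (a minimal illustration only, not part of the procedure; 4194304 is the per-2G-volume sector count used in the formulas above):

```python
# c partition size = 4194304 sectors per 2G gvinum volume, times the
# number of volumes, minus one sector per volume; the a partition is
# the c size minus the 16-sector label offset (per the rules above).
SECTORS_PER_2G_VOLUME = 4194304
LABEL_OFFSET = 16

def c_size(n_volumes: int) -> int:
    return SECTORS_PER_2G_VOLUME * n_volumes - n_volumes

def a_size(n_volumes: int) -> int:
    return c_size(n_volumes) - LABEL_OFFSET

for n, gb in [(2, 4), (3, 6), (4, 8), (5, 10)]:
    print(f"{gb}G: c={c_size(n)} a={a_size(n)}")
```

The printed values should match the 4G/6G/8G/10G tables above.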
&lt;br /&gt;
Orig:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a:  8388590       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c:  8388606        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
New:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a: 12582893       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c: 12582909        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
sync; sync&lt;br /&gt;
&lt;br /&gt;
growfs /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
sync&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
(keep running fsck’s till NO errors)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 4.x jail ==&lt;br /&gt;
&lt;br /&gt;
NOTE: if you unmount before vnconfig, re-vnconfig then unmount then vnunconfig&lt;br /&gt;
 &lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them.&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space available from [[#js|js]]. Ideally, you want to dump to a different partition and create the new vn on the current partition. If there’s no space to make the dump, then move old vn’s to backup or, worse, dump to backup.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. make vnfile, newfs and mount. USE CAUTION not to overwrite existing vn. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;vnconfig -T -S 9g -s labels -c /dev/vn30 /mnt/data2/65.214.160.117-col00496&amp;lt;br&amp;gt;&lt;br /&gt;
disklabel -r -w vn30 auto&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/vn30c&amp;lt;br&amp;gt;&lt;br /&gt;
mkdir /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/vn30c /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;dump -0a -f - /dev/vn1 | restore -r -f -&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. &amp;lt;tt&amp;gt;rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. unmount and unvnconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;umount /dev/vnNNc &amp;lt;br&amp;gt;&lt;br /&gt;
vnconfig -u /dev/vnNN&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. move vnfile. Ex: &amp;lt;tt&amp;gt;mv /mnt/data1/69.55.237.26-col00241 /mnt/data1/old-col00241-vnfile-20110312-noarchive&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove old dir: &amp;lt;tt&amp;gt;rmdir /mnt/data1/69.55.237.26-col00241-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. edit quad/safe to point to new location, run &amp;lt;tt&amp;gt;[[#buildsafe|buildsafe]]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start jail: &amp;lt;tt&amp;gt;[[#startjail|startjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
13. mv backups if necessary&lt;br /&gt;
 &lt;br /&gt;
WARNING: if you restore to an unmounted vn then you’re actually restoring to /mnt/data1&lt;br /&gt;
you can tar and do above to recover&lt;br /&gt;
 &lt;br /&gt;
NOTE: you can’t move a customer to a system that has a base sys that is diff from the one they came from (BSD 4.8 to 4.5 = won’t work)&lt;br /&gt;
 &lt;br /&gt;
NOTE: we don’t mount procfs&#039;s to anyone’s system by default.&lt;br /&gt;
&lt;br /&gt;
NOTE: mount_nulls: &lt;br /&gt;
goofy mount_nulls are seen in df -k on jail2&amp;lt;br&amp;gt;&lt;br /&gt;
they also need to be set up in the safe and quad&amp;lt;br&amp;gt;&lt;br /&gt;
if a customer wants another, use the safe/quad as an example and add it to the file; also exec the command at the prompt to add the mount (no reboot, remount or re-jail necessary)&lt;br /&gt;
&lt;br /&gt;
== Moving customer to another jail machine ==&lt;br /&gt;
&lt;br /&gt;
Systems should only be moved between similarly-versioned jails. If you cannot get a match, you should go to freebsd.org and see what the differences were between the version you&#039;re moving from and the new version. Many of the updates will be to drivers and kernels - these don&#039;t affect the customer. Perhaps they updated traceroute or a library...you should copy the new/changed files into the VPS/jail on the new host. An alternate method for moving to a different version would be to buildworld to bring their full distribution up to date. Optional steps for this kind of upgrade are included below.&lt;br /&gt;
&lt;br /&gt;
1. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;[[#jailkill|jailkill]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. create new device on target system&amp;lt;br&amp;gt;&lt;br /&gt;
4.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig -T -S 4g -s labels -c /dev/vn1 /mnt/data1/69.55.22x.x-col00XXX&lt;br /&gt;
disklabel -r -w vn1 auto&lt;br /&gt;
newfs /dev/vn1c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;bsdlabel -r -w /dev/gvinum/v1&lt;br /&gt;
newfs /dev/gvinum/v1a&lt;br /&gt;
 or &lt;br /&gt;
gconcat label v1-v3 /dev/gvinum/v1 /dev/gvinum/v2 /dev/gvinum/v3&lt;br /&gt;
bsdlabel -r -w /dev/concat/v1-v3&lt;br /&gt;
newfs /dev/concat/v1-v3a&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7.x+:&amp;lt;br&amp;gt;&lt;br /&gt;
Run jailmakeempty, then skip steps 4 &amp;amp; 7 below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. make and mount the dir on new system. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;pre&amp;gt;mkdir /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
chmod 755 /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
mount [device] /mnt/data2/69.55.230.3-col00123-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. [[#stopjail|stopjail]] &amp;lt;hostname&amp;gt; 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. dump fs to new system. ex: &amp;lt;tt&amp;gt;[[#dumpremoterestore|dumpremoterestore]] /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&amp;lt;/tt&amp;gt; (make sure you can ssh as root on remote machine) &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. OPTIONAL BUILDWORLD:&amp;lt;br&amp;gt;&lt;br /&gt;
 cd /usr/src&lt;br /&gt;
 make world DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
 cd etc&lt;br /&gt;
 make distribution DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
(you may have to rm an openssh file and re-make dist)&lt;br /&gt;
&lt;br /&gt;
 rm -rf /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/periodic/daily/400.status-disks&lt;br /&gt;
 vi /etc/periodic/security/100.chksetuid&lt;br /&gt;
replace: &amp;lt;tt&amp;gt;MP=`mount -t ufs | grep -v &amp;quot; nosuid&amp;quot; | awk &#039;{ print $3 }&#039; | sort`&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
with: &amp;lt;tt&amp;gt;MP=&#039;/&#039; (use single quotes)&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ask user if they want ports overwritten with current, if yes:&lt;br /&gt;
 cp -r /usr/ports /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/usr&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d .&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d . &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cat /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target:&lt;br /&gt;
 cat &amp;gt; /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. edit the quad on the source system and copy over the entries for the jail to the target system &lt;br /&gt;
take care that the vn/gvinum/gconcat/md devices on the target system aren’t in use and that the /mnt/dataN path doesn’t need changing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. run [[#buildsafe|buildsafe]] (if this is &amp;lt;=6.x) on target system (copies changes made to quad into safe file)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove ip from source system. Ex: &amp;lt;tt&amp;gt;ipdel 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. add ip to target system. Ex: &amp;lt;tt&amp;gt;ipadd 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start new system, with [[#startjail|startjail]] or manually, pasting entries found from running &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; on the new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. run &amp;lt;tt&amp;gt;[[#canceljail|canceljail]] col0xxxx&amp;lt;/tt&amp;gt; on source system. This should prompt you to remove backups if any existed. Do not let it do this, rather:&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
13. if backups existed move them to the new host via the &amp;lt;tt&amp;gt;[[#mvbackups_.28freebsd.29|mvbackups]] script&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
14. edit quad on source system, edit comment to reflect a move rather than a cancel, ex: &amp;lt;tt&amp;gt;# moved to jail2 col00241&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
15. edit mgmt to reflect new host and dir for new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
16. optional add ipfw rules on new system. Ex:&lt;br /&gt;
&amp;lt;tt&amp;gt;ipfw add 01231 count ip from 69.55.230.2 to any&lt;br /&gt;
ipfw add 01232 count ip from any to 69.55.230.2&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Increasing inodes for a VPS ==&lt;br /&gt;
&lt;br /&gt;
when doing the newfs: &lt;br /&gt;
 newfs -i  4096 ...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Jail services unable to contact services within same jail ==&lt;br /&gt;
&lt;br /&gt;
This is due to a messed up routing table (only seen it on jail2 before). &lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.228.65/32&lt;br /&gt;
 route add -net 69.55.228.65/32 -iface fxp0 -nostatic -cloning&lt;br /&gt;
 route add 69.55.228.65 -iface lo0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== /dev/null permission resets == &lt;br /&gt;
&lt;br /&gt;
Applies to 4.x jails only&lt;br /&gt;
&lt;br /&gt;
For some reason that I do not understand at all, the /dev/null node in customer jails, after the system is restarted, often reverts permissions to 0600 ... which is bad, because non-root processes cannot redirect to /dev/null anymore.  In fact, a fair number of server daemons will complain and/or fail if this is the case.&lt;br /&gt;
&lt;br /&gt;
So, after a reboot, once all systems have restarted, run postboot, which does something like:&lt;br /&gt;
 &lt;br /&gt;
 for f in `df -k | grep /dev/vn | awk &#039;{print $6}&#039;` ; do chmod 0666 $f/dev/null ; done&lt;br /&gt;
&lt;br /&gt;
Later, after the system is up, if anyone ever complains about their /dev/null permissions for any reason ... really if anyone mentions&lt;br /&gt;
/dev/null in any way, tell them to run:&lt;br /&gt;
&lt;br /&gt;
 chmod 0666 /dev/null&lt;br /&gt;
&lt;br /&gt;
We have ultimately fixed this by adding chmods to the quad/safe&#039;s&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== mknod in a jail == &lt;br /&gt;
&lt;br /&gt;
For a customer who wants to run a chroot name server (named) in a jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /mnt/data1/&amp;lt;user directory&amp;gt;/var/named&lt;br /&gt;
mknod -c 1 1 null&lt;br /&gt;
mknod -c 2 3 random&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Postfix Problems ==&lt;br /&gt;
&lt;br /&gt;
This hasn&#039;t happened in a long time, probably no longer valid with modern OS&#039;s&lt;br /&gt;
&lt;br /&gt;
Postfix is an alternate MTA - people replace sendmail with it in much the same way that many people also replace sendmail with qmail.  The problem is that if you install postfix inside of a jail, by default it will not work properly - the user will see error messages like this:&lt;br /&gt;
&lt;br /&gt;
incoming mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 mail_queue_enter: create file incoming/298266.55902: File too large&lt;br /&gt;
&lt;br /&gt;
and further, sending mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 postdrop: warning: mail_queue_enter: create file maildrop/627930.56676: File too large&lt;br /&gt;
&lt;br /&gt;
This is very easy to solve.  In fact, the FreeBSD welcome email that is sent out by `jailmake` now contains this block of text:&lt;br /&gt;
&lt;br /&gt;
- if you plan on installing postfix, email us first and ask for a necessary patch.&lt;br /&gt;
&lt;br /&gt;
So, anytime you see a support email that says anything about postfix not working, paste that line from the welcome email into your response to them, and right away, forward the postfix patch to them in an email.&lt;br /&gt;
&lt;br /&gt;
Here are the instructions:&lt;br /&gt;
&lt;br /&gt;
Simply copy the patch into /usr/ports/mail/postfix/files, rename the file to ‘patch-file_limit.c’ and then cd to /usr/ports/mail/postfix and run `make install`.&lt;br /&gt;
&lt;br /&gt;
The postfix patch is the very first email in the support email box, and we never delete or save it because we always want it there to forward to people.  The instructions in the email are very clear, and it solves their problem 100% of the time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems with `find` ==&lt;br /&gt;
&lt;br /&gt;
There was a bug in the FreeBSD vn-filesystem code.  The bug causes the system to crash or hang when someone runs the `find` command inside their vn-backed filesystem (and even though the bug may have been fixed, due to the i/o overhead, we still don&#039;t like find).&lt;br /&gt;
&lt;br /&gt;
Now, this does not always happen - however, any jail machine with more than 20 systems on it will crash every night if every system on it runs the daily periodic script out of their crontab, and updates their locate database with the find command.&lt;br /&gt;
&lt;br /&gt;
(the locate database is a small, FreeBSD specific database that is populated nightly from a cron job, and is then used to provide fast answers to the `locate` command)&lt;br /&gt;
&lt;br /&gt;
In normal systems (32-40 jails on them), all that has to be done is make sure nobody runs the daily periodic.  This means that the file /etc/crontab, instead of looking like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# do daily/weekly/monthly maintenance&lt;br /&gt;
1      3       *       *       *       root    periodic daily&lt;br /&gt;
15     4       *       *       6       root    periodic weekly&lt;br /&gt;
30     5       1       *       *       root    periodic monthly&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Needs to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# DO NOT UNCOMMENT THESE - contact support@johncompanies.com for details&lt;br /&gt;
#1      3       *       *       *       root    periodic daily&lt;br /&gt;
#15     4       *       *       6       root    periodic weekly&lt;br /&gt;
#30     5       1       *       *       root    periodic monthly&lt;br /&gt;
## DO NOT UNCOMMENT THESE ^^^^^^^^&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the problem will generally not occur at all.  New filesystem images are always altered in this way, so all systems are set like this.&lt;br /&gt;
&lt;br /&gt;
However, if you have enough systems running, even casual uses of find, outside of the periodic scripts, can cause the system to crash.  We are seeing this on jail13, which crashes about once per week.&lt;br /&gt;
&lt;br /&gt;
There is not yet any resolution to this problem.&lt;br /&gt;
&lt;br /&gt;
So, if you have a freebsd system crash, see if anyone has uncommented their daily periodic line - you can see them for everyone by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;for f in `df -k | grep vn | awk &#039;{print $6}&#039;` ; do echo `cat $f/etc/crontab | grep &amp;quot;periodic daily&amp;quot;` $f ; done | more&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(all one line, one command)&lt;br /&gt;
&lt;br /&gt;
And then edit those /etc/crontabs and comment them out again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems un-mounting - and with mount_null’s ==&lt;br /&gt;
&lt;br /&gt;
If you cannot unmount a filesystem because it says the filesystem is busy, it is usually one of the following:&lt;br /&gt;
&lt;br /&gt;
a) the jail is still running&lt;br /&gt;
&lt;br /&gt;
b) you are actually in that directory, even though the jail is stopped&lt;br /&gt;
&lt;br /&gt;
c) there are still dev, null_mount or linprocfs mount points mounted inside that directory.&lt;br /&gt;
&lt;br /&gt;
d) you are trying to umount null_mounts whose paths are really long and get an error like “No such file or directory”; this is an OS bug where the dir name is truncated. No known fix&lt;br /&gt;
&lt;br /&gt;
e) there are still files open somewhere inside the dir. Use &amp;lt;tt&amp;gt;fstat | grep &amp;lt;cid&amp;gt;&amp;lt;/tt&amp;gt; to find the process that has files open&lt;br /&gt;
&lt;br /&gt;
f) Starting with 6.x, the jail mechanism does a poor job of keeping track of processes running in a jail and if it thinks there are still procs running, it will refuse to umount the disk. If this is happening you should see a low number in the #REF column when you run jls. In this case you &#039;&#039;can&#039;&#039; safely &amp;lt;tt&amp;gt;umount -f&amp;lt;/tt&amp;gt; the mount. &lt;br /&gt;
&lt;br /&gt;
Please note -if you forcibly unmount a (4.x) filesystem that has null_mounts&lt;br /&gt;
still mounted in it, the system &#039;&#039;&#039;will crash&#039;&#039;&#039; within 10-15 mins.&lt;br /&gt;
&lt;br /&gt;
== Misc jail Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space on jail2, jail8, jail9, a couple jails on jail17, jail4, jail12 and jail18.&lt;br /&gt;
Even though the vn file shows 4G size, it doesn’t actually occupy that amount of space on the disk. So be careful not to fill up drives where we’re overselling – use oversellcheck to confirm you’re not oversold by more than 10G.&lt;br /&gt;
There are other truncated jails; they are generally noted in the file /root/truncated on the root system&lt;br /&gt;
&lt;br /&gt;
The act of moving a truncated vn to another system undoes the truncation: the truncated vn is filled with 0’s and comes to occupy the full physical disk space for which it’s configured. So, you should use dumpremote to preserve the truncation.&lt;br /&gt;
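The sparse-file behavior behind the truncated vn files can be illustrated with a minimal Python sketch (not part of the runbook; the temp file here is a stand-in for a truncated vn file):

```python
# Illustration of why a truncated (sparse) vn file can claim a large
# size while occupying far less physical disk space: writing past a
# hole only allocates the blocks actually written. A naive copy or
# move that does not preserve holes materializes the zeros.
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.seek(4 * 1024 * 1024)   # leave a 4 MB hole, analogous to a truncated vn
    f.write(b"x")             # only this tail region is actually written
    path = f.name

st = os.stat(path)
print("apparent size:", st.st_size)            # what the vn file "shows"
print("allocated bytes:", st.st_blocks * 512)  # what it really occupies (fs-dependent)
os.unlink(path)
```

On filesystems that support sparse files, the allocated byte count stays far below the apparent size until the hole is filled in.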
&lt;br /&gt;
* if you are getting disk full messages for a BSD customer, it&#039;s fairly safe to clear out their /usr/ports/distfiles dir&lt;br /&gt;
* 4.x: ps and top can only be run by root in these jails.  Done on purpose:&lt;br /&gt;
As for `ps` and `top` - non-root users can run them, just not successfully because we have locked the permissions on /dev/mem and /dev/kmem to be root-readable only.  That is why non-root users cannot successfully run `ps` and `top`.&lt;br /&gt;
* user quotas do not work on freebsd jails - you cannot set up quotas at all, and that&#039;s that.&lt;br /&gt;
* You cannot inject a process into a 4.x jail, only HUP running processes. 6.x and onward you can with jexec&lt;br /&gt;
* jails see base machine’s uptime/load when running top/w&lt;br /&gt;
* if someone is unable to get in (can’t ping, etc), see if they were blocked by castle (DoS), and see if their ip is on the system (post-reboot it can be lost if it wasn’t in rc.conf; preboot should catch that)&lt;br /&gt;
* in FreeBSD you can’t su to root unless you belong to the wheel group – so if you remove your acct and set up a new one, we have to add it to the wheel group (add to /etc/group)&lt;br /&gt;
* dmesg from the underlying system is seen in a customer’s dmesg in a jail&lt;br /&gt;
* A popper process going crazy @ 40% for 10 min usually = someone who leaves mail on the server&lt;br /&gt;
* Don’t force umounts on 4.x jails – it crashes the machine, generally ok on newer machines (running md)&lt;br /&gt;
* Good book for admin http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?userid=t824VyRAYz&amp;amp;isbn=0596005164&amp;amp;itm=2&lt;br /&gt;
* Self-sign ssl cert http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#selfcert&lt;br /&gt;
* conversation with Glenn about semaphores and pgsql&lt;br /&gt;
&amp;lt;pre&amp;gt;SDBoody: hey, these are valid amounts/figures/increments right:&lt;br /&gt;
 kern.ipc.semmni=1280&lt;br /&gt;
 kern.ipc.semmns=1280&lt;br /&gt;
 &lt;br /&gt;
gr8feen: probably... I always forget exactly what those are, so I usually have to look them up&lt;br /&gt;
SDBoody: semaphores&lt;br /&gt;
SDBoody: i took the current 1024 and added 256 to them&lt;br /&gt;
SDBoody: need more for pgsql&lt;br /&gt;
gr8feen: I meant the mni and mns parts...&lt;br /&gt;
gr8feen: some of those are not ones you just want to add to...   hang on a sec and I&#039;ll look them up..&lt;br /&gt;
gr8feen: what&#039;s semmsl set to?&lt;br /&gt;
SDBoody: kern.ipc.semmsl: 1024&lt;br /&gt;
 &lt;br /&gt;
SDBoody: kern.ipc.msgseg: 2048 &lt;br /&gt;
 kern.ipc.msgssz: 8 &lt;br /&gt;
 kern.ipc.msgtql: 40 &lt;br /&gt;
 kern.ipc.msgmnb: 2048 &lt;br /&gt;
 kern.ipc.msgmni: 40 &lt;br /&gt;
 kern.ipc.msgmax: 16384 &lt;br /&gt;
 kern.ipc.semaem: 16384 &lt;br /&gt;
 kern.ipc.semvmx: 65534 &lt;br /&gt;
 kern.ipc.semusz: 152 &lt;br /&gt;
 kern.ipc.semume: 10 &lt;br /&gt;
 kern.ipc.semopm: 100 &lt;br /&gt;
 kern.ipc.semmsl: 1024 &lt;br /&gt;
 kern.ipc.semmnu: 512 &lt;br /&gt;
 kern.ipc.semmns: 1024 &lt;br /&gt;
 kern.ipc.semmni: 1024 &lt;br /&gt;
 kern.ipc.semmap: 768 &lt;br /&gt;
 kern.ipc.shm_allow_removed: 0 &lt;br /&gt;
 kern.ipc.shm_use_phys: 1 &lt;br /&gt;
 kern.ipc.shmall: 262144 &lt;br /&gt;
 kern.ipc.shmseg: 256 &lt;br /&gt;
 kern.ipc.shmmni: 784 &lt;br /&gt;
 kern.ipc.shmmin: 1 &lt;br /&gt;
 kern.ipc.shmmax: 536870912 &lt;br /&gt;
 kern.ipc.maxsockets: 25600 &lt;br /&gt;
 &lt;br /&gt;
gr8feen: ok...msl is max per id, mni is max ids, mns is max number of semaphores... so you probably want something like mns = mni * msl&lt;br /&gt;
gr8feen: which one did you run out of?&lt;br /&gt;
SDBoody: not sure how to tell- ipcs shows the sems in use add up to 1024&lt;br /&gt;
SDBoody: there are 59 entries&lt;br /&gt;
gr8feen: I&#039;m assuming you tried to start postgres and it failed?&lt;br /&gt;
SDBoody: yes&lt;br /&gt;
gr8feen: it should have logged why, somewhere..&lt;br /&gt;
gr8feen: if I recall, it&#039;ll tell you which one it ran out of&lt;br /&gt;
SDBoody: &amp;gt; DETAIL:  Failed system call was semget(1, 17, 03600).&lt;br /&gt;
 &lt;br /&gt;
gr8feen: so it wanted an id with 17 semaphores...I&#039;d start by making mns = 17*mni and leave mni and mnl set to what they are now and see what it does&lt;br /&gt;
SDBoody: i think mni is plenty high&lt;br /&gt;
SDBoody: ok, more reasonable 17408&lt;br /&gt;
gr8feen: yeah...just change that one and see how it goes..&lt;br /&gt;
SDBoody: and leave mni alone at 1024?&lt;br /&gt;
gr8feen: yeah...mni id the max number of ids...but if it&#039;s trying to get something like 17 per id, your going to hit mns before you hit anything else&lt;br /&gt;
SDBoody: right, but doesn&#039;t hurt to have it that high (assuming)&lt;br /&gt;
&lt;br /&gt;
gr8feen: not really...  I think those get allocated out of ram that you cant page out, but it&#039;s still such a small amount that it really doesn&#039;t matter&lt;br /&gt;
SDBoody: looks like that worked, thx!&lt;br /&gt;
gr8feen: cool&lt;br /&gt;
&lt;br /&gt;
see semaphores: ipcs -a -s&amp;lt;/pre&amp;gt;&lt;br /&gt;
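The sizing rule from the conversation (semmns should cover semaphores-per-id times max ids) can be sanity-checked with plain shell arithmetic; the 17 below comes from the failed semget(1, 17, 03600) call:&lt;br /&gt;

```shell
# Sanity-check the semaphore sizing discussed above:
# kern.ipc.semmns should be at least (sems requested per id) * (max ids).
per_id=17      # from the failed semget(1, 17, 03600) call
semmni=1024    # current kern.ipc.semmni
new_semmns=$((per_id * semmni))
echo "$new_semmns"   # prints 17408, the value that fixed postgres
```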
&lt;br /&gt;
= FreeBSD VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
These files are located in /usr/local/jail/rc.d and /usr/local/jail/bin&lt;br /&gt;
&lt;br /&gt;
== jailmake ==&lt;br /&gt;
&lt;br /&gt;
Applies to 7.x+ &lt;br /&gt;
On older systems syntax differs, run jailmake once to see.&lt;br /&gt;
&lt;br /&gt;
Note: this procedure differs on mx2 which is 7.x but still uses gvinum&lt;br /&gt;
&lt;br /&gt;
#	run js to figure out which md&#039;s are in use, which disk has enough space, and which IP to put it on&lt;br /&gt;
#	use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
#	copy over dir, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;Usage: jailmake IP[,IP] CID disk[1|2|3] md# hostname shorthost ipfw# email [size in GB]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex: &lt;br /&gt;
&lt;br /&gt;
 Jail2# jailmake 69.55.234.66 col01334 3 97 vps.bsd.it vps 1334 fb@bsd.it&lt;br /&gt;
&lt;br /&gt;
== jailps ==&lt;br /&gt;
 jailps [hostname]&lt;br /&gt;
DEPRECATED FOR jps: displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname of the jail you wish to query. If you don’t &lt;br /&gt;
supply an argument, all processes on the machine are listed and grouped by jail. &lt;br /&gt;
&lt;br /&gt;
== jps ==&lt;br /&gt;
 jps [hostname]&lt;br /&gt;
displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname or ID of the jail you wish to query. &lt;br /&gt;
&lt;br /&gt;
== jailkill ==&lt;br /&gt;
 jailkill &amp;lt;hostname&amp;gt;&lt;br /&gt;
stops all processes running in a jail.&lt;br /&gt;
&lt;br /&gt;
You can also run:&lt;br /&gt;
 jailkill &amp;lt;JID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== problems ===&lt;br /&gt;
Occasionally you will hit an issue where a jail will not kill off:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailkill www.domain.com&lt;br /&gt;
www.domain.com .. killed: none&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
This is because no processes are running under that hostname.  You cannot use jailps.pl either:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailps www.domain.com&lt;br /&gt;
www.domain.com doesn’t exist on this server&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reasons for this are usually:&lt;br /&gt;
* the jail is no longer running&lt;br /&gt;
&lt;br /&gt;
* the jail&#039;s hostname has changed&lt;br /&gt;
In this case, &lt;br /&gt;
&lt;br /&gt;
&amp;gt;=6.x: run a &amp;lt;tt&amp;gt;jls|grep &amp;lt;jail&#039;s IP&amp;gt;&amp;lt;/tt&amp;gt; to find the correct hostname, then update the quad file, then kill the jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;6.x: the first step is to cat their /etc/rc.conf file to see if you can tell what they set the new hostname to.  This very often works.  For example:&lt;br /&gt;
&lt;br /&gt;
 cat /mnt/data2/198.78.65.136-col00261-DIR/etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
But maybe they set the hostname with the hostname command, and the original hostname is still in /etc/rc.conf.&lt;br /&gt;
&lt;br /&gt;
The welcome email clearly states that they should tell us if they change their hostname, so there is no problem in just emailing them and asking them what they set the new hostname to.&lt;br /&gt;
&lt;br /&gt;
Once you know the new hostname OR if a customer simply emails to inform you that they have set the hostname to something different, you need to edit the quad and safe files that their system is in to input the new hostname.&lt;br /&gt;
&lt;br /&gt;
However, if push comes to shove and you cannot find out the hostname from them or from their system, then you need to start doing some detective work.&lt;br /&gt;
&lt;br /&gt;
The easiest thing to do is run jailps looking for a hostname similar to their original hostname. Or you could get into the /bin/sh shell by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then looking at every hostname of every process:&lt;br /&gt;
&lt;br /&gt;
 for f in `ls /proc` ; do cat /proc/$f/status ; done&lt;br /&gt;
&lt;br /&gt;
and scanning for a hostname that is either similar to their original hostname, or that you don&#039;t see in any of the quad safe files.&lt;br /&gt;
&lt;br /&gt;
This is very brute force though, and it is possible that catting every file in /proc is dangerous - I don&#039;t recommend it.  A better thing would be to identify any processes that you know belong to this system – perhaps the reason you are trying to find this system is because they are running something bad - and just catting the status from only that PID.&lt;br /&gt;
&lt;br /&gt;
Somewhere there&#039;s a jail where there may be 2 systems named www. Look at each /etc/rc.conf and make sure they&#039;re both really www. If they are, jailkill www, then jailps www to make sure nothing is still running. Then immediately restart the other one under its FQDN (as found from a reverse nslookup).&lt;br /&gt;
&lt;br /&gt;
* on &amp;gt;=6.x the hostname may not yet be hashed:&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9 /# jls&lt;br /&gt;
 JID Hostname                    Path                                  IP Address(es)&lt;br /&gt;
   1 bitnet.dgate.org            /mnt/data1/69.55.232.50-col02094-DIR  69.55.232.50&lt;br /&gt;
   2 ns3.hctc.net                /mnt/data1/69.55.234.52-col01925-DIR  69.55.234.52&lt;br /&gt;
   3 bsd1                        /mnt/data1/69.55.232.44-col00155-DIR  69.55.232.44&lt;br /&gt;
   4 let2.bbag.org               /mnt/data1/69.55.230.92-col00202-DIR  69.55.230.92&lt;br /&gt;
   5 post.org                    /mnt/data2/69.55.232.51-col02095-DIR  69.55.232.51 ...&lt;br /&gt;
   6 ns2                         /mnt/data1/69.55.232.47-col01506-DIR  69.55.232.47 ...&lt;br /&gt;
   7 arlen.server.net            /mnt/data1/69.55.232.52-col01171-DIR  69.55.232.52&lt;br /&gt;
   8 deskfood.com                /mnt/data1/69.55.232.71-col00419-DIR  69.55.232.71&lt;br /&gt;
   9 mirage.confluentforms.com   /mnt/data1/69.55.232.54-col02105-DIR  69.55.232.54 ...&lt;br /&gt;
  10 beachmember.com             /mnt/data1/69.55.232.59-col02107-DIR  69.55.232.59&lt;br /&gt;
  11 www.agottem.com             /mnt/data1/69.55.232.60-col02109-DIR  69.55.232.60&lt;br /&gt;
  12 sdhobbit.myglance.org       /mnt/data1/69.55.236.82-col01708-DIR  69.55.236.82&lt;br /&gt;
  13 ns1.jnielsen.net            /mnt/data1/69.55.234.48-col00204-DIR  69.55.234.48 ...&lt;br /&gt;
  14 ymt.rollingegg.net          /mnt/data2/69.55.236.71-col01678-DIR  69.55.236.71&lt;br /&gt;
  15 verse.unixlore.net          /mnt/data1/69.55.232.58-col02131-DIR  69.55.232.58&lt;br /&gt;
  16 smcc-mail.org               /mnt/data2/69.55.232.68-col02144-DIR  69.55.232.68&lt;br /&gt;
  17 kasoutsuki.w4jdh.net        /mnt/data2/69.55.232.46-col02147-DIR  69.55.232.46&lt;br /&gt;
  18 dili.thium.net              /mnt/data2/69.55.232.80-col01901-DIR  69.55.232.80&lt;br /&gt;
  20 www.tekmarsis.com           /mnt/data2/69.55.232.66-col02155-DIR  69.55.232.66&lt;br /&gt;
  21 vps.yoxel.net               /mnt/data2/69.55.236.67-col01673-DIR  69.55.236.67&lt;br /&gt;
  22 smitty.twitalertz.com       /mnt/data2/69.55.232.84-col02153-DIR  69.55.232.84&lt;br /&gt;
  23 deliver4.klatha.com         /mnt/data2/69.55.232.67-col02160-DIR  69.55.232.67&lt;br /&gt;
  24 nideffer.com                /mnt/data2/69.55.232.65-col00412-DIR  69.55.232.65&lt;br /&gt;
  25 usa.hanyuan.com             /mnt/data2/69.55.232.57-col02163-DIR  69.55.232.57&lt;br /&gt;
  26 daifuku.ppbh.com            /mnt/data2/69.55.236.91-col01720-DIR  69.55.236.91&lt;br /&gt;
  27 collins.greencape.net       /mnt/data2/69.55.232.83-col01294-DIR  69.55.232.83&lt;br /&gt;
  28 ragebox.com                 /mnt/data2/69.55.230.104-col01278-DIR 69.55.230.104&lt;br /&gt;
  29 outside.mt.net              /mnt/data2/69.55.232.72-col02166-DIR  69.55.232.72&lt;br /&gt;
  30 vps.payneful.ca             /mnt/data2/69.55.234.98-col01999-DIR  69.55.234.98&lt;br /&gt;
  31 higgins                     /mnt/data2/69.55.232.87-col02165-DIR  69.55.232.87 ...&lt;br /&gt;
  32 ozymandius                  /mnt/data2/69.55.228.96-col01233-DIR  69.55.228.96&lt;br /&gt;
  33 trusted.realtors.org        /mnt/data2/69.55.238.72-col02170-DIR  69.55.238.72&lt;br /&gt;
  34 jc1.flanderous.com          /mnt/data2/69.55.239.22-col01504-DIR  69.55.239.22&lt;br /&gt;
  36 guppylog.com                /mnt/data2/69.55.238.73-col00036-DIR  69.55.238.73&lt;br /&gt;
  40 haliohost.com               /mnt/data2/69.55.234.41-col01916-DIR  69.55.234.41 ...&lt;br /&gt;
  41 satyr.jorge.cc              /mnt/data1/69.55.232.70-col01963-DIR  69.55.232.70&lt;br /&gt;
jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
ERROR: jail_: jail &amp;quot;satyr,jorge,cc&amp;quot; not found&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note how it&#039;s saying &amp;lt;tt&amp;gt;satyr,jorge,cc&amp;lt;/tt&amp;gt; is not found, and not &amp;lt;tt&amp;gt;satyr.jorge.cc&amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The jail subsystem tracks jails using comma-delimited hostnames. That mapping is regenerated every few hours:&lt;br /&gt;
&lt;br /&gt;
 jail9 /# crontab -l&lt;br /&gt;
 0 0,6,12,18 * * * /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
So if we run this manually:&lt;br /&gt;
 jail9 /# /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
Then kill the jail:&lt;br /&gt;
 jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
 successfully killed: satyr,jorge,cc&lt;br /&gt;
&lt;br /&gt;
It worked.&lt;br /&gt;
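The comma-for-dot form that sync_jail_names maintains is a simple character translation, so you can predict a jail&#039;s internal name from its hostname:&lt;br /&gt;

```shell
# The jail subsystem stores hostnames with dots replaced by commas;
# translate a hostname into the internal form it is tracked under.
echo "satyr.jorge.cc" | tr '.' ','   # prints satyr,jorge,cc
```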
&lt;br /&gt;
&lt;br /&gt;
If you ever see this when trying to kill a jail:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# jailkill e-scribe.com&lt;br /&gt;
killing JID: 6 hostname: e-scribe.com&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailkill|jailkill]]&amp;lt;/tt&amp;gt; probably got lost trying to kill off the jail. Just ctrl-c the jailkill process, then run a jailps on the hostname, and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; any process which is still running. Keep running jailps and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; till all processes are gone.&lt;br /&gt;
&lt;br /&gt;
== jailpsall ==&lt;br /&gt;
 jailpsall&lt;br /&gt;
will run a jailps on all jails configured in the quad files (this is different from&lt;br /&gt;
jailps with no arguments as it won’t help you find a “hidden” system)&lt;br /&gt;
&lt;br /&gt;
== jailpsw ==&lt;br /&gt;
 jailpsw&lt;br /&gt;
will run a jailps with an extra -w to provide wider output&lt;br /&gt;
&lt;br /&gt;
== jt (&amp;gt;=7.x) ==&lt;br /&gt;
 jt&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;gt;=7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
a wrapper for top displaying processes on the server and which jail owns them. Constantly updates, like top. &lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;lt;7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== stopjail ==&lt;br /&gt;
 stopjail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will jailkill, umount and vnconfig –u a jail. If passed an optional 2nd&lt;br /&gt;
argument, it will not exit before umounting and un-vnconfig’ing in the event&lt;br /&gt;
jailkill returns no processes killed. This is useful if you just want to umount&lt;br /&gt;
and vnconfig –u a jail you’ve already killed. It is intelligent in that it won’t &lt;br /&gt;
try to umount or vnconfig –u if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== startjail ==&lt;br /&gt;
 startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
this will start vnconfig, mount (including linprocfs and null-mounts), and start a jail.&lt;br /&gt;
Essentially, it reads the jail’s relevant block from the right quad file and executes it.&lt;br /&gt;
It is intelligent in that it won’t try to mount or vnconfig if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== jpid ==&lt;br /&gt;
 jpid &amp;lt;pid&amp;gt;&lt;br /&gt;
displays information about a process – including which jail owns it.&lt;br /&gt;
It’s the equivalent of running cat /proc/&amp;lt;pid&amp;gt;/status&lt;br /&gt;
&lt;br /&gt;
== canceljail ==&lt;br /&gt;
 canceljail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will stop a jail (the equivalent of stopjail), check for backups (offer to remove them &lt;br /&gt;
from the backup server and the backup.config), rename the vnfile, remove the dir, and &lt;br /&gt;
edit quad/safe. If passed an optional 2nd argument, it will not exit upon failing to kill&lt;br /&gt;
any processes owned by the jail. This is useful if you just want to cancel a jail which &lt;br /&gt;
is already stopped.&lt;br /&gt;
&lt;br /&gt;
== jls ==&lt;br /&gt;
 jls [-v]&lt;br /&gt;
Lists all jails running:&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF IP Address      Hostname                     Path&lt;br /&gt;
 101  135 69.55.224.148   mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#REF is the number of references or procs(?) running&lt;br /&gt;
&lt;br /&gt;
Running with -v will give you all IPs assigned to each jail (7.2 up)&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF Hostname                     Path                                  IP Address(es)&lt;br /&gt;
 101  139 mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR 69.55.224.148 69.55.234.85&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== startalljails ==&lt;br /&gt;
 startalljails&lt;br /&gt;
7.2+ only. This will parse through quad1 and start all jails. It utilizes lockfiles so it won’t try to start a jail more than once- therefore multiple instances can be running in parallel without fear of starting a jail twice. If a jail startup gets stuck, you can ^C without fear of killing the script. IMPORTANT- before running startalljails you should make sure you ran preboot once as it will clear out all the lockfiles and enable startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== aaccheck.sh ==&lt;br /&gt;
 aaccheck.sh&lt;br /&gt;
displays the output of container list and task list from aaccli&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update jail scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== buildsafe ==&lt;br /&gt;
 buildsafe&lt;br /&gt;
creates safe files based on quads (automatically removing the fsck’s). This will destructively overwrite safe files&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be setup as a cronjob to watch processes on a jail when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== checkprio.pl ==&lt;br /&gt;
 checkprio.pl&lt;br /&gt;
will look for any process (other than the current shell’s csh, sh, sshd procs) with a non-normal priority and normalize it&lt;br /&gt;
&lt;br /&gt;
== diskusagemon == &lt;br /&gt;
 diskusagemon &amp;lt;mount point&amp;gt; &amp;lt;1k blocks&amp;gt;&lt;br /&gt;
watches a mount point’s disk use, when it reaches the level specified in the 2nd argument,&lt;br /&gt;
it exits. This is useful when doing a restore and you want to be paged as it’s nearing completion.&lt;br /&gt;
Best used as: &amp;lt;tt&amp;gt;diskusagemon /asd/asd 1234; pagexxx&amp;lt;/tt&amp;gt;&lt;br /&gt;
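A minimal sketch of what the polling loop presumably looks like (a hypothetical re-implementation based on the description above, not the actual script):&lt;br /&gt;

```shell
# Hypothetical sketch of diskusagemon's loop: poll the used 1K blocks
# on a mount point with df and return once the threshold is reached.
wait_for_usage() {
    mnt=$1; limit=$2
    while :; do
        # -P keeps df output on one line per filesystem; column 3 is Used
        used=$(df -kP "$mnt" | awk 'NR==2 {print $3}')
        [ "$used" -ge "$limit" ] && break
        sleep 60
    done
    echo "$mnt reached $used of $limit 1k blocks"
}
```

Chained with a pager as in the usage above: &amp;lt;tt&amp;gt;wait_for_usage /mnt/data1 1234; pagexxx&amp;lt;/tt&amp;gt;.&lt;br /&gt;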
&lt;br /&gt;
== dumprestore ==&lt;br /&gt;
 dumprestore &amp;lt;dumpfile&amp;gt;&lt;br /&gt;
this is a perl expect script which automatically enters ‘1’ and ‘y’. It seems to cause restore to fail&lt;br /&gt;
to set owner permissions on large restores.&lt;br /&gt;
&lt;br /&gt;
== g ==&lt;br /&gt;
 g &amp;lt;search&amp;gt;&lt;br /&gt;
greps the quad/safe files for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gather.pl ==&lt;br /&gt;
 gather.pl&lt;br /&gt;
gathers up data about jails configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== ipfwbackup ==&lt;br /&gt;
 ipfwbackup&lt;br /&gt;
writes ipfw traffic count data to a logfile&lt;br /&gt;
&lt;br /&gt;
== ipfwreset ==&lt;br /&gt;
 ipfwreset&lt;br /&gt;
writes ipfw traffic count data to a logfile and resets counters to 0&lt;br /&gt;
&lt;br /&gt;
== js ==&lt;br /&gt;
 js&lt;br /&gt;
output varies by OS version, but generally provides information about the base jail:&lt;br /&gt;
- which vn’s are in use&lt;br /&gt;
- disk usage&lt;br /&gt;
- info about the contents of quads&lt;br /&gt;
- the # of inodes represented by the jails contained in the group (133.2 in the example below), and how many jails per data mount, as well as subtotals&lt;br /&gt;
- ips bound to the base machine but not in use by a jail&lt;br /&gt;
- free gvinum volumes, or unused vn’s or used md’s&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/local/jail/rc.d/quad1:&lt;br /&gt;
        /mnt/data1 133.2 (1)&lt;br /&gt;
        /mnt/data2 1040.5 (7)&lt;br /&gt;
        total 1173.7 (8)&lt;br /&gt;
/usr/local/jail/rc.d/quad2:&lt;br /&gt;
        /mnt/data1 983.4 (6)&lt;br /&gt;
        total 983.4 (6)&lt;br /&gt;
/usr/local/jail/rc.d/quad3:&lt;br /&gt;
        /mnt/data1 693.4 (4)&lt;br /&gt;
        /mnt/data2 371.6 (3)&lt;br /&gt;
        total 1065 (7)&lt;br /&gt;
/usr/local/jail/rc.d/quad4:&lt;br /&gt;
        /mnt/data1 466.6 (3)&lt;br /&gt;
        /mnt/data2 882.2 (5)&lt;br /&gt;
        total 1348.8 (8)&lt;br /&gt;
/mnt/data1: 2276.6 (14)&lt;br /&gt;
/mnt/data2: 2294.3 (15)&lt;br /&gt;
&lt;br /&gt;
Available IPs:&lt;br /&gt;
69.55.230.11 69.55.230.13 69.55.228.200&lt;br /&gt;
&lt;br /&gt;
Available volumes:&lt;br /&gt;
v78 /mnt/data2 2G&lt;br /&gt;
v79 /mnt/data2 2G&lt;br /&gt;
v80 /mnt/data2 2G&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg  - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== makevirginjail ==&lt;br /&gt;
 makevirginjail&lt;br /&gt;
Only on some systems, makes an empty jail (doesn&#039;t do restore step)&lt;br /&gt;
&lt;br /&gt;
== mb == &lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== notify.sh ==&lt;br /&gt;
 notify.sh&lt;br /&gt;
emails reboot@johncompanies.com – intended to be called at boot time to alert us to a machine which panics and reboots and isn’t caught by bb or castle.&lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatch ==&lt;br /&gt;
 orphanedbackupwatch&lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to delete them&lt;br /&gt;
&lt;br /&gt;
== postboot ==&lt;br /&gt;
 postboot&lt;br /&gt;
to be run after a machine reboot and quad/safe’s are done executing. It will:&lt;br /&gt;
* do chmod 666 on each jail’s /dev/null&lt;br /&gt;
* add ipfw counts&lt;br /&gt;
* run jailpsall (so you can see if a configured jail isn’t running)&lt;br /&gt;
&lt;br /&gt;
== preboot ==&lt;br /&gt;
 preboot&lt;br /&gt;
to be run before running quad/safe – checks for misconfigurations: &lt;br /&gt;
* a jail configured in a quad but not a safe&lt;br /&gt;
* a jail is listed more than once in a quad&lt;br /&gt;
* the ip assigned to a jail isn’t configured on the machine&lt;br /&gt;
* alias numbering skips in the rc.conf (resulting in the above)&lt;br /&gt;
* orphaned vnfile&#039;s that aren&#039;t mentioned in a quad/safe&lt;br /&gt;
* ip mismatches between dir/vnfile name and the jail’s ip&lt;br /&gt;
* dir/vnfiles&#039;s in quad/safe that don’t exist &lt;br /&gt;
&lt;br /&gt;
== quadanalyze.pl ==&lt;br /&gt;
 quadanalyze.pl&lt;br /&gt;
called by js, produces the info (seen above with js explanation) about the contents of quad (inode count, # of jails, etc.)&lt;br /&gt;
&lt;br /&gt;
== rsync.backup ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== taskdone ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was executed as the subject&lt;br /&gt;
&lt;br /&gt;
== topten ==&lt;br /&gt;
 topten&lt;br /&gt;
summarizes the top 10 traffic users (called by ipfwreset)&lt;br /&gt;
&lt;br /&gt;
== trafficgather.pl ==&lt;br /&gt;
 trafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by jail to support@johncompanies.com and payments@johncompanies.com. The optional argument is the year and month (must be in the past). If not passed, it assumes last month. Relies on traffic logs created by ipfwreset and ipfwbackup&lt;br /&gt;
&lt;br /&gt;
== trafficwatch.pl ==&lt;br /&gt;
 trafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a jail reaches the warning level (35G) and the limit (40G). We really aren&#039;t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== trafstats ==&lt;br /&gt;
 trafstats&lt;br /&gt;
writes ipfw traffic usage info by jail to a file called jc_traffic_dump in each jail’s / dir&lt;br /&gt;
&lt;br /&gt;
== truncate_jailmake ==&lt;br /&gt;
 truncate_jailmake&lt;br /&gt;
a version of jailmake which creates truncated vnfiles.&lt;br /&gt;
&lt;br /&gt;
== vb ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/bin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs (freebsd) ==&lt;br /&gt;
 vs&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/safe&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vq (freebsd) ==&lt;br /&gt;
 vq&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/quad&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dumpremote ==&lt;br /&gt;
 dumpremote &amp;lt;user@machine&amp;gt; &amp;lt;/remote/location/file-dump&amp;gt; &amp;lt;vnX&amp;gt;&lt;br /&gt;
ex: dumpremote user@10.1.4.117 /mnt/data3/remote.echoditto.com-dump 7&lt;br /&gt;
this will dump a vn filesystem to a remote machine and location&lt;br /&gt;
&lt;br /&gt;
== oversellcheck ==&lt;br /&gt;
 oversellcheck&lt;br /&gt;
displays how much a disk is oversold or undersold taking into account truncated vn files. Only for use on 4.x systems&lt;br /&gt;
&lt;br /&gt;
== mvbackups (freebsd) ==&lt;br /&gt;
 mvbackups &amp;lt;dir&amp;gt; (1.1.1.1-col00001-DIR) &amp;lt;target_machine&amp;gt; (jail1) &amp;lt;target_dir&amp;gt; (data1)&lt;br /&gt;
moves backups from one location to another on the backup server, offers to remove the entries from the current backup.config, and prints a simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== jailnice ==&lt;br /&gt;
 jailnice &amp;lt;hostname&amp;gt;&lt;br /&gt;
applies &amp;lt;tt&amp;gt;renice 19 [PID]&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;rtprio 31 -[PID]&amp;lt;/tt&amp;gt; to each process in the given jail&lt;br /&gt;
&lt;br /&gt;
== dumpremoterestore ==&lt;br /&gt;
 dumpremoterestore &amp;lt;device&amp;gt; &amp;lt;ip of target machine&amp;gt; &amp;lt;dir on target machine&amp;gt;&lt;br /&gt;
ex: dumpremoterestore /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&lt;br /&gt;
dumps a device and restores it to a directory on a remote machine. Requires that you enable root ssh on the &lt;br /&gt;
remote machine.&lt;br /&gt;
&lt;br /&gt;
== psj ==&lt;br /&gt;
 psj&lt;br /&gt;
shows just the procs running on the base system – a ps auxw but without jail’d procs present&lt;br /&gt;
&lt;br /&gt;
== perc5iraidchk ==&lt;br /&gt;
 perc5iraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2950 systems with Perc5/6 controllers&lt;br /&gt;
&lt;br /&gt;
== perc4eraidchk ==&lt;br /&gt;
 perc4eraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2850 systems with Perc4e/Di controllers&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS =&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vm) ==&lt;br /&gt;
&lt;br /&gt;
This applies only to new virts &amp;gt;= 4.x&lt;br /&gt;
&lt;br /&gt;
Grab an IP from the IP map (if opened from the pending customer screen it should take you to the right block). You can also run vzlist -a to see which blocks are in use, generally. Try to find an IP in the same class C block as IPs already on the box.&lt;br /&gt;
&lt;br /&gt;
1. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
  also, ping the IP address.&lt;br /&gt;
&lt;br /&gt;
2. put CT on whichever partition has more space&lt;br /&gt;
  df -h&lt;br /&gt;
&lt;br /&gt;
3.  vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; ) &lt;br /&gt;
 vm col00009 69.55.230.238 centos.testdave.com /vz1 dsmith@johncompanies.com centos-6-x86_64 LM&lt;br /&gt;
&lt;br /&gt;
4. copy veid, dir, ip and password to pending customer screen. activate customer&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vemakexxx) ==&lt;br /&gt;
&lt;br /&gt;
This applies to older virts with old templates. This should probably not be used at all anymore.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. look thru hist for ip&lt;br /&gt;
&lt;br /&gt;
2. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
&lt;br /&gt;
3. put ve on whichever partition has more space&lt;br /&gt;
 vemakerh9 &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; &amp;lt;hostname&amp;gt; &amp;lt;mount&amp;gt; &amp;lt;email&amp;gt; [gb disk]; &amp;lt;256|384|512&amp;gt; &amp;lt;veid&amp;gt;&lt;br /&gt;
 vemakerh9 866 69.55.226.109 ngentu.com /vz1 ayo@ngantu.com,asd@asd.com 5; 256 866&lt;br /&gt;
&lt;br /&gt;
4. copy (veid), dir, and ip to pending customer screen (pass set to p455agfa)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: We use VEID (Virtual Environment ID) and CTID (Container ID) interchangeably. Similarly, VE and CT. They mean the same thing.&lt;br /&gt;
VZPP = VirtuoZzo Power Panel (the control panel for each CT)&lt;br /&gt;
&lt;br /&gt;
All linux systems exist in /vz, /vz1 or /vz2 - since each linux machine holds roughly 60-90 customers, there will be roughly 30-45 in each partition.&lt;br /&gt;
&lt;br /&gt;
The actual filesystem of the system in question is in:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/private/(VEID)&lt;br /&gt;
&lt;br /&gt;
Where VEID is the identifier for that system - an all-numeric string larger than 100.&lt;br /&gt;
&lt;br /&gt;
The actual mounted and running systems are in the corresponding:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/root/(VEID)&lt;br /&gt;
&lt;br /&gt;
But we rarely interact with any system from this mount point.&lt;br /&gt;
&lt;br /&gt;
You should never need to touch the root portion of their system. However, you can traverse their filesystem by going to &amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/root&amp;lt;/tt&amp;gt; (&amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/fs/root&amp;lt;/tt&amp;gt; on 4.x systems): the root of their filesystem is in that directory, and their entire system is underneath it.&lt;br /&gt;
&lt;br /&gt;
Every VE has a startup script in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts&amp;lt;/tt&amp;gt;  (which is symlinked as &amp;lt;tt&amp;gt;/vzconf&amp;lt;/tt&amp;gt; on all systems) - the VE startup script is simply named &amp;lt;tt&amp;gt;(VEID).conf&amp;lt;/tt&amp;gt; - it contains all the system parameters for that VE:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Configuration file generated by vzsplit for 60 VE&lt;br /&gt;
# on HN with total amount of physical mem 2011 Mb&lt;br /&gt;
&lt;br /&gt;
VERSION=&amp;quot;2&amp;quot;&lt;br /&gt;
CLASSID=&amp;quot;2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ONBOOT=&amp;quot;yes&amp;quot;&lt;br /&gt;
&lt;br /&gt;
KMEMSIZE=&amp;quot;8100000:8200000&amp;quot;&lt;br /&gt;
LOCKEDPAGES=&amp;quot;322:322&amp;quot;&lt;br /&gt;
PRIVVMPAGES=&amp;quot;610000:615000&amp;quot;&lt;br /&gt;
SHMPAGES=&amp;quot;33000:34500&amp;quot;&lt;br /&gt;
NUMPROC=&amp;quot;410:415&amp;quot;&lt;br /&gt;
PHYSPAGES=&amp;quot;0:2147483647&amp;quot;&lt;br /&gt;
VMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
OOMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
NUMTCPSOCK=&amp;quot;1210:1215&amp;quot;&lt;br /&gt;
NUMFLOCK=&amp;quot;107:117&amp;quot;&lt;br /&gt;
NUMPTY=&amp;quot;19:19&amp;quot;&lt;br /&gt;
NUMSIGINFO=&amp;quot;274:274&amp;quot;&lt;br /&gt;
TCPSNDBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
TCPRCVBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
OTHERSOCKBUF=&amp;quot;900000:950000&amp;quot;&lt;br /&gt;
DGRAMRCVBUF=&amp;quot;200000:200000&amp;quot;&lt;br /&gt;
NUMOTHERSOCK=&amp;quot;650:660&amp;quot;&lt;br /&gt;
DCACHE=&amp;quot;786432:818029&amp;quot;&lt;br /&gt;
NUMFILE=&amp;quot;7500:7600&amp;quot;&lt;br /&gt;
AVNUMPROC=&amp;quot;51:51&amp;quot;&lt;br /&gt;
IPTENTRIES=&amp;quot;155:155&amp;quot;&lt;br /&gt;
DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
DISKINODES=&amp;quot;400000:420000&amp;quot;&lt;br /&gt;
CPUUNITS=&amp;quot;1412&amp;quot;&lt;br /&gt;
QUOTAUGIDLIMIT=&amp;quot;2000&amp;quot;&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/636&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/636&amp;quot;&lt;br /&gt;
NAMESERVER=&amp;quot;69.55.225.225 69.55.230.3&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;vzredhat-7.3/20030305&amp;quot;&lt;br /&gt;
VE_TYPE=&amp;quot;regular&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.225.229&amp;quot;&lt;br /&gt;
HOSTNAME=&amp;quot;textengine.net&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the hostname is set here, the disk space is set here, the number of inodes, the number of files that can be open, the number of tcp sockets, etc. - all are set here.&lt;br /&gt;
&lt;br /&gt;
In fact, everything that can be set on this customer system is set in this conf file.&lt;br /&gt;
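Because the conf file is plain shell-style KEY=&amp;quot;value&amp;quot; pairs, you can source it to inspect a setting programmatically. A minimal sketch, using a temp file as a stand-in for a real /vzconf/(VEID).conf:&lt;br /&gt;

```shell
conf=$(mktemp)   # stand-in for /vzconf/999.conf (999 is a hypothetical VEID)

# A trimmed-down sample of the real conf format shown above
cat > "$conf" <<'EOF'
DISKSPACE="4194304:4613734"
DISKINODES="400000:420000"
HOSTNAME="textengine.net"
EOF

# Source it in a subshell and pull out the soft (first) diskspace figure, in KB
soft_kb=$( . "$conf" && echo "${DISKSPACE%%:*}" )
echo "soft diskspace: ${soft_kb} KB (~$(( soft_kb / 1024 / 1024 )) GB)"
```

The same trick works for any of the tunables, since they are all in the one file.&lt;br /&gt;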
&lt;br /&gt;
&lt;br /&gt;
All interaction with the customer system is done with the VEID.  You start the system by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl start 999&lt;br /&gt;
&lt;br /&gt;
You stop it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl stop 999&lt;br /&gt;
&lt;br /&gt;
You execute commands in it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl exec 999 df -k&lt;br /&gt;
&lt;br /&gt;
You enter it, via a root-shell backdoor, with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and you set parameters for the system, while it is still running, with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; is the most commonly used command - we have aliased &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; since we use it so often. We’ll continue to use &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; in our examples, but feel free to use just &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say the user wants more diskspace.  You can cat their conf file and see:&lt;br /&gt;
&lt;br /&gt;
 DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So right now they have 4gigs of space.  You can then change it to 6 with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
IMPORTANT:  all invocations of the vzctl set command must end with &amp;lt;tt&amp;gt;--save&amp;lt;/tt&amp;gt; - if they don&#039;t, the setting will take effect, but it will not be written to the conf file, and the VE will not have it the next time it boots.&lt;br /&gt;
&lt;br /&gt;
All of the tunables in the conf file can be set with the vzctl set command.  Note that in the conf file, and on the vzctl set command line, we always issue two numbers separated by a colon - that is because we are setting the soft and hard limits.  Always set the hard limit slightly above the soft limit, as it is in the conf file for all those settings.&lt;br /&gt;
&lt;br /&gt;
There are also things you can set with `&amp;lt;tt&amp;gt;vzctl set&amp;lt;/tt&amp;gt;` that are not in the conf file as settings, per se.  For instance, you can add IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --save&lt;br /&gt;
&lt;br /&gt;
or multiple IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --ipadd 10.10.20.30 --save&lt;br /&gt;
&lt;br /&gt;
or change the hostname:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --hostname www.example.com --save&lt;br /&gt;
&lt;br /&gt;
You can even set the nameservers:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --nameserver 198.78.66.4 --nameserver 198.78.70.180 --save&lt;br /&gt;
&lt;br /&gt;
Although you probably will never do that.&lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --disabled yes --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (&amp;lt;=3.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --onboot=no --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from using its control panel:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --offline_management=no --save &lt;br /&gt;
&lt;br /&gt;
You can suspend a VPS, so it can be resumed in the same state it was in when it was stopped (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl suspend 999&lt;br /&gt;
&lt;br /&gt;
and to resume it:&lt;br /&gt;
&lt;br /&gt;
 vzctl resume 999&lt;br /&gt;
&lt;br /&gt;
To see which VE owns a process:&lt;br /&gt;
 vzpid &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To mount an unmounted VE:&lt;br /&gt;
 vzctl mount 827&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see network stats for CTs:&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
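The vznetstat table is plain columnar text, so it is easy to post-process. A sketch, summing the Output(bytes) column (the sample rows are copied from the table above, and the &amp;quot;M&amp;quot; suffix handling assumes the format shown there):&lt;br /&gt;

```shell
# Three sample rows in vznetstat's "VEID Net.Class Output Input" layout
sample='24218     1            484M             39M
24245     1            463M            143M
418       1           1560M             34M'

# Strip the M suffix from the Output column and total it up
tot_out=$(echo "$sample" | awk '{ gsub(/M/,"",$3); t += $3 } END { print t }')
echo "${tot_out}M total output across these CTs"
```

The same one-liner, pointed at the real vznetstat output (minus header and footer lines), gives a quick per-machine total or, with a sort on column 3, the top talkers.&lt;br /&gt;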
&lt;br /&gt;
&lt;br /&gt;
One thing that sometimes comes up on older systems that we created with smaller defaults is that the system would run out of inodes.  The user will email and say they cannot create any more files or grow any files larger, but they will also say that they are not out of diskspace ... they are running:&lt;br /&gt;
&lt;br /&gt;
 df -k&lt;br /&gt;
&lt;br /&gt;
and seeing how much space is free - and they are not out of space.  They are most likely out of inodes - which they would see by running:&lt;br /&gt;
&lt;br /&gt;
 df -i&lt;br /&gt;
&lt;br /&gt;
So, the first thing you should do is enter their system with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and run:  &amp;lt;tt&amp;gt;df -i&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to confirm your theory.  Then exit their system.  Then simply cat their conf file and see what their inodes are set to (probably 200000:200000, since that was the old default on the older systems) and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskinodes 400000:400000 --save&lt;br /&gt;
&lt;br /&gt;
If they are not out of inodes, then a good possibility is that they have maxed out their numfile configuration variable, which controls how many open files they can have in their system.  The current default is 7500 (which nobody has ever hit), but the old default was as low as 2000, so you would run something like:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --numfile 7500:7500 --save&lt;br /&gt;
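The decision between the two fixes boils down to reading the Use% column of &amp;lt;tt&amp;gt;df -i&amp;lt;/tt&amp;gt;. A sketch (the df line is a fabricated example of what a customer at the old 200000-inode default might see):&lt;br /&gt;

```shell
# Fabricated `df -i` data line: Filesystem, Inodes, IUsed, IFree, IUse%, Mounted
df_line='/dev/vzfs  200000  200000  0  100% /'

# Pull the IUse% column and strip the percent sign
iuse=$(echo "$df_line" | awk '{ gsub(/%/,"",$5); print $5 }')
if [ "$iuse" -ge 100 ]; then
  echo "out of inodes: raise --diskinodes"
else
  echo "inodes OK: check numfile instead"
fi
```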
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You cannot start or stop a VE if your pwd is its private (/vz/private/999) or root (/vz/root/999) directory, or anywhere below them.&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (linux) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all ve’s back up as soon as possible. Note the time; you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages output. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log – these will also likely need to be sent to virtuozzo for interpretation. If the messages are spewing too fast, hit ^O + H to start a screen log dump which you can observe after the machine is rebooted. Additionally, if the machine is responsive, you can get a trace to send to virtuozzo by hooking up a kvm and entering these 3 sequences:&lt;br /&gt;
&amp;lt;pre&amp;gt;alt+print screen+m&lt;br /&gt;
alt+print screen+p&lt;br /&gt;
alt+print screen+t&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If there are no messages, the machine may just be really busy - wait a bit (5-10 min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. if you can’t ssh into the DRAC card and issue racadm serveraction hardreset), then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console (&amp;lt;tt&amp;gt;tip virtxx&amp;lt;/tt&amp;gt;) immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts, look out for the RAID card output - does everything look healthy? The output may be scrambled; look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple of times during the boot up. The reason you want to quickly re-attach is two-fold: 1. if you don’t reattach quickly, you won’t get any console output; 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts an (extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started, and the server will appear frozen during startup - no output, no response. &lt;br /&gt;
&lt;br /&gt;
=== Start containers/VE&#039;s/VPSs ===&lt;br /&gt;
When the machine begins to start VE’s, it’s safe to leave the console and login via ssh. All virts should be set to auto-start all VEs after a crash. Further, most (newer) virts are set to “fastboot” their VEs (to find out, run:&lt;br /&gt;
 grep -i fast /etc/sysconfig/vz &lt;br /&gt;
and look for &amp;lt;tt&amp;gt;VZFASTBOOT=yes&amp;lt;/tt&amp;gt;). If this was set prior to the machine’s crash (setting it after the machine boots will have no effect until the vz service is restarted), it will start each VE as fast as possible, in serial, then go through each VE (serially), shutting it down, running a vzquota (disk usage) check, then bringing it back up. The benefit is that all VE’s are brought up quickly (within 15 min or so, depending on the number); the downside is that a customer watching closely will notice 2 outages – 1st the machine crash, 2nd their quota check (which will be a much shorter downtime - on the order of a few minutes). &lt;br /&gt;
&lt;br /&gt;
Where “fastboot” is not set to yes (i.e. on quar1), vz will start the VE’s consecutively, checking the quotas one at a time, and the 60th VE may not start until an hour or two later - this is not acceptable.&lt;br /&gt;
&lt;br /&gt;
The good news is, if you run vzctl start for a VE that is already started, you will simply get an error: &amp;lt;tt&amp;gt;VE is already started&amp;lt;/tt&amp;gt;.  Further, if you attempt to vzctl start a VE that is in the process of being started, you will simply get an error: &amp;lt;tt&amp;gt;unable to lock VE&amp;lt;/tt&amp;gt;.  So, there is no danger in simply running scripts to start smaller sets of VEs.  If the system is not autostarting, there is no issue; even if it is, when there is a conflict one process (yours or the autostart) will lose and just move on to the next VE.&lt;br /&gt;
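A sketch of such a start loop: feed &amp;lt;tt&amp;gt;vs&amp;lt;/tt&amp;gt; output into awk to target only the VEs still down. The sample output and VEIDs are illustrative, and the start command is echoed rather than run here (drop the echo on a real virt; as noted above, a re-start attempt on an already-running VE is harmless):&lt;br /&gt;

```shell
# Sample `vs`-style output: "VEID <id> exist <mount-state> <run-state>"
vs_out='VEID 42 exist mounted running
VEID 432 exist unmounted down
VEID 434 exist mounted running
VEID 442 exist unmounted down'

# Collect the VEIDs whose last field is "down"
down=$(echo "$vs_out" | awk '$NF == "down" { print $2 }')
for ve in $down; do
  echo "vzctl start $ve"   # echo only in this sketch
done
```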
&lt;br /&gt;
A script has been written to assist with VE starts: [[#startvirt.pl|startvirt.pl]], which will start 6 VE’s at once until there are none left. If startvirt.pl is used on a system where “fastboot” was on, it will circumvent the fastboot for the VE’s it starts – they will go through the complete quota check before starting – therefore it is not advisable when a system has crashed. When a system is booted cleanly and there&#039;s no need for vzquota checks, startvirt.pl is safe and advisable to run.&lt;br /&gt;
&lt;br /&gt;
=== Make sure all containers are running ===&lt;br /&gt;
You can quickly get a feel for how many ve’s are started by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt4 log]# vs&lt;br /&gt;
VEID 16066 exist mounted running&lt;br /&gt;
VEID 16067 exist mounted running&lt;br /&gt;
VEID 4102 exist mounted running&lt;br /&gt;
VEID 4112 exist mounted running&lt;br /&gt;
VEID 4116 exist mounted running&lt;br /&gt;
VEID 4122 exist mounted running&lt;br /&gt;
VEID 4123 exist mounted running&lt;br /&gt;
VEID 4124 exist mounted running&lt;br /&gt;
VEID 4132 exist mounted running&lt;br /&gt;
VEID 4148 exist mounted running&lt;br /&gt;
VEID 4151 exist mounted running&lt;br /&gt;
VEID 4155 exist mounted running&lt;br /&gt;
VEID 42 exist mounted running&lt;br /&gt;
VEID 432 exist mounted running&lt;br /&gt;
VEID 434 exist mounted running&lt;br /&gt;
VEID 442 exist mounted running&lt;br /&gt;
VEID 450 exist mounted running&lt;br /&gt;
VEID 452 exist mounted running&lt;br /&gt;
VEID 453 exist mounted running&lt;br /&gt;
VEID 454 exist mounted running&lt;br /&gt;
VEID 462 exist mounted running&lt;br /&gt;
VEID 463 exist mounted running&lt;br /&gt;
VEID 464 exist mounted running&lt;br /&gt;
VEID 465 exist mounted running&lt;br /&gt;
VEID 477 exist mounted running&lt;br /&gt;
VEID 484 exist mounted running&lt;br /&gt;
VEID 486 exist mounted running&lt;br /&gt;
VEID 490 exist mounted running&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So to see how many ve’s have started:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep running | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And to see how many haven’t:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep down | wc -l&lt;br /&gt;
     0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And how many we should have running:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
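The three grep|wc invocations above can be collapsed into a single awk pass over the &amp;lt;tt&amp;gt;vs&amp;lt;/tt&amp;gt; output. A sketch (sample lines are illustrative, in the format shown above):&lt;br /&gt;

```shell
# Sample `vs` output
vs_out='VEID 42 exist mounted running
VEID 432 exist unmounted down
VEID 434 exist mounted running'

# One pass: count running, down, and total lines
counts=$(echo "$vs_out" | awk '/running/{r++} /down/{d++} END { printf "running=%d down=%d total=%d", r, d, NR }')
echo "$counts"
```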
&lt;br /&gt;
Another tool you can use to see which VE’s have started, among other things, is [[#vzstat|vzstat]]. It will give you CPU, memory, and other stats on each VE and the overall system. It’s a good thing to watch as VE’s are starting (note the VENum parameter, which will tell you how many have started):&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;4:37pm, up 3 days,  5:31,  1 user, load average: 1.57, 1.68, 1.79&lt;br /&gt;
VENum 40, procs 1705: running 2, sleeping 1694, unint 0, zombie 9, stopped 0&lt;br /&gt;
CPU [ OK ]: VEs  57%, VE0   0%, user   8%, sys   7%, idle  85%, lat(ms) 412/2&lt;br /&gt;
Mem [ OK ]: total 6057MB, free 9MB/54MB (low/high), lat(ms) 0/0&lt;br /&gt;
Swap [ OK ]: tot 6142MB, free 4953MB, in 0.000MB/s, out 0.000MB/s&lt;br /&gt;
Net [ OK ]: tot: in  0.043MB/s  402pkt/s, out  0.382MB/s 4116pkt/s&lt;br /&gt;
Disks [ OK ]: in 0.002MB/s, out 0.000MB/s&lt;br /&gt;
&lt;br /&gt;
  VEID ST    %VM     %KM         PROC    CPU     SOCK FCNT MLAT IP&lt;br /&gt;
     1 OK 1.0/17  0.0/0.4    0/32/256 0.0/0.5 39/1256    0    9 69.55.227.152&lt;br /&gt;
    21 OK 1.3/39  0.1/0.2    0/46/410 0.2/2.8 23/1860    0    6 69.55.239.60&lt;br /&gt;
   133 OK 3.1/39  0.1/0.3    1/34/410 6.3/2.8 98/1860    0    0 69.55.227.147&lt;br /&gt;
   263 OK 2.3/39  0.1/0.2    0/56/410 0.3/2.8 34/1860    0    1 69.55.237.74&lt;br /&gt;
   456 OK  17/39  0.1/0.2   0/100/410 0.1/2.8 48/1860    0   11 69.55.236.65&lt;br /&gt;
   476 OK 0.6/39  0.0/0.2    0/33/410 0.1/2.8 96/1860    0   10 69.55.227.151&lt;br /&gt;
   524 OK 1.8/39  0.1/0.2    0/33/410 0.0/2.8 28/1860    0    0 69.55.227.153&lt;br /&gt;
   594 OK 3.1/39  0.1/0.2    0/45/410 0.0/2.8 87/1860    0    1 69.55.239.40&lt;br /&gt;
   670 OK 7.7/39  0.2/0.3    0/98/410 0.0/2.8 64/1860    0  216 69.55.225.136&lt;br /&gt;
   691 OK 2.0/39  0.1/0.2    0/31/410 0.0/0.7 25/1860    0    1 69.55.234.96&lt;br /&gt;
   744 OK 0.1/17  0.0/0.5    0/10/410 0.0/0.7  7/1860    0    6 69.55.224.253&lt;br /&gt;
   755 OK 1.1/39  0.0/0.2    0/27/410 0.0/2.8 33/1860    0    0 192.168.1.4&lt;br /&gt;
   835 OK 1.1/39  0.0/0.2    0/19/410 0.0/2.8  5/1860    0    0 69.55.227.134&lt;br /&gt;
   856 OK 0.3/39  0.0/0.2    0/13/410 0.0/2.8 16/1860    0    0 69.55.227.137&lt;br /&gt;
   936 OK 3.2/52  0.2/0.4    0/75/410 0.2/0.7 69/1910    0    8 69.55.224.181&lt;br /&gt;
  1020 OK 3.9/39  0.1/0.2    0/60/410 0.1/0.7 55/1860    0    8 69.55.227.52&lt;br /&gt;
  1027 OK 0.3/39  0.0/0.2    0/14/410 0.0/2.8 17/1860    0    0 69.55.227.83&lt;br /&gt;
  1029 OK 1.9/39  0.1/0.2    0/48/410 0.2/2.8 25/1860    0    5 69.55.227.85&lt;br /&gt;
  1032 OK  12/39  0.1/0.4    0/80/410 0.0/2.8 41/1860    0    8 69.55.227.90&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you are all done, you will want to make sure that all the VEs really did get started - run vs one more time.&lt;br /&gt;
&lt;br /&gt;
Note the time all ve’s are back up, enter it into the crash log entry, and save it.&lt;br /&gt;
&lt;br /&gt;
Occasionally, a VE will not start automatically. The most common reason for a VE not to come up normally is that it was at its disk limit before the crash, and will not start since it’s over the limit. To overcome this, set the disk space to the current usage level (the system will give this to you when it fails to start), start the VE, then re-set the disk space back to the prior level. Lastly, contact the customer to let them know they’re out of disk (or allocate more disk if they&#039;re entitled to more).&lt;br /&gt;
&lt;br /&gt;
== Hitting performance barriers and fixing them ==&lt;br /&gt;
&lt;br /&gt;
There are multiple modes virtuozzo offers for allocating resources to a VE. We utilize 2: SLM and UBC parameters.&lt;br /&gt;
On our 4.x systems, we use all SLM – it’s simpler to manage and understand. There are a few systems on virt19/virt18 that may also use SLM. Everything else uses UBC. &lt;br /&gt;
You can tell a SLM ve by:&lt;br /&gt;
&lt;br /&gt;
 SLMMODE=&amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in their conf file. &lt;br /&gt;
&lt;br /&gt;
TODO: detail SLM modes and parameters.&lt;br /&gt;
&lt;br /&gt;
If someone is in SLM mode and they hit memory resource limits, they simply need to upgrade to more memory.&lt;br /&gt;
&lt;br /&gt;
The following applies to everyone else (UBC).&lt;br /&gt;
&lt;br /&gt;
Customers will often email and say that they are getting out of memory errors - a common one is &amp;quot;cannot fork&amp;quot; ... basically, anytime you see something odd like this, it means they are hitting one of the limits set in their conf file.&lt;br /&gt;
&lt;br /&gt;
The conf file, however, simply shows their limits - how do we know what they are currently at?&lt;br /&gt;
&lt;br /&gt;
The answer is &amp;lt;tt&amp;gt;/proc/user_beancounters&amp;lt;/tt&amp;gt; - this file contains the current status (and peaks) of their performance settings, and also counts how many times they have hit the barrier.  The output looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;764: kmemsize         384113     898185    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          0        322        322          0&lt;br /&gt;
     privvmpages        1292       7108     610000     615000          0&lt;br /&gt;
     shmpages            270        528      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc               8         23        410        415          0&lt;br /&gt;
     physpages            48       5624          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages        641       6389      13019 2147483647          0&lt;br /&gt;
     numtcpsock            3         21       1210       1215          0&lt;br /&gt;
     numflock              1          3        107        117          0&lt;br /&gt;
     numpty                0          2         19         19          0&lt;br /&gt;
     numsiginfo            0          4        274        274          0&lt;br /&gt;
     tcpsndbuf             0      80928    1800000    1900000          0 &lt;br /&gt;
     tcprcvbuf             0     108976    1800000    1900000          0&lt;br /&gt;
     othersockbuf       2224      37568     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4272     200000     200000          0&lt;br /&gt;
     numothersock          3          9        650        660          0&lt;br /&gt;
     dcachesize        53922     100320     786432     818029          0&lt;br /&gt;
     numfile             161        382       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first column is the name of the counter in question - the same names we saw in the system&#039;s conf file.  The second column is the _current_ value of that counter, the third column is the max that counter has ever risen to, the fourth column is the soft limit, and the fifth column is the hard limit (the same numbers as in that system&#039;s conf file).&lt;br /&gt;
&lt;br /&gt;
The sixth number is the failcount - how many times the current usage has risen to hit the barrier.  It will increase as soon as the current usage hits the soft limit.&lt;br /&gt;
&lt;br /&gt;
The problem with /proc/user_beancounters is that it actually contains that set of data for every running VE - so you can&#039;t just cat /proc/user_beancounters - it is too long and you get info for every other running system.&lt;br /&gt;
&lt;br /&gt;
You can vzctl enter the system and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 9999&lt;br /&gt;
 cat /proc/user_beancounters&lt;br /&gt;
&lt;br /&gt;
inside their system, and you will just see the stats for their particular system, but entering their system every time you want to see it is cumbersome.&lt;br /&gt;
&lt;br /&gt;
So, I wrote a simple script called &amp;quot;vzs&amp;quot; which simply greps for the VEID, and spits out the next 20 or so lines (however many lines there are in the output, I forget) after it.  For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzs 765:&lt;br /&gt;
765: kmemsize        2007936    2562780    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          8        322        322          0&lt;br /&gt;
     privvmpages       26925      71126     610000     615000          0&lt;br /&gt;
     shmpages          16654      16750      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc              41         57        410        415          0&lt;br /&gt;
     physpages          1794      49160          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages       4780      51270      13019 2147483647          0&lt;br /&gt;
     numtcpsock           23         37       1210       1215          0&lt;br /&gt;
     numflock             17         39        107        117          0&lt;br /&gt;
     numpty                1          3         19         19          0&lt;br /&gt;
     numsiginfo            0          6        274        274          0&lt;br /&gt;
     tcpsndbuf         22240     333600    1800000    1900000          0&lt;br /&gt;
     tcprcvbuf             0     222656    1800000    1900000          0&lt;br /&gt;
     othersockbuf     104528     414944     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4448     200000     200000          0&lt;br /&gt;
     numothersock         73        105        650        660          0&lt;br /&gt;
     dcachesize       247038     309111     786432     818029          0&lt;br /&gt;
     numfile             904       1231       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That showed us just the portion of /proc/user_beancounters for system 765.&lt;br /&gt;
&lt;br /&gt;
When you run the vzs command, always add a : after the VEID.&lt;br /&gt;
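A hedged sketch of what vzs likely amounts to: grep for the VEID with its trailing colon and print the lines that follow. The sample file and the exact line count are assumptions (the real script may differ):&lt;br /&gt;

```shell
ubc=$(mktemp)    # stand-in for /proc/user_beancounters

# Trimmed-down sample: two VEs, two counters each, in beancounter layout
cat > "$ubc" <<'EOF'
764: kmemsize   384113   898185  8100000  8200000  0
     numfile       161      382     7500     7600  0
765: kmemsize  2007936  2562780  8100000  8200000  0
     numfile       904     1231     7500     7600  0
EOF

# Show the block for VE 765 only; real vzs would use -A 23 or so
vzs_lines=$(grep -A 1 '^765:' "$ubc")
echo "$vzs_lines"
```

The anchored &amp;lt;tt&amp;gt;^765:&amp;lt;/tt&amp;gt; pattern is why the trailing colon matters - without it, 765 would also match 7650, 7651, and so on.&lt;br /&gt;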
&lt;br /&gt;
So, if a customer complains about out-of-memory errors, no more files, no more ptys, or just has an unspecific complaint about processes dying, etc., the very first thing you need to do is check their beancounters with vzs.  Usually you will spot an item that has a high failcount and needs to be upped.&lt;br /&gt;
&lt;br /&gt;
At that point you could simply up the counter with `vzctl set`.  Generally pick a number 10-20% higher than the old one, and make the hard limit slightly larger than the soft limit. However, our systems now come in several levels, and those levels have more/different memory allocations. If someone is complaining about something other than a memory limit (pty, numiptent, numflock), it’s generally safe to increase it, at least to the same level as what’s in the /vzconf/4unlimited file on the newest virt. If someone is hitting a memory limit, first make sure they are given what they’re entitled to:&lt;br /&gt;
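A sketch of the arithmetic: bump a failing counter by 15% (inside the suggested 10-20% range) and keep the hard limit slightly above the new soft limit. The +5 headroom figure and VEID 999 are illustrative, not a site standard:&lt;br /&gt;

```shell
old_soft=1210                         # e.g. the current numtcpsock soft limit
new_soft=$(( old_soft * 115 / 100 ))  # 15% bump, integer math
new_hard=$(( new_soft + 5 ))          # hard limit slightly above soft

# Echoed rather than run in this sketch
echo "vzctl set 999 --numtcpsock ${new_soft}:${new_hard} --save"
```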
&lt;br /&gt;
(refer to mgmt -&amp;gt; payments -&amp;gt; packages)&lt;br /&gt;
&lt;br /&gt;
To set those levels, you use the [[#setmem|setmem]] command. &lt;br /&gt;
&lt;br /&gt;
The alternate (DEPRECATED) method would be to use one of these commands:&lt;br /&gt;
 256 &amp;lt;veid&amp;gt;&lt;br /&gt;
 300 &amp;lt;veid&amp;gt;&lt;br /&gt;
 384 &amp;lt;veid&amp;gt;&lt;br /&gt;
 512 &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the levels were not right (you’d run vzs &amp;lt;veid&amp;gt; before and after to see the effect), tell the customer they’ve been adjusted and be done with it. If the levels were right, tell the customer they must upgrade to a higher package, tell them how to see their level (control panel), and that they can reboot their system to escape this lockup condition.&lt;br /&gt;
&lt;br /&gt;
Customers may also complain that their site is totally unreachable or down ... if the underlying machine is up, and all seems well, you may notice in the beancounters that network-specific counters are failing - such as numtcpsock, tcpsndbuf or tcprcvbuf.  This will keep them from talking on the network and make it seem like their system is down.  Again, just up the limits and things should be fine.&lt;br /&gt;
&lt;br /&gt;
On virts 1-4, you should first look at the default settings for that item on a later virt, such as virt8 - we have increased the defaults a lot since the early machines.  So, if you are going to up a counter on virt2, don&#039;t up it by 10-20%; instead, up it to the new default you see on virt8.&lt;br /&gt;
&lt;br /&gt;
== Moving a VE to another virt (migrate/migrateonline) ==&lt;br /&gt;
&lt;br /&gt;
This will take a while to complete - and it is best to do this at night when the load is light on both machines.&lt;br /&gt;
&lt;br /&gt;
There are different methods for this, depending on which version of virtuozzo is installed on the src. and dst. virt. &lt;br /&gt;
To check which version is running: &lt;br /&gt;
 [root@virt12 private]# cat /etc/virtuozzo-release&lt;br /&gt;
 Virtuozzo release 2.6.0&lt;br /&gt;
&lt;br /&gt;
Ok, let&#039;s say that the VE is 1212, and its vital stats are:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1212&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/1212&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/1212&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;fedora-core-2/20040903&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.229.84&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;devel-fc2/20040903 php-fc2/20040813 mysql-fc2/20040812 postgresql-fc2/20040813 mod_perl-fc2/20040812 mod_ssl-fc2/20040811 jre-fc2/20040823 jdk-fc2/20040823 mailman-fc2/20040823 analog-fc2/20040824 proftpd-fc2/20040818 tomcat-fc2/20040823 usermin-fc2/20040909 webmin-fc2/20040909 uw-imap-fc2/20040830 phpBB-fc2/20040831 spamassassin-fc2/20040910 PostNuke-fc2/20040824 sl-webalizer-fc2/20040818&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# vzctl exec 1212 df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  405M  3.7G  10% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From this you can see that he’s using (and will minimally need free on the dst server) ~400MB, and he’s running a Fedora Core 2 template, version 20040903. He’s also got a bunch of other templates installed. It is &#039;&#039;&#039;vital&#039;&#039;&#039; that &#039;&#039;&#039;all&#039;&#039;&#039; these templates exist on the dst system. To confirm that, on the dst system run:&lt;br /&gt;
&lt;br /&gt;
For &amp;lt; 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt14 private]# vzpkgls | grep fc2&lt;br /&gt;
devel-fc2 20040903&lt;br /&gt;
PostNuke-fc2 20040824&lt;br /&gt;
analog-fc2 20040824&lt;br /&gt;
awstats-fc2 20040824&lt;br /&gt;
bbClone-fc2 20040824&lt;br /&gt;
jdk-fc2 20040823&lt;br /&gt;
jre-fc2 20040823&lt;br /&gt;
mailman-fc2 20040823&lt;br /&gt;
mod_frontpage-fc2 20040816&lt;br /&gt;
mod_perl-fc2 20040812&lt;br /&gt;
mod_ssl-fc2 20040811&lt;br /&gt;
mysql-fc2 20040812&lt;br /&gt;
openwebmail-fc2 20040817&lt;br /&gt;
php-fc2 20040813&lt;br /&gt;
phpBB-fc2 20040831&lt;br /&gt;
postgresql-fc2 20040813&lt;br /&gt;
proftpd-fc2 20040818&lt;br /&gt;
sl-webalizer-fc2 20040818&lt;br /&gt;
spamassassin-fc2 20040910&lt;br /&gt;
tomcat-fc2 20040823&lt;br /&gt;
usermin-fc2 20040909&lt;br /&gt;
uw-imap-fc2 20040830&lt;br /&gt;
webmin-fc2 20040909&lt;br /&gt;
[root@virt14 private]# vzpkgls | grep fedora&lt;br /&gt;
fedora-core-1 20040121 20040818&lt;br /&gt;
fedora-core-devel-1 20040121 20040818&lt;br /&gt;
fedora-core-2 20040903&lt;br /&gt;
[root@virt14 private]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For these older systems, you can simply match up the date on the template. &lt;br /&gt;
&lt;br /&gt;
For &amp;gt;= 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt19 /vz2/private]# vzpkg list&lt;br /&gt;
centos-5-x86                    2008-01-07 22:05:57&lt;br /&gt;
centos-5-x86    devel&lt;br /&gt;
centos-5-x86    jre&lt;br /&gt;
centos-5-x86    jsdk&lt;br /&gt;
centos-5-x86    mod_perl&lt;br /&gt;
centos-5-x86    mod_ssl&lt;br /&gt;
centos-5-x86    mysql&lt;br /&gt;
centos-5-x86    php&lt;br /&gt;
centos-5-x86    plesk9&lt;br /&gt;
centos-5-x86    plesk9-antivirus&lt;br /&gt;
centos-5-x86    plesk9-api&lt;br /&gt;
centos-5-x86    plesk9-atmail&lt;br /&gt;
centos-5-x86    plesk9-backup&lt;br /&gt;
centos-5-x86    plesk9-horde&lt;br /&gt;
centos-5-x86    plesk9-mailman&lt;br /&gt;
centos-5-x86    plesk9-mod-bw&lt;br /&gt;
centos-5-x86    plesk9-postfix&lt;br /&gt;
centos-5-x86    plesk9-ppwse&lt;br /&gt;
centos-5-x86    plesk9-psa-firewall&lt;br /&gt;
centos-5-x86    plesk9-psa-vpn&lt;br /&gt;
centos-5-x86    plesk9-psa-fileserver&lt;br /&gt;
centos-5-x86    plesk9-qmail&lt;br /&gt;
centos-5-x86    plesk9-sb-publish&lt;br /&gt;
centos-5-x86    plesk9-vault&lt;br /&gt;
centos-5-x86    plesk9-vault-most-popular&lt;br /&gt;
centos-5-x86    plesk9-watchdog&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On these newer systems, it&#039;s difficult to tell whether the template on the dst matches the src exactly. Just because a centos-5-x86 template is listed on both servers doesn&#039;t mean all the same packages are present on the dst. To know for sure, you must perform a sample rsync:&lt;br /&gt;
&lt;br /&gt;
 rsync -avn /vz/template/centos/5/x86/ root@10.1.4.61:/vz/template/centos/5/x86/&lt;br /&gt;
&lt;br /&gt;
If you see a ton of output from the dry run command, then clearly there are some differences. You may opt to let the rsync complete (without running in dry run mode); the downsides are that you&#039;ve now used up more space on the dst, and the centos template will be a mix of old and new data that will be difficult if not impossible to undo (if someday we wanted to reclaim the space).&lt;br /&gt;
&lt;br /&gt;
If you choose to merge templates, you should closely inspect the dry run output. You should also take care to exclude anything in the /config directory. For example:&lt;br /&gt;
&lt;br /&gt;
 rsync -av -e ssh --stats --exclude=x86/config  /vz/template/ubuntu/10.04/ root@10.1.4.62:/vz/template/ubuntu/10.04/&lt;br /&gt;
&lt;br /&gt;
Which will avoid this directory and contents:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 /vz2/private]# ls /vz/template/ubuntu/10.04/x86/config*&lt;br /&gt;
app  os&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is important to avoid since the config may differ on the destination and we are really only interested in making sure the packages are there, not in overwriting a newer config with an older one.&lt;br /&gt;
&lt;br /&gt;
If the dst system was missing a template, you have 2 choices: &lt;br /&gt;
# put the missing template on the dst system. 2 choices here: &lt;br /&gt;
## Install the template from rpm (found under backup2: /mnt/data4/vzrpms/distro/) or &lt;br /&gt;
## rsync over the template (found under /vz/template) - see above&lt;br /&gt;
# put the ve on a system which has all the proper templates&lt;br /&gt;
&lt;br /&gt;
=== pre-seeding a migration ===&lt;br /&gt;
&lt;br /&gt;
When migrating a customer (or many at once), the transfer can take some time depending on how much data there is. Further, it can be difficult to gauge when a migration will complete or how long it will take. To help speed up the process and get a better idea of how long it will take, you can pre-transfer a customer&#039;s data to the destination server. If done correctly, vzmigrate will see the pre-transferred data and pick up where you left off, having much less to transfer (just changed/new files). &lt;br /&gt;
&lt;br /&gt;
We believe vzmigrate uses rsync to do its transfer. Therefore not only can you use rsync to do a pre-seed, you can also run rsync to see what is causing a repeatedly-failing vzmigrate to fail. &lt;br /&gt;
&lt;br /&gt;
There&#039;s no magic to a pre-seed; you just need to make sure the destination directory is named correctly.&lt;br /&gt;
&lt;br /&gt;
Given:&lt;br /&gt;
&lt;br /&gt;
source: /vz1/private/1234&lt;br /&gt;
&lt;br /&gt;
and you want to migrate to /vz2 on the target system, your rsync would look like:&lt;br /&gt;
&lt;br /&gt;
 rsync -av /vz1/private/1234/ root@x.x.x.x:/vz2/private/1234.migrated/&lt;br /&gt;
&lt;br /&gt;
After running that successful rsync, the ensuing migrateonline (or migrate) will take much less time to complete, depending on the number of files to be analyzed and the number of changed files. In any case, it&#039;ll be much, much faster than had you just started the migration from scratch.&lt;br /&gt;
&lt;br /&gt;
Further, as we discuss elsewhere in this topic, a failed migration can be moved from &amp;lt;tt&amp;gt;/vz/private/1234&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;/vz/private/1234.migrated&amp;lt;/tt&amp;gt; on the destination if you want to restart a failed migration. This should &#039;&#039;&#039;only&#039;&#039;&#039; be done if the migration failed and the CT is not running on the destination HN.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline instructions: src &amp;gt;=3.x -&amp;gt; dst&amp;gt;=3.x ===&lt;br /&gt;
&lt;br /&gt;
A script called [[#migrateonline|migrateonline]] was written to handle this kind of move. It is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; – vzmigrate is a util to seamlessly (no reboot of the ve necessary) move a ve from one host to another. This wrapper was initially written because virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from the arp/route tables, causing problems when the ve was started up on the dst system; [[#migrate|migrate]] mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrateonline emails ve owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space-delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as it was on the src virt. Note: &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; is equivalent to &amp;lt;tt&amp;gt;migrateonline&amp;lt;/tt&amp;gt;, but will migrate a ve AND restart it in the process.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrateonline&lt;br /&gt;
usage: /usr/local/sbin/migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrateonline 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine.&lt;br /&gt;
&lt;br /&gt;
If they had backups, use the mvbackups command to move their backups to the new server:&lt;br /&gt;
&lt;br /&gt;
 mvbackups 1212 virt14 vz&lt;br /&gt;
&lt;br /&gt;
Rename the ve&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/migrated-1212&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/old-1212-migrated-20120404-noarchive&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems in mgmt to reflect the new path and server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If migrateonline does not work, you can try again using plain migrate – this will result in a brief reboot of the ve.&lt;br /&gt;
Before you try again, make sure of a few things:&lt;br /&gt;
&lt;br /&gt;
Depending on where the migration died, there may be partial data on the dst system in 1 of 2 places:&lt;br /&gt;
(given the example above)&lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
Before you run migrate again, you&#039;ll want to rename the directory so that all data is in 1212.migrated:&lt;br /&gt;
&lt;br /&gt;
 mv /vz/private/1212 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
This way, it will pick up where it left off and transfer only new/changed files.&lt;br /&gt;
&lt;br /&gt;
Likewise, if you want to speed up a migration, you can pre-seed the dst as follows:&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# rsync -avSH /vz/private/1212/ root@10.1.4.64:/vz/private/1212.migrated/&lt;br /&gt;
&lt;br /&gt;
Then when you run migrate or migrateonline, it will only need to move the changed files, and the migration will complete quickly.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline/migrate failures (migrate manually) ===&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say for whatever reason the migration fails. If it fails with [[#migrateonline|migrateonline]], you should try [[#migrate|migrate]] (which will reboot the customer&#039;s ve, so notify them ahead of time).&lt;br /&gt;
&lt;br /&gt;
You may want to run a [[#pre-seeding_a_migration|pre-seed]] rsync to see if you can find the problem. On older virts, we&#039;ve seen this problem due to a large logfile (which you can find and encourage the customer to remove/compress):&lt;br /&gt;
 find / -type f -size +1048576k -exec ls -lh {} +&lt;br /&gt;
&lt;br /&gt;
You may also see migration failing due to quota issues.&lt;br /&gt;
&lt;br /&gt;
You can try to resolve by copying any quota file into the file you need:&lt;br /&gt;
&lt;br /&gt;
 cp /var/vzquota/quota.1 /var/vzquota/quota.xxx&lt;br /&gt;
&lt;br /&gt;
If it complains about quota running, you should then be able to stop it:&lt;br /&gt;
&lt;br /&gt;
 vzquota off xxxx&lt;br /&gt;
&lt;br /&gt;
If all else fails, migrate to a new VEID, e.g. 1234 becomes 12341.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the rsync or [[#migrate|migrate]] fails, you can always move someone manually:&lt;br /&gt;
&lt;br /&gt;
1. stop ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v stop 1234&lt;br /&gt;
&lt;br /&gt;
2. copy over data&amp;lt;br&amp;gt;&lt;br /&gt;
 rsync -avSH /vz/private/1234/ root@1.1.1.1:/vzX/private/1234/&lt;br /&gt;
&lt;br /&gt;
NOTE: if you&#039;ve previously seeded the data (run rsync while the VE was up/running), and this is a subsequent rsync, make sure the last rsync you do (while the VE is not running) has the --delete option&lt;br /&gt;
&lt;br /&gt;
3. copy over conf&amp;lt;br&amp;gt;&lt;br /&gt;
 scp /vzconf/1234.conf root@1.1.1.1:/vzconf&lt;br /&gt;
&lt;br /&gt;
4. on dst, edit the conf to reflect the right vzX dir&amp;lt;br&amp;gt;&lt;br /&gt;
 vi /vzconf/1234.conf&lt;br /&gt;
&lt;br /&gt;
5. on src remove the IPs&amp;lt;br&amp;gt;&lt;br /&gt;
 ipdel 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
6. on dst add IPs &amp;lt;br&amp;gt;&lt;br /&gt;
 ipadd 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
7. on dst, start ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v start 1234&lt;br /&gt;
&lt;br /&gt;
8. cancel, then archive the ve on the src per the instructions above.&lt;br /&gt;
&lt;br /&gt;
=== migrate src=2.6.0 -&amp;gt; dst&amp;gt;=2.6.0, or mass-migration with customer notify ===&lt;br /&gt;
&lt;br /&gt;
A script called &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; was written to handle this kind of move. It is basically a wrapper for vzmigrate – vzmigrate is a util to seamlessly move a ve from one host to another. This wrapper was initially written because virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from the arp/route tables, causing problems when the ve was started up on the dst system; migrate mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrate emails ve owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space-delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as it was on the src virt. Note: migrateonline is equivalent to migrate, but will migrate a ve from one 2.6 &#039;&#039;&#039;kernel&#039;&#039;&#039; machine to another 2.6 kernel machine without restarting the ve.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrate&lt;br /&gt;
usage: /usr/local/sbin/migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrate 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which migrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
&lt;br /&gt;
On 2.6.1 you’ll also have to move the private area:&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, [[#cancelve|cancelve]] would offer to remove them. You want to say &#039;&#039;&#039;no&#039;&#039;&#039; to this option – removing them would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems as in the example above will create actual copies and consume lots more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
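The hardlink point can be seen locally: a mv within one filesystem is just a rename, so linked backup files keep sharing a single inode and no extra space is used (hypothetical throwaway paths):

```shell
# Two names for one inode, standing in for hardlinked backup rotations.
mkdir -p /tmp/bk/virt12 /tmp/bk/virt14
echo data > /tmp/bk/virt12/file
ln /tmp/bk/virt12/file /tmp/bk/virt12/file.hl   # hardlink: 2 names, 1 inode
mv /tmp/bk/virt12/file /tmp/bk/virt14/file      # same fs: rename only
# Link count is still 2 - nothing was duplicated:
stat -c %h /tmp/bk/virt14/file
# → 2
```

Across filesystems, mv must copy the data instead, which is why the cross-drive case above balloons disk usage.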
&lt;br /&gt;
To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, update the system, template (should be pre-selected as the same), and the shut down date.&lt;br /&gt;
&lt;br /&gt;
=== vzmigrate: src=2.6.1 -&amp;gt; dst&amp;gt;=2.6.0 ===&lt;br /&gt;
&lt;br /&gt;
This version of vzmigrate works properly with regard to handling ips. It will not notify ve owners of moves as in the above example. Other than that it’s essentially the same.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]#  vzmigrate 10.1.4.64 -r no 1212:1212:/vz/private/1212:/vz/root/1212&lt;br /&gt;
migrating on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which vzmigrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
[root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, &amp;lt;tt&amp;gt;cancelve&amp;lt;/tt&amp;gt; would offer to remove them. You want to say no to this option – removing them would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems as in the example above will create actual copies and consume lots more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, update the system, template (should be pre-selected as the same), and the shut down date.&lt;br /&gt;
&lt;br /&gt;
=== src=2.5.x ===&lt;br /&gt;
&lt;br /&gt;
First, go to the private dir:&lt;br /&gt;
&lt;br /&gt;
 cd /vz1/private/&lt;br /&gt;
&lt;br /&gt;
Stop the VE - make sure it stops totally cleanly.&lt;br /&gt;
 &lt;br /&gt;
 vzctl stop 1212&lt;br /&gt;
&lt;br /&gt;
Then you’d use vemove - a script written to copy over the config, create tarballs of the ve’s data on the destination virt, and cancel the ve on the source system (in this example we’re going to put a ve that was in /vz1/private on the src virt, in /vz/private on the dst virt):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vemove&lt;br /&gt;
ERROR: Usage: vemove veid target_ip target_path_dir&lt;br /&gt;
[root@virt12 sbin]# vemove 1212 10.1.4.64 /vz/private/1212&lt;br /&gt;
tar cfpP - 1212 --ignore-failed-read | (ssh -2 -c arcfour 10.1.4.64 &amp;quot;split - -b 1024m /vz/private/1212.tar&amp;quot; )&lt;br /&gt;
scp /vzconf/1212.conf 10.1.4.64:/vzconf&lt;br /&gt;
cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, cancelve would offer to remove them. You want to say no to this option – removing them would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems as in the example above will create actual copies and consume lots more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
When you are done, go to /vz/private on the dst virt; you will have files like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;1212.taraa&lt;br /&gt;
1212.tarab&lt;br /&gt;
1212.tarac&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each one 1024m (or less, for the last one) in size.&lt;br /&gt;
&lt;br /&gt;
Then, on the dst server, run:&lt;br /&gt;
&lt;br /&gt;
 cat 1212.tar?? | tar xpPBf -&lt;br /&gt;
&lt;br /&gt;
and after 20 mins or so it will be totally untarred.  Now since the conf&lt;br /&gt;
file is already there, you can go ahead and start the system.&lt;br /&gt;
&lt;br /&gt;
 vzctl start 1212&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, update the system, template (should be pre-selected as the same), and the shut down date.&lt;br /&gt;
&lt;br /&gt;
NOTE: you MUST tar the system up using the virtuozzo version of tar that&lt;br /&gt;
is on all the virt systems, and further you MUST untar the tarball with&lt;br /&gt;
the virtuozzo tar, using these options:  `&amp;lt;tt&amp;gt;tar xpPBf -&amp;lt;/tt&amp;gt;`&lt;br /&gt;
&lt;br /&gt;
If you tar up an entire VE and move it to a non-virtuozzo machine, that is&lt;br /&gt;
ok, and you can untar it there with normal tar commands, but do not untar&lt;br /&gt;
it and then repack it with a normal tar and expect it to work - you need&lt;br /&gt;
to use virtuozzo tar commands on virtuozzo tarballs to make it work.&lt;br /&gt;
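To illustrate just the split/reassemble mechanics (and nothing virtuozzo-specific), here is a local sketch with plain GNU tar on throwaway data – per the note above, real VE private areas must be packed and unpacked with the virtuozzo tar and &lt;tt&gt;xpPBf&lt;/tt&gt; options instead:

```shell
# Tiny throwaway tree, tarred through split (1k chunks here, vs. the
# 1024m chunks vemove uses). split names the pieces PREFIXaa, PREFIXab, ...
mkdir -p /tmp/ve-demo/etc
echo hello > /tmp/ve-demo/etc/motd
tar cf - -C /tmp ve-demo | split -b 1k - /tmp/ve-demo.tar
ls /tmp/ve-demo.tar*         # one or more chunk files
# Reassemble and extract, as with `cat 1212.tar?? | tar xpPBf -`:
mkdir -p /tmp/restore
cat /tmp/ve-demo.tar?? | tar xf - -C /tmp/restore
cat /tmp/restore/ve-demo/etc/motd
# → hello
```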
&lt;br /&gt;
The backups are sort of an exception, since we are just (usually)&lt;br /&gt;
restoring user data that was created after we gave them the system, and&lt;br /&gt;
therefore has nothing to do with magic symlinks or vz-rpms, etc.&lt;br /&gt;
&lt;br /&gt;
== Moving a VE on the same virt ==&lt;br /&gt;
&lt;br /&gt;
Easy way:&amp;lt;br&amp;gt;&lt;br /&gt;
Scenario 1: ve 123 is to be renamed 1231 and moved from vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:1231:/vz/private/1231:/vz/root/1231&lt;br /&gt;
&lt;br /&gt;
Scenario 2: ve 123 is to be moved from vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:123:/vz/private/123:/vz/root/123&lt;br /&gt;
&lt;br /&gt;
vzmlocal will reboot the ve at the end of the move.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; don&#039;t forget to fix /usr/local/sbin/backup.config for new disk location&lt;br /&gt;
&lt;br /&gt;
Manual/old way:&lt;br /&gt;
&lt;br /&gt;
1) &amp;lt;tt&amp;gt;vzctl stop 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2) &amp;lt;tt&amp;gt;mv /vz1/private/123 /vz/private/.&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(or cp -a if you want to copy)&lt;br /&gt;
3) in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts/123.conf&amp;lt;/tt&amp;gt; change value&amp;lt;br&amp;gt;&lt;br /&gt;
of &#039;&amp;lt;tt&amp;gt;VE_PRIVATE&amp;lt;/tt&amp;gt;&#039; variable to point to a new private area location&lt;br /&gt;
4) &amp;lt;tt&amp;gt;vzctl start 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
5) update backups if needed: &amp;lt;tt&amp;gt;mvbackups 123 virtX virt1 vz&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
6) update management screens&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notes: a) absolute path to private area is stored in quota file &amp;lt;tt&amp;gt;/var/vzquota/quota.123&amp;lt;/tt&amp;gt; - so during first startup quota will be recalculated.&amp;lt;br&amp;gt;&lt;br /&gt;
b) if you&#039;re going to write some script to do the job, you MUST be sure that $VEID won&#039;t be expanded to an empty string in the ve config file – i.e. you need to escape &#039;$&#039;. Otherwise you might end up with:&lt;br /&gt;
&lt;br /&gt;
 VE_PRIVATE=&amp;quot;/vz/private/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in config, and &#039;vzctl destroy&#039; for this VE ID &#039;&#039;&#039;will remove everything under /vz/private/ directory&#039;&#039;&#039;.&lt;br /&gt;
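The escaping pitfall can be demonstrated in any shell: an unescaped $VEID in a heredoc expands when the script writes the file, not when vzctl later reads it (file names here are illustrative):

```shell
VEID=   # simulate the variable being unset/empty in the script
# WRONG: $VEID expands (here to nothing) as the heredoc is written,
# so the config ends up pointing at /vz/private/ itself.
cat > /tmp/wrong.conf <<EOF
VE_PRIVATE="/vz/private/$VEID"
EOF
cat /tmp/wrong.conf
# → VE_PRIVATE="/vz/private/"
# RIGHT: escape the '$' so the literal string $VEID lands in the config
# for vzctl to expand per-VE.
cat > /tmp/right.conf <<EOF
VE_PRIVATE="/vz/private/\$VEID"
EOF
cat /tmp/right.conf
# → VE_PRIVATE="/vz/private/$VEID"
```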
&lt;br /&gt;
== Adding a veth device to a VE ==&lt;br /&gt;
&lt;br /&gt;
Not totally sure what this is, but a customer asked for it and here&#039;s what we did (as instructed by vz support):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;v set 99 --netif_add eth99  --save&lt;br /&gt;
ipdel 99 69.55.230.58&lt;br /&gt;
v set 99 --ifname eth99 --ipadd 69.55.230.58 --save&lt;br /&gt;
v set 99 --ifname eth99 --gateway 69.55.230.1 --save&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active      eth0              veth77.77,veth99.99&lt;br /&gt;
veth_net          active&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
br99             bridge     net99&lt;br /&gt;
veth99.99        veth       net99&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        net99            69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br0 eth0&lt;br /&gt;
&lt;br /&gt;
(will remove 99 from orig net and move to veth_net)&lt;br /&gt;
vznetcfg net addif veth_net veth99.99&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
(delete the old crap)&lt;br /&gt;
vznetcfg net del net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then, to add another device in&lt;br /&gt;
&lt;br /&gt;
v set 77 --netif_add eth77  --save&lt;br /&gt;
ipdel 77 69.55.230.78&lt;br /&gt;
v set 77 --ifname eth77 --ipadd 69.55.230.78 --save&lt;br /&gt;
v set 77 --ifname eth77 --gateway 69.55.230.1 --save&lt;br /&gt;
v set 77 --save --ifname eth77 --network veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif veth_net veth77.77&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth       veth_net&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
another example&lt;br /&gt;
&lt;br /&gt;
v set 1182 --netif_add eth1182  --save&lt;br /&gt;
ipdel 1182 69.55.236.217&lt;br /&gt;
v set 1182 --ifname eth1182 --ipadd 69.55.236.217 --save&lt;br /&gt;
v set 1182 --ifname eth1182 --gateway 69.55.236.1 --save&lt;br /&gt;
vznetcfg net addif veth_net veth1182.1182&lt;br /&gt;
v set 1182 --save --ifname eth1182 --network veth_net&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unused/not working commands:&lt;br /&gt;
ifconfig veth99.0 0&lt;br /&gt;
vznetcfg net list&lt;br /&gt;
vznetcfg br new br99 net99&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg br show&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
&lt;br /&gt;
vznetcfg br new br1182 net1182&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif eth0 net1182&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&lt;br /&gt;
--- 8&amp;lt; ---&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new net&lt;br /&gt;
# vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vzctl set 99 --save --netif_add eth0 (at this stage veth99.0 interface have to appear&lt;br /&gt;
on node)&lt;br /&gt;
# vzctl set 99 --save --ifname eth0 --ipadd 69.55.230.58 (and probably few more arguments&lt;br /&gt;
here - see &#039;man vzctl&#039;)&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Assigning/removing ips from a VE ==&lt;br /&gt;
&lt;br /&gt;
1. Add or remove ips:&lt;br /&gt;
 ipdel 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
 ipadd 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
&lt;br /&gt;
2. update Mgmt screens&lt;br /&gt;
&lt;br /&gt;
3. offer to update any DNS we do for them&lt;br /&gt;
&lt;br /&gt;
4. check to see if we had rules for the old IP in the firewall&lt;br /&gt;
&lt;br /&gt;
== Enabling tun device for a ve ==&lt;br /&gt;
Note, there’s a command for this: [[#addtun|addtun]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
Make sure the tun.o module is already loaded before Virtuozzo is started: &lt;br /&gt;
 lsmod &lt;br /&gt;
Allow the VPS to use the TUN/TAP device: &lt;br /&gt;
 vzctl set 101 --devices c:10:200:rw --save &lt;br /&gt;
Create the corresponding device inside the VPS and set the proper permissions: &lt;br /&gt;
 vzctl exec 101 mkdir -p /dev/net &lt;br /&gt;
 vzctl exec 101 mknod /dev/net/tun c 10 200 &lt;br /&gt;
 vzctl exec 101 chmod 600 /dev/net/tun&lt;br /&gt;
&lt;br /&gt;
== Remaking a system (on same virt) ==&lt;br /&gt;
&lt;br /&gt;
1. [[#cancelve|cancelve]] (or v destroy x - ONLY if you&#039;re POSITIVE no data needs to be saved)&lt;br /&gt;
&lt;br /&gt;
2. [[#vemake|vemake]] using same veid&lt;br /&gt;
&lt;br /&gt;
3. [[#mvbackups|mvbackups]] or [[#vb|vb]] (if new mount point)&lt;br /&gt;
&lt;br /&gt;
4. update mgmt with new dir/ip &lt;br /&gt;
&lt;br /&gt;
5. update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Re-initialize quota for a VE ==&lt;br /&gt;
&lt;br /&gt;
There’s a command for this now: [[#clearquota|clearquota]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
&lt;br /&gt;
vzctl stop 1&lt;br /&gt;
vzquota drop 1&lt;br /&gt;
vzctl start 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Traffic accounting on linux ==&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - all tracking is done via bwdb now. This is how we used to track traffic.&lt;br /&gt;
&lt;br /&gt;
TODO: update for diff versions of vz&lt;br /&gt;
&lt;br /&gt;
Unlike FreeBSD, where we have to add firewall count rules to the system to count the traffic, Virtuozzo counts the traffic for us.  You can see the current traffic stats by running `vznetstat`:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, each VEID is on a line with its output and input byte counts.  So we simply run a cron job:&lt;br /&gt;
&lt;br /&gt;
 4,9,14,19,24,29,34,39,44,49,55,59 * * * * /root/vztrafdump.sh&lt;br /&gt;
&lt;br /&gt;
Just like we do on FreeBSD - this one goes through all the VEs in /vz/private and greps the line from vznetstat that matches them and dumps it in /jc_traffic_dump on their system.  Then it does it again for all the VEs in /vz1/private.  It is important to note that vznetstat runs only once, and the grepping is done from a temporary file that contains that output - we do this because running vznetstat once for each VE that we read out of /vz/private and /vz1/private would take way too long and be too intensive.&lt;br /&gt;
&lt;br /&gt;
You do not need to do anything to facilitate this other than make sure that cron job is running - the vznetstat counters are always running, and any new VEs that are added to the system will be accounted for automatically.&lt;br /&gt;
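The snapshot-then-grep idea described above can be sketched roughly like this (a minimal stand-in on mock data - this is not the actual source of vztrafdump.sh, and the file names are illustrative):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: run vznetstat once into a temp file, then grep each VE's line
# out of that snapshot instead of re-running vznetstat per VE. Mock data
# stands in for the real vznetstat output.
set -e
work=$(mktemp -d)

# On a real virt this would be: vznetstat > "$work/vznetstat.out"
cat > "$work/vznetstat.out" <<'EOF'
24218     1            484M             39M
2451      1           2224M            265M
EOF

# On a real virt the VE list would come from ls /vz/private /vz1/private
for veid in 24218 2451; do
    grep "^$veid " "$work/vznetstat.out" >> "$work/jc_traffic_dump.$veid"
done

result=$(cat "$work/jc_traffic_dump.2451")
echo "$result"
rm -r "$work"
```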
&lt;br /&gt;
Traffic resetting no longer works with vz 2.6, so we disable the vztrafdump.sh on those virts.&lt;br /&gt;
&lt;br /&gt;
== Watchdog script ==&lt;br /&gt;
&lt;br /&gt;
On some of the older virts, we have a watchdog running that kills procs that are deemed bad per the following:&lt;br /&gt;
&lt;br /&gt;
/root/watchdog from quar1&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [5-9]...&lt;br /&gt;
  then&lt;br /&gt;
# 50-90%&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;...:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;....m&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 1000min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
  if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [1-9]...&lt;br /&gt;
  then&lt;br /&gt;
# running for 10-90 percent&lt;br /&gt;
    if echo $line | awk &#039;{print $NF}&#039; | egrep &#039;cfusion|counter|vchkpw&#039;&lt;br /&gt;
    then&lt;br /&gt;
&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[2-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# between 2-9min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      elif echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[0-9][0-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# up to 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
    fi&lt;br /&gt;
  fi&amp;lt;/pre&amp;gt;&lt;br /&gt;
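The watchdog above selects processes by counting ps-style fields from the right with awk. A small runnable illustration of that field math (the sample line format is an assumption loosely modeled on ps output, not taken from the watchdog itself):&lt;br /&gt;

```shell
#!/bin/sh
# Counting from the right: $(NF-3) picks the CPU column and $(NF-1) picks
# the elapsed TIME column, regardless of how many leading fields exist.
line="1234  root  62.5  1.0  120:33  httpd"

cpu=$(echo "$line" | awk '{print $(NF-3)}')
ptime=$(echo "$line" | awk '{print $(NF-1)}')
echo "cpu=$cpu time=$ptime"

# The same pattern tests the watchdog applies: high CPU...
if echo "$cpu" | grep -q '[5-9]...'; then
    # ...and a TIME like ddd:dd, i.e. running for more than 99 minutes.
    if echo "$ptime" | grep -q '...:..'; then
        echo "would kill"
    fi
fi
```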
&lt;br /&gt;
&lt;br /&gt;
== Misc Linux Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space: when you configure a linux system with a certain amount of disk space (the default is 4gigs), you do not actually use up 4gigs of space on the system.  The diskspace setting for a user is simply a cap - they only use up as much space on the actual disk drive as they are actually using.&lt;br /&gt;
&lt;br /&gt;
When you create a new linux system, even though there are some 300 RPMs or so installed, if you run `df -k` you will see that the entire 4gig partition is empty - no space is being used.  This is because the files in their system are &amp;quot;magic symlinks&amp;quot; to the template for their OS that is in /vz/template - however, any changes to any of those files will &amp;quot;disconnect&amp;quot; them and they will immediately begin using space in their system.  Further, any new files uploaded (even if those new files overwrite existing files) will take up space on the partition.&lt;br /&gt;
&lt;br /&gt;
=== Can&#039;t change file /etc/sysconfig/network ===&lt;br /&gt;
If you see this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt8 root]# vzctl stop 160 ; vzctl start 160&lt;br /&gt;
VE is not running&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is unmounted&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
bash ERROR: Can&#039;t change file /etc/sysconfig/network&lt;br /&gt;
Deleting IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
VE is unmounted&lt;br /&gt;
[root@virt8 root]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It probably means they no longer have /bin/bash - copy one in for them.&lt;br /&gt;
 &lt;br /&gt;
ALSO: another possibility is that they have removed the `ed` RPM from their system - it needs to be reinstalled into their system.  But since their system is down, this is tricky ...&lt;br /&gt;
&lt;br /&gt;
The VE startup scripts used by &#039;vzctl&#039; expect the &#039;ed&#039; package to be available inside the VE. So if the &#039;ed&#039; package is enabled in the OS template config and in the OS template the VE (VE #827 in this case) is based on, this error should be fixed.&lt;br /&gt;
&lt;br /&gt;
Yes, it is possible to add an RPM to a VE while it is not running. Try the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# cd /vz/template/&amp;lt;OS_template_with_ed_package&amp;gt;/&lt;br /&gt;
# vzctl mount 827&lt;br /&gt;
# rpm -Uvh --root /vz/root/827 --veid 827 ed-0.2-25.i386.vz.rpm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usually there&#039;s an error, but it&#039;s OK.&lt;br /&gt;
&lt;br /&gt;
Note: replace &#039;ed-0.2-25.i386.vz.rpm&#039; in last command with actual&lt;br /&gt;
version of &#039;ed&#039; package you have.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== What template is user using ===&lt;br /&gt;
So how do you know what template the user has?  cat their conf file and it is listed in there.  For example, if the conf file has:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1103&lt;br /&gt;
…snip…&lt;br /&gt;
OSTEMPLATE=&amp;quot;debian-3.0/20030822&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;mod_perl-deb30/20030707 mod_ssl-deb30/20030703 mysql-deb30/20030707 proftpd-deb30/20030703 webmin-deb30/20030823 &amp;quot;&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then they are on debian 3.0, all of their system RPMs are in /vz/template/debian-3.0, and they are using version 20030822 of that debian 3.0 template. They’ve also got additional packages installed (mod_perl, mod_ssl, etc.), which are also found under /vz/template.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Edits needed to run java ===&lt;br /&gt;
&lt;br /&gt;
When we first created the VEs, the default setting for privvmpages was 93000:94000, which was high enough that most people never had problems.  However, you can&#039;t run java or the jdk or tomcat or anything java-related with that setting.  We have found that by setting privvmpages to 610000:615000, java runs just fine.  That is now the default setting. It is exceedingly rare that anyone needs it higher than that, although we have seen it once or twice.&lt;br /&gt;
&lt;br /&gt;
Any problems with java at all - the first thing you need to do is see if the failcnt has risen for privvmpages.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzctl start 160&lt;br /&gt;
Starting VE ...&lt;br /&gt;
vzquota : (error) Quota on syscall for 160: Device or resource busy&lt;br /&gt;
Running vzquota on failed for VE 160 [3]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This happens because the pwd is _in_ their private directory - you can&#039;t start the ve until you move out of it.&lt;br /&gt;
&lt;br /&gt;
People seem to have trouble with php if they are clueless newbies.  Here are two common problems/solutions:&lt;br /&gt;
&lt;br /&gt;
no... but i figured it out myself. problem was the php.ini file that came&lt;br /&gt;
vanilla with the account was not configured to work with apache (the&lt;br /&gt;
ENGINE directive was set to off).&lt;br /&gt;
&lt;br /&gt;
everything else seems fine now.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
the problem was in the php.ini file.  I noticed that is wasnt showing&lt;br /&gt;
the code when it was in an html file so I looked at the php.ini file&lt;br /&gt;
and had to change it so it recognized &amp;lt;? tags aswell as &amp;lt;?php tags.&lt;br /&gt;
&lt;br /&gt;
Also, make sure this is added to httpd.conf:&lt;br /&gt;
    AddType application/x-httpd-php .php&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Set time zone ===&lt;br /&gt;
&lt;br /&gt;
You can change the timezone by doing this:&lt;br /&gt;
&lt;br /&gt;
 ln -sf /usr/share/zoneinfo/&amp;lt;zone&amp;gt; /etc/localtime&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;zone&amp;gt; is the zone you want in the /usr/share/zoneinfo/ directory.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Failing shm_open calls ===&lt;br /&gt;
&lt;br /&gt;
Failing shm_open calls:&lt;br /&gt;
&lt;br /&gt;
First, please check if /dev/shm is mounted inside the VE.&lt;br /&gt;
The &#039;cat /proc/mounts&#039; command should show something like this:&lt;br /&gt;
 tmpfs /dev/shm tmpfs rw 0 0&lt;br /&gt;
&lt;br /&gt;
If /dev/shm is not mounted, you have two ways to solve the issue:&lt;br /&gt;
1. execute the following command inside the VE (doesn&#039;t require a VE reboot):&lt;br /&gt;
 mount -t tmpfs none /dev/shm&lt;br /&gt;
2. add the following line to /etc/fstab inside the VE and reboot it:&lt;br /&gt;
 tmpfs         /dev/shm        tmpfs           defaults        0 0&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You can have a ve mounted but not running. Just:&lt;br /&gt;
 vzctl mount &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Debian can&#039;t get on the network ===&lt;br /&gt;
When a debian sys can’t get on the network, and you try:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set 1046 --ipadd 69.55.227.117&lt;br /&gt;
Adding IP address(es): 69.55.227.117&lt;br /&gt;
Failed to bring up lo.&lt;br /&gt;
Failed to bring up venet0.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They probably removed the iproute package, which must be the one from swsoft. To restore:&lt;br /&gt;
&amp;lt;pre&amp;gt;# dpkg -i --veid=1046 --admindir=/vz1/private/1046/root/var/lib/dpkg --instdir=/vz1/private/1046/root/ /vz/template/debian-3.0/iproute_20010824-8_i386.vz.deb&lt;br /&gt;
(Reading database ... 16007 files and directories currently installed.)&lt;br /&gt;
Preparing to replace iproute 20010824-8 (using .../iproute_20010824-8_i386.vz.deb) ...&lt;br /&gt;
Unpacking replacement iproute ...&lt;br /&gt;
Setting up iproute (20010824-8) ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then restart their ve&lt;br /&gt;
&lt;br /&gt;
=== CentOS can&#039;t get on the network ===&lt;br /&gt;
&lt;br /&gt;
When a CentOS system was running but can&#039;t get on the network, they probably updated iproute via yum.&lt;br /&gt;
&lt;br /&gt;
Copy an older version of the iproute rpm to their VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cp -p /root/iproute/iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm /vz2/private/&amp;lt;1527&amp;gt;/fs/root&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enter their ve, install the rpm, restart the network, and prevent yum from updating iproute.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ve &amp;lt;1527&amp;gt;&lt;br /&gt;
rpm -i --force iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm&lt;br /&gt;
rpm -q iproute&lt;br /&gt;
rpm -e &amp;lt;new iproute rpm&amp;gt;&lt;br /&gt;
service network restart&lt;br /&gt;
echo &amp;quot;exclude=iproute&amp;quot; &amp;gt;&amp;gt;/etc/yum/ip-exclude&lt;br /&gt;
echo &amp;quot;include=/etc/yum/ip-exclude&amp;quot; &amp;gt;&amp;gt;/etc/yum.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
In a ve I do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /&lt;br /&gt;
du -h .&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and get: 483M    .&lt;br /&gt;
&lt;br /&gt;
I do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;bash-2.05a# df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  2.3G  1.7G  56% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
how can this be?&lt;br /&gt;
&lt;br /&gt;
Is it possible that the quota file was corrupted somehow? Please try:&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl stop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota drop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota init &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzctl start &amp;lt;VEID&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
How to stop vz from starting after reboot:&lt;br /&gt;
&lt;br /&gt;
 VIRTUOZZO=no &lt;br /&gt;
in &lt;br /&gt;
 /etc/sysconfig/vz&lt;br /&gt;
&lt;br /&gt;
To start: &lt;br /&gt;
 service vz start&lt;br /&gt;
(after setting VIRTUOZZO=yes in /etc/sysconfig/vz)&lt;br /&gt;
&lt;br /&gt;
service vz restart will do some kind of &#039;soft reboot&#039; -- restart all&lt;br /&gt;
VPSes and reload modules without rebooting the node&lt;br /&gt;
&lt;br /&gt;
if you need to shut down all VPSes really really fast, run killall -9 init&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Postfix tip:&lt;br /&gt;
&lt;br /&gt;
You may want to tweak settings: default_process_limit=10&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* uptime/load that vz users see isn’t what we see - it&#039;s their own load&lt;br /&gt;
* Vz overwrites resolv.conf on every boot with what’s in the ve config file. It also overwrites the hostname&lt;br /&gt;
* Quotas in vz: set Gid_limit in the ve config file and restart the ve. The errors leading up to this will be &#039;out of files&#039; with no inode or space issues. Set it to 2000 in the conf and restart the ve&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
== vm ==&lt;br /&gt;
&lt;br /&gt;
To create a new Linux VPS&lt;br /&gt;
&lt;br /&gt;
* get an IP address from mgmt screen&lt;br /&gt;
* run df -h to figure out which disk has enough space&lt;br /&gt;
* run vm with no arguments to get syntax and exact template name&lt;br /&gt;
* use col0xxxx for hostname if they don’t give you a hostname&lt;br /&gt;
* copy over dir, veid, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
 vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; )&lt;br /&gt;
&lt;br /&gt;
== disable ==&lt;br /&gt;
To disable a customer&#039;s Virtuozzo VPS server:&lt;br /&gt;
&lt;br /&gt;
* stop the server&lt;br /&gt;
  vzctl stop &amp;lt;veid&amp;gt;&lt;br /&gt;
* prevent restart on server reboot and user restart via Control Panel&lt;br /&gt;
  vzctl set &amp;lt;veid&amp;gt; --disabled=yes --offline_management=no --save&lt;br /&gt;
&lt;br /&gt;
== cancelve ==&lt;br /&gt;
When a customer cancels all service.&lt;br /&gt;
&lt;br /&gt;
 cancelve &amp;lt;veid&amp;gt;&lt;br /&gt;
this will:&lt;br /&gt;
* stop a ve&lt;br /&gt;
* check for backups (offer to remove them from the backup server and the backup.config)&lt;br /&gt;
* rename the private dir&lt;br /&gt;
* check for PTR, provide the commands to reset to default&lt;br /&gt;
* rename the ve’s config&lt;br /&gt;
* remind you to remove firewall rules&lt;br /&gt;
* remind you to remove DNS entries&lt;br /&gt;
&lt;br /&gt;
== ipadd ==&lt;br /&gt;
 ipadd  &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
adds ip(s) to a ve&lt;br /&gt;
&lt;br /&gt;
== ipdel ==&lt;br /&gt;
 ipdel &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
removes ip(s) from a ve&lt;br /&gt;
&lt;br /&gt;
== vc ==&lt;br /&gt;
 vc &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
display the contents of the vz config file for the &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;cat /vzconf/&amp;lt;veid&amp;gt;.conf&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vl ==&lt;br /&gt;
 vl&lt;br /&gt;
&lt;br /&gt;
List all virtual environments running on this machine.&lt;br /&gt;
&lt;br /&gt;
displays a list of ve #’s, one per line (ostensibly for use in a for loop)&lt;br /&gt;
&lt;br /&gt;
== vp ==&lt;br /&gt;
 vp &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
show processes running under this &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzps auxww -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vpe ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vpe &amp;lt;veid&amp;gt; &lt;br /&gt;
this will allow you to do a vp when a ve is running out of control. It is the equivalent of (deprecated since vp operates outside the VPS):&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set &amp;lt;veid&amp;gt; --kmemsize 2100000:2200000&lt;br /&gt;
vzctl exec &amp;lt;veid&amp;gt; ps auxw&lt;br /&gt;
vzctl set &amp;lt;veid&amp;gt; --kmemsize (ve’s orig lvalue):(ve’s orig hvalue)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vt ==&lt;br /&gt;
 vt &amp;lt;veid&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vztop -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vr ==&lt;br /&gt;
 vr &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart a &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl stop &amp;lt;veid&amp;gt;; vzctl start &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
You can run this even if the ve is down - the stop command will just fail&lt;br /&gt;
&lt;br /&gt;
== vs ==&lt;br /&gt;
 vs [veid]&lt;br /&gt;
displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve configured on the system (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;)&lt;br /&gt;
If passed an argument, gives the status for just that ve. &lt;br /&gt;
A running system looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 16066 exist mounted running&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running (but does exist) looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 9990 exist unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running and doesn’t exist looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 421 deleted unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs2 ==&lt;br /&gt;
 vs2 [veid]&lt;br /&gt;
this is similar to vs in that it displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve,&lt;br /&gt;
but the difference is its list comes from doing an ls on the data dirs. This was meant to catch&lt;br /&gt;
the rare case where a ve exists on disk but is not configured.&lt;br /&gt;
&lt;br /&gt;
== vw ==&lt;br /&gt;
 vw [veid]&lt;br /&gt;
displays the output of ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;’ (the equivalent of &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; w&amp;lt;/tt&amp;gt;) for each configured ve (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;). Useful for determining which ve is contributing to a heavily-loaded system.&lt;br /&gt;
If passed an argument, gives ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;‘ output for just that ve. &lt;br /&gt;
Ex:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt2 etc]# vw&lt;br /&gt;
134&lt;br /&gt;
 10:52pm  up 79 days,  6:14,  0 users,  load average: 0.02, 0.02, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16027&lt;br /&gt;
  2:52pm  up 7 days, 19:54,  0 users,  load average: 0.00, 0.00, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16055&lt;br /&gt;
  2:52pm  up 79 days,  6:38,  0 users,  load average: 0.00, 0.04, 0.07&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vwe ==&lt;br /&gt;
 vwe [constraint]&lt;br /&gt;
just like &amp;lt;tt&amp;gt;vw&amp;lt;/tt&amp;gt;, but takes a constraint as an argument and only shows ve’s with loads &amp;gt;= the constraint provided. If no constraint is provided, 1 is used by default.&lt;br /&gt;
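The filtering vwe does can be illustrated like this (a sketch on mock data - the real script wraps vw and parses real load averages; the field layout here is an assumption):&lt;br /&gt;

```shell
#!/bin/sh
# Keep only ve's whose 1-minute load average is at or above the
# constraint, defaulting to 1 when no argument is given.
constraint=${1:-1}

# Mock "veid load1min" pairs standing in for parsed vw output.
out=$(cat <<'EOF' | awk -v min="$constraint" '$2 + 0 >= min + 0 {print $1, $2}'
134 0.02
16027 1.52
9990 3.10
EOF
)
echo "$out"
```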
&lt;br /&gt;
== vzs ==&lt;br /&gt;
 vzs [veid]&lt;br /&gt;
displays the beancounter status for all ve’s, or a particular ve if an argument is passed&lt;br /&gt;
&lt;br /&gt;
== ve ==&lt;br /&gt;
 ve &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
enter a &amp;lt;veid&amp;gt; as root.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl enter &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vx ==&lt;br /&gt;
 vx &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&lt;br /&gt;
&lt;br /&gt;
execute a single command in the &amp;lt;veid&amp;gt; environment.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;/usr/sbin/vzctl exec &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs [count]&lt;br /&gt;
a script which outputs a continuous report (or a fixed number of reports if a count is passed) of processes stuck in the D state and which VPS’s those procs belong to.&lt;br /&gt;
&lt;br /&gt;
== setmem ==&lt;br /&gt;
 setmem VEID &amp;lt;256|512|768|1024|1280|1536|2048&amp;gt; [burst GB]&lt;br /&gt;
adjusts the memory resources for the VE. If this is on VZ ver &amp;gt;4, you will also be prompted to provide a burst ram amount. Sample usage:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# setmem 21671 1024 4&lt;br /&gt;
/usr/sbin/vzctl set 21671 --ram 1024m --swap 3072m --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
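The sample above suggests setmem grants swap equal to burst minus ram (1024m ram + 3072m swap = 4G burst). A quick check of that arithmetic (the formula is inferred from the example, not read from setmem&#039;s source):&lt;br /&gt;

```shell
#!/bin/sh
# swap = burst - ram, working in MB; veid 21671 is the example from above.
ram_mb=1024
burst_gb=4
swap_mb=$((burst_gb * 1024 - ram_mb))
echo "vzctl set 21671 --ram ${ram_mb}m --swap ${swap_mb}m --save"
```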
&lt;br /&gt;
== afacheck.sh ==&lt;br /&gt;
 afacheck.sh&lt;br /&gt;
displays the health/status of containers and mirrors on an adaptec card (currently quar1, tempvirt1-2, virt9, virt10) - all others are LSI&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update virt scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be set up as a cronjob to watch processes on a virt when the load&lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== findbackuppigs.pl ==&lt;br /&gt;
 findbackuppigs.pl&lt;br /&gt;
looks for files larger than 50MB which customers have asked us to backup. Emails matches&lt;br /&gt;
to linux@johncompanies.com&lt;br /&gt;
&lt;br /&gt;
== gatherlinux.pl ==&lt;br /&gt;
 gatherlinux.pl&lt;br /&gt;
gathers up data about ve’s configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficgather.pl ==&lt;br /&gt;
 linuxtrafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by ve to support@johncompanies.com and payments@johncompanies.com.&lt;br /&gt;
Optional arguments are year and month (must be in the past). If not passed, assumes last month. Relies on &lt;br /&gt;
traffic logs created by netstatreset and netstatbackup&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch.pl ==&lt;br /&gt;
 linuxtrafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo versions &amp;lt;= 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch2.pl ==&lt;br /&gt;
 linuxtrafficwatch2.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo version 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg  - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== mb (linux) ==&lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== migrate ==&lt;br /&gt;
 migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; - a util to seamlessly move a ve from one host to another. This wrapper was written because virtuozzo version 2.6 had a bug where the ve’s ip(s) on the src system were not properly removed from the arp/route tables; this script mitigates that. Since it makes multiple ssh connections to the target host, it’s a good idea to put the pub key for the src system in the authorized_keys file on the target host. In addition, it emails ve owners when their migration starts and stops (if they place email addresses in a file on their system: /migrate_notify). To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
== migrateonline ==&lt;br /&gt;
 migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is the same as migrate but will migrate a ve in &amp;lt;tt&amp;gt;--online&amp;lt;/tt&amp;gt; mode, which means it won’t be shut down at the end of the migration. This only works when migrating ve’s between 2 machines running a 2.6 kernel (currently tempvirt1-2, virt16-19, virt12). If you get an error that the machine you’re trying to migrate to has a different CPU or features, etc., then you have to edit the file and add the -f switch to the vzmigrate line - you can basically ignore this kind of warning (but never ignore a warning about missing templates on the destination node). NOTE: this edit (if made to migrateonline) will be overwritten by the base script during each night’s backup.&lt;br /&gt;
&lt;br /&gt;
== netstatbackup ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup &lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== netstatbackup2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== netstatreset ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset&lt;br /&gt;
writes traffic count data to a logfile and resets counters to 0. Works on virtuozzo versions 2.5.x &lt;br /&gt;
&lt;br /&gt;
== netstatreset2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatchlinux ==&lt;br /&gt;
 orphanedbackupwatchlinux &lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to &lt;br /&gt;
delete them&lt;br /&gt;
&lt;br /&gt;
== rsync.backup (linux) ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== startvirt.pl ==&lt;br /&gt;
 startvirt.pl&lt;br /&gt;
forks off start ve commands - keeps 6 running at a time. This is not to be used on systems where fastboot is enabled, as it circumvents the benefit of fastboot. The script will occasionally not exit gracefully and will continue to use up CPU, so it should be watched. Also, don’t exit from the script till you’re sure all ve’s are started - if you do, you need to start them manually and may have to free up locks. On some systems the startvirt script doesn’t exit cleanly and you have to ^C out of it. Be careful though - doing so can leave some VE’s in an odd bootup state and you may need to ‘vr’ them manually. You should check to see which ve’s aren’t running and/or confirm all have started when ^C’ing out of startvirt.&lt;br /&gt;
&lt;br /&gt;
== taskdone (linux) ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was &lt;br /&gt;
executed as the subject&lt;br /&gt;
&lt;br /&gt;
== vb (linux) ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/sbin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vemakeXX ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemakerh9 &lt;br /&gt;
ve create script for RH9 (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian30 &lt;br /&gt;
ve create script for debian 3.0 (Woody) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian31 &lt;br /&gt;
ve create script for debian 3.1 (Sarge) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian40 &lt;br /&gt;
ve create script for debian 4.0 (Etch) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakefedora, vemakefedora2, vemakefedora4, vemakefedora5, vemakefedora6, vemakefedora7&lt;br /&gt;
ve create script for fedora core 1, 2, 4, 5, 6, 7 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakecentos3, vemakecentos4&lt;br /&gt;
ve create script for centos 3, 4 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakesuse, vemakesuse93, vemakesuse100&lt;br /&gt;
ve create script for suse 9.2, 9.3, 10.0 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakeubuntu5, vemakeubuntu606, vemakeubuntu610, vemakeubuntu704&lt;br /&gt;
ve create script for ubuntu 5.10, 6.06, 6.10, 7.04 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
== vemove ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemove &amp;lt;veid&amp;gt; &amp;lt;target_ip&amp;gt; &amp;lt;/vz/private/123&amp;gt;&lt;br /&gt;
this script simplifies the old way of moving ve’s from one system to another – in short, moving a ve to or from a virt running virtuozzo &amp;lt; 2.6.x&lt;br /&gt;
It’s the equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;tar cfpP - &amp;lt;veid&amp;gt; --ignore-failed-read | (ssh -2 -c arcfour &amp;lt;target_ip&amp;gt; &amp;quot;split - -b 1024m &amp;lt;/vz/private/123&amp;gt;.tar&amp;quot; )&amp;lt;/tt&amp;gt;&lt;br /&gt;
This should only be used if migrate/vzmigrate can’t be used.&lt;br /&gt;
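For reference, a self-contained sketch (not the vemove script itself) of how chunks produced by split(1) can be recombined and extracted on the receiving side – this uses throwaway paths under /tmp, not the real /vz layout:&lt;br /&gt;

```shell
# hedged sketch: demonstrate reassembling split(1) chunks into a tar
# stream, as the target of the vemove pipeline would need to
rm -rf /tmp/vemove_demo
mkdir -p /tmp/vemove_demo/src
cd /tmp/vemove_demo
echo "hello" > src/file.txt
# sender side: tar stream piped into split, 1k chunks named archive.tar??
tar cf - src | split -b 1k - archive.tar
# receiver side: concatenate the chunks in suffix order and extract
mkdir -p out
cat archive.tar* | tar xf - -C out
cat out/src/file.txt    # hello
```

The same `cat chunks | tar xf -` idea applies regardless of chunk size; only the prefix and chunk size change.&lt;br /&gt;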
&lt;br /&gt;
== vim.watchdog ==&lt;br /&gt;
 vim.watchdog &lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu. Works on virtuozzo versions 2.5.x&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog2 ==&lt;br /&gt;
 vim.watchdog2&lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu.&lt;br /&gt;
Works on virtuozzo versions 2.6.x.&lt;br /&gt;
&lt;br /&gt;
== vzmigrate ==&lt;br /&gt;
 vzmigrate &amp;lt;target_ip&amp;gt; -r no &amp;lt;veid&amp;gt;:[dst veid]:[dst /vzX/private/veid]:[dst /vzX/root/veid]&lt;br /&gt;
(this is the raw command “wrapped” by migrate/migrateonline) This will seamlessly move a ve from one host to another. The ve keeps running for the duration of the migration until the very end, when it’s shut down, its ip is moved, and it’s started up on the target system. The filesystem on the src will remain. This should be watched – occasionally the move will time out and leave the system shut down. If the target private and root aren’t specified, it just puts the ve in /vz. Only works when both systems are running virtuozzo 2.6.x&lt;br /&gt;
&lt;br /&gt;
== vztrafdump.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions &amp;lt;= 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== vztrafdump2.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump2.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== addtun ==&lt;br /&gt;
 addtun &amp;lt;veid&amp;gt;&lt;br /&gt;
Adds a tun device to a ve.&lt;br /&gt;
&lt;br /&gt;
== bwcap ==&lt;br /&gt;
 bwcap &amp;lt;veid&amp;gt; &amp;lt;kbps&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;bwcap 1234 512&amp;lt;/tt&amp;gt;&lt;br /&gt;
Caps a VE’s bandwidth to the amount given&lt;br /&gt;
&lt;br /&gt;
== setdisk ==&lt;br /&gt;
 setdisk &amp;lt;veid&amp;gt; &amp;lt;diskspace in GB&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;setdisk 1234 5&amp;lt;/tt&amp;gt;&lt;br /&gt;
Sets a VE’s disk space to the given amount&lt;br /&gt;
&lt;br /&gt;
== vdf ==&lt;br /&gt;
 vdf &amp;lt;veid&amp;gt; &lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; df -h&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vdff ==&lt;br /&gt;
 vdff&lt;br /&gt;
runs a (condensed) vdf for all ve’s in your pwd (must be run from /vz/privateN)&lt;br /&gt;
&lt;br /&gt;
== mvbackups ==&lt;br /&gt;
 mvbackups &amp;lt;veid&amp;gt; &amp;lt;target_machine&amp;gt; (virt1) &amp;lt;target_dir&amp;gt; (vz1)&lt;br /&gt;
moves backups from one location to another on the backup server, gives you the option to remove entries from the current backup.config, and prints a simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== checkquota ==&lt;br /&gt;
 checkquota&lt;br /&gt;
for all the ve’s in the cwd (run from /vz/private, /vz1/private, etc) reports what vz quota says they’re using and what the actual usage is (as reported by du)&lt;br /&gt;
&lt;br /&gt;
== clearquota ==&lt;br /&gt;
 clearquota &amp;lt;veid&amp;gt;&lt;br /&gt;
Recalculates a ve’s quota, prints out the usage before and after. The equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;vdf &amp;lt;veid&amp;gt;; v stop &amp;lt;veid&amp;gt;; vzquota drop &amp;lt;veid&amp;gt;; v start &amp;lt;veid&amp;gt;; vdf &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs&lt;br /&gt;
Sometimes the servers have a large number of processes stuck in the D state – this script shows (every 3 secs) which VE’s have D procs, which procs&lt;br /&gt;
are stuck, and a running average of the top “offenders”&lt;br /&gt;
&lt;br /&gt;
== vzstat ==&lt;br /&gt;
 vstat&lt;br /&gt;
Sort of like top for VZ. Sort VEs by CPU usage by pressing the &#039;o&#039; and then &#039;c&#039; keys&lt;br /&gt;
&lt;br /&gt;
== stopvirt ==&lt;br /&gt;
 stopvirt&lt;br /&gt;
will stop VEs as fast as it can, 6 at a time. May not exit when complete so you should watch [[#vzstat|vzstat]] in another window.&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2229</id>
		<title>VPS Management</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2229"/>
		<updated>2020-04-30T20:28:58Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Ganeti Install a Linux VPS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Common Problems =&lt;br /&gt;
== Login to any machine without a password ==&lt;br /&gt;
&lt;br /&gt;
This is possible via the use of ssh keys. The process is thus:&lt;br /&gt;
&lt;br /&gt;
1. place the public key for your user (root@mail) in the /root/.ssh/authorized_keys file on the server you wish to login to&lt;br /&gt;
 cat /root/.ssh/id_dsa.pub&lt;br /&gt;
(paste that into authorized_keys on the target server). If the file doesn&#039;t exist, create it.&lt;br /&gt;
&lt;br /&gt;
2. enable root login (usually only applies to FreeBSD). Edit the /etc/ssh/sshd_config on the target server and change:&lt;br /&gt;
&amp;lt;tt&amp;gt;#PermitRootLogin no&amp;lt;/tt&amp;gt;&lt;br /&gt;
to&lt;br /&gt;
&amp;lt;tt&amp;gt;PermitRootLogin yes&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Restart the sshd on the target machine. First, find the sshd process: &lt;br /&gt;
 jailps &amp;lt;hostname&amp;gt; | grep sshd &lt;br /&gt;
or &lt;br /&gt;
 vp &amp;lt;VEID&amp;gt; | grep sshd&lt;br /&gt;
&lt;br /&gt;
Look for the process resembling:&lt;br /&gt;
 root     17296  0.0  0.0  5280 1036 ?        Ss    2011   4:27 /usr/sbin/sshd &lt;br /&gt;
(this is the sshd)&lt;br /&gt;
&lt;br /&gt;
Not:&lt;br /&gt;
 root      6270  0.5  0.0  6808 2536 ?        Ss   14:33   0:00 sshd: root [priv]&lt;br /&gt;
(this is an sshd child- someone already ssh&#039;d in as root)&lt;br /&gt;
&lt;br /&gt;
Restart the sshd: &lt;br /&gt;
 kill -1 &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex:&lt;br /&gt;
 kill -1 17296&lt;br /&gt;
&lt;br /&gt;
You may now ssh in.&lt;br /&gt;
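As an aside, kill -1 and kill -HUP are interchangeable – signal 1 is SIGHUP, which kill -l confirms:&lt;br /&gt;

```shell
# map signal number 1 to its name; since signal 1 is HUP,
# kill -1 PID is identical to kill -HUP PID
kill -l 1    # prints: HUP
```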
&lt;br /&gt;
Once you&#039;re done, IF you enabled root login, you should repeat steps 2 and 3 to disable root logins.&lt;br /&gt;
&lt;br /&gt;
== Letting someone in who has locked themselves out (killed sshd, lost pwd) ==&lt;br /&gt;
&lt;br /&gt;
There are two ways people frequently lock themselves out - either they forget a password, or they kill off sshd somehow.&lt;br /&gt;
&lt;br /&gt;
These are actually both fairly easy to solve.  First, let&#039;s say someone kills off their sshd, or somehow mangles /etc/ssh/sshd_config such that it no longer lets them in.&lt;br /&gt;
&lt;br /&gt;
Their email may be very short, or it may have all sorts of details about how you should fix sshd_config to let them in ... just ignore all of this. They can fix their own mangled sshd.  Fixing this is very simple.  First, edit the /etc/inetd.conf on their system and uncomment the telnet line:&lt;br /&gt;
&lt;br /&gt;
 telnet stream  tcp     nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
 #telnet stream  tcp6    nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
&lt;br /&gt;
(just leave the tcp6 version of telnet commented)&lt;br /&gt;
&lt;br /&gt;
Then, use jailps to list the processes on their system, and find their inetd process.  Then simply:&lt;br /&gt;
&lt;br /&gt;
 kill -HUP (pid)&lt;br /&gt;
&lt;br /&gt;
where (pid) is the PID of their inetd process.  Now they have telnet running on their system and they can log in and do whatever they need to do.&lt;br /&gt;
&lt;br /&gt;
The only complications that could occur are:&lt;br /&gt;
&lt;br /&gt;
a) their firewall config on our firewall has port 23 blocked, in which case you will need to open that - will be covered in a different lesson.&lt;br /&gt;
&lt;br /&gt;
b) they are not running inetd, so you can&#039;t HUP it.  If this happens, edit their /etc/rc.conf, add the inetd_enable=&amp;quot;YES&amp;quot; line, and then kill&lt;br /&gt;
their jail with /tmp/jailkill.pl - then restart their jail with the jail line from their quad/safe file.  Easy.&lt;br /&gt;
&lt;br /&gt;
If they have forgotten a password,&lt;br /&gt;
&lt;br /&gt;
On 6.x+ you can reset their password with:&lt;br /&gt;
 jexec &amp;lt;jailID from jls&amp;gt; passwd root&lt;br /&gt;
&lt;br /&gt;
Note: the default password for 6.x jails is 8ico2987, for 4.x it is p455agfa&lt;br /&gt;
&lt;br /&gt;
On 4.x, you need to cd to their etc directory&lt;br /&gt;
... for instance:&lt;br /&gt;
&lt;br /&gt;
 cd /mnt/data2/198.78.65.136-col00261-DIR/etc&lt;br /&gt;
&lt;br /&gt;
and run:&lt;br /&gt;
&lt;br /&gt;
 vipw -d .&lt;br /&gt;
&lt;br /&gt;
Then paste in these two lines (there’s a paste with these):&lt;br /&gt;
&lt;br /&gt;
 root:$1$krszPxhk$xkCepSnz3mIikT3vCtJCt0:0:0::0:0:Charlie &amp;amp;:/root:/bin/csh&lt;br /&gt;
 user:$1$Mx9p5Npk$QdMU6c8YQqp2FW2M3irEh/:1001:1001::0:0:User &amp;amp;:/home/user:/bin/sh&lt;br /&gt;
&lt;br /&gt;
overwriting the lines they already have for &amp;quot;user&amp;quot; and &amp;quot;root&amp;quot; - then just tell them that both user and root have been reset to the default password of p455agfa.&lt;br /&gt;
&lt;br /&gt;
For Linux, just run passwd inside the shell, or: &lt;br /&gt;
 vzctl set &amp;lt;veid&amp;gt; --userpasswd root:p455agfa --save&lt;br /&gt;
&lt;br /&gt;
Starting in 2009 we began giving out randomized default passwords for FreeBSD and Linux. The password is stored with each system in Mgmt. In the event of a reset, you should look up that password, reset to it, and refer the customer to the original password from their welcome email – this way we don’t have to send the password again via email (in clear text).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== sendmail can’t be contacted from ext ip (only locally) ==&lt;br /&gt;
&lt;br /&gt;
By default redhat puts this line in sendmail.mc:&lt;br /&gt;
&lt;br /&gt;
 DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
which makes it only answer on localhost.  Comment it out like:&lt;br /&gt;
&lt;br /&gt;
 dnl DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
and then rebuild sendmail.cf with:&lt;br /&gt;
&lt;br /&gt;
 m4 /etc/mail/sendmail.mc &amp;gt; /etc/sendmail.cf&lt;br /&gt;
&lt;br /&gt;
== virt doesn’t properly let go of ve’s ip(s) when moved to another system ==&lt;br /&gt;
&lt;br /&gt;
On virtuozzo 2.6 systems, it&#039;s been observed that when moving ips from one virt to another that sometimes the routing table will not get updated to reflect the removal of the ip addresses.&lt;br /&gt;
&lt;br /&gt;
A recent example was a customer that was moving to a new ve on a new virt and the ip addresses were traded between the two ve&#039;s.  After the trade the two systems were not able to talk to each other.  When looking at the routing table for the old system all the ip addresses were still in the routing table as being local, like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;netstat -rn | grep 69.55.225.149&lt;br /&gt;
69.55.225.149   0.0.0.0         255.255.255.255 UH       40 0          0 venet0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was preventing traffic to the other system from being routed properly.&lt;br /&gt;
The solution is to manually delete the route:&lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.225.149 gw 0.0.0.0&lt;br /&gt;
&lt;br /&gt;
Supposedly, this was fixed in 2.6.1&lt;br /&gt;
&lt;br /&gt;
== sshd on FreeBSD 6.2 segfaults ==&lt;br /&gt;
&lt;br /&gt;
First, try to reinstall ssh:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/secure&lt;br /&gt;
cd lib/libssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.sbin/sshd&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.bin/ssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Failing that, find the library that’s messed up:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;ldd /usr/sbin/sshd&lt;br /&gt;
         libssh.so.3 =&amp;gt; /usr/lib/libssh.so.3 (0x280a3000) &lt;br /&gt;
         libutil.so.5 =&amp;gt; /lib/libutil.so.5 (0x280d8000) &lt;br /&gt;
         libz.so.3 =&amp;gt; /lib/libz.so.3 (0x280e4000) &lt;br /&gt;
         libwrap.so.4 =&amp;gt; /usr/lib/libwrap.so.4 (0x280f5000) &lt;br /&gt;
         libpam.so.3 =&amp;gt; /usr/lib/libpam.so.3 (0x280fc000) &lt;br /&gt;
         libbsm.so.1 =&amp;gt; /usr/lib/libbsm.so.1 (0x28103000) &lt;br /&gt;
         libgssapi.so.8 =&amp;gt; /usr/lib/libgssapi.so.8 (0x28112000) &lt;br /&gt;
         libkrb5.so.8 =&amp;gt; /usr/lib/libkrb5.so.8 (0x28120000) &lt;br /&gt;
         libasn1.so.8 =&amp;gt; /usr/lib/libasn1.so.8 (0x28154000) &lt;br /&gt;
         libcom_err.so.3 =&amp;gt; /usr/lib/libcom_err.so.3 (0x28175000) &lt;br /&gt;
         libroken.so.8 =&amp;gt; /usr/lib/libroken.so.8 (0x28177000) &lt;br /&gt;
         libcrypto.so.4 =&amp;gt; /lib/libcrypto.so.4 (0x28183000) &lt;br /&gt;
         libcrypt.so.3 =&amp;gt; /lib/libcrypt.so.3 (0x28276000) &lt;br /&gt;
         libc.so.6 =&amp;gt; /lib/libc.so.6 (0x2828e000) &lt;br /&gt;
         libmd.so.3 =&amp;gt; /lib/libmd.so.3 (0x28373000)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
md5 them and compare to other jail hosts, or to jails running on this host&lt;br /&gt;
&lt;br /&gt;
for libcrypto, reinstall:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/crypto&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Moving a server to a different account in the management system ==&lt;br /&gt;
&lt;br /&gt;
This is a tough one. You gotta touch a bunch of tables in the mysql database.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  systems&lt;br /&gt;
  subs&lt;br /&gt;
  ipmap&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
basically, get the sysid and go off that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
update systems set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update subs set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update ipmap set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You then need to rename the server disk space to change the old ID to the new one.&lt;br /&gt;
&lt;br /&gt;
For a jail -&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stopjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;     69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;-DIR 69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;-DIR&lt;br /&gt;
vi /usr/local/jail/bin/quad1       [Update the ID in the quad file]&lt;br /&gt;
startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a virt -&lt;br /&gt;
You will have to &lt;br /&gt;
* stop the virt &lt;br /&gt;
* rename the directories &lt;br /&gt;
* modify the config file /vzconf/&amp;lt;veid&amp;gt;.conf &lt;br /&gt;
* rename the virt conf file &lt;br /&gt;
* restart the new virt.&lt;br /&gt;
&lt;br /&gt;
That ought to do it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Re-enable a customer account in the management system ==&lt;br /&gt;
&lt;br /&gt;
For example, to reinstate col03438, in mysql:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
show tables ;&lt;br /&gt;
select * from customers where cid=&#039;col03438&#039; ;&lt;br /&gt;
update customers set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from  systems where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set cancel_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from subs where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ganeti VPS =&lt;br /&gt;
&lt;br /&gt;
This is how to create a VPS on the Lamphost ganeti cluster.&lt;br /&gt;
I have created scripts for creating a FreeBSD VPS and one for&lt;br /&gt;
creating a Linux VPS.&lt;br /&gt;
&lt;br /&gt;
== Ganeti Install a Linux VPS ==&lt;br /&gt;
&lt;br /&gt;
To create a Linux VPS, use:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/new &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
( for example /root/new vps.example.com 30 2 69.55.238.10 centos-8 )&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to customize the install by adding a &amp;quot;jci&amp;quot; account and a customer&lt;br /&gt;
account (col0XXXX) as well.  Make sure the customer acct (col0XXXX) is&lt;br /&gt;
in a group to allow sudo.&lt;br /&gt;
&lt;br /&gt;
== Ganeti Install a FreeBSD VPS ==&lt;br /&gt;
&lt;br /&gt;
To create a FreeBSD VPS, we have to do a full install.  This will &lt;br /&gt;
create a new VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/newfreebsd12 &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will then have to do the install via VNC.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You need to customize the install by adding a &amp;quot;jci&amp;quot; account and a customer&lt;br /&gt;
account (col0XXXX) as well.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
You will need to install sudo.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
pkg install sudo    # install sudo package&lt;br /&gt;
visudo              # to add wheel account to sudoers&lt;br /&gt;
vi /etc/group       # add jci and user acct to wheel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You will also have to set up a console in /etc/ttys.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vi /etc/ttys&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Ganeti restart a VPS ==&lt;br /&gt;
&lt;br /&gt;
To restart a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance reboot &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Ganeti shutdown a VPS ==&lt;br /&gt;
&lt;br /&gt;
To shutdown a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance shutdown &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To permanently delete a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance remove &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Ganeti Re-Install a VPS ==&lt;br /&gt;
&lt;br /&gt;
Re-installing the OS will destroy all data on the VPS.&lt;br /&gt;
Login to the Ganeti master node (currently gn1).&lt;br /&gt;
&lt;br /&gt;
To reinstall the OS on a VPS the VPS must be shutdown.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance shutdown &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Do the reinstall.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance reinstall &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance startup &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to do the customization for the VPS.  This is&lt;br /&gt;
basically the same as the Dedicated server customization.&lt;br /&gt;
&lt;br /&gt;
 https://69.55.229.8/index.php/New_Signups#Dedicated_Server_Install_Checklist&lt;br /&gt;
&lt;br /&gt;
= FreeBSD 10.1 VPS (jail5 and jail6) =&lt;br /&gt;
&lt;br /&gt;
This is information on how to manage the FreeBSD 10.1 VPS servers on Jail5.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 virtmake IP[,IP] &amp;lt;CID&amp;gt; hostname ipfw# email {FSP|FB|FS|FM|FP} [size in GB]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
For now, the last 2 digits of the CID must be unique.&lt;br /&gt;
&lt;br /&gt;
Must do a &amp;quot;vm start &amp;lt;CID&amp;gt;&amp;quot; after creating a VPS.&lt;br /&gt;
&lt;br /&gt;
== Starting VPS ==&lt;br /&gt;
To start all vms on the system&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To start a single VPS &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== List running VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm status&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Attach to a user VPS console ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm attach &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To detach from a console, type tilde control-d (~ ctl-d).&lt;br /&gt;
&lt;br /&gt;
== Stop a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Restart a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm restart &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Modify a VPS ==&lt;br /&gt;
&lt;br /&gt;
To modify a VPS config, stop the VPS&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Mount the VPS data file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm mount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
cd /data/vm/private/&amp;lt;CID&amp;gt;/mount&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Modify the user files as necessary.&lt;br /&gt;
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm umount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Notes on a new FBSD 10.x VPS (jail6) ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#! /bin/sh&lt;br /&gt;
#&lt;br /&gt;
#  new  &amp;lt;colo#&amp;gt;  &amp;lt;disk&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
memory=1024&lt;br /&gt;
&lt;br /&gt;
x=&amp;quot;$1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
disk=&amp;quot;$2&amp;quot;&lt;br /&gt;
if [ &amp;quot;y&amp;quot; = &amp;quot;y$disk&amp;quot; ] ; then&lt;br /&gt;
    disk=60&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
disk2=`expr $disk - 4`&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vm image provision 04dcdf96-d843-11e6-b131-001e67fcc64f col0$x&lt;br /&gt;
# vm configure col0$x&lt;br /&gt;
sed -e &amp;quot;s/memory=256M/memory=${memory}M/&amp;quot; /vms/col01114/col01114.conf &amp;gt;/vms/col0$x/col0$x.conf&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
# vm start col0$x&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
exit&lt;br /&gt;
 &lt;br /&gt;
vm console col0$x&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
# to increase the customer disk&lt;br /&gt;
#&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
&lt;br /&gt;
##  on the customer VPS&lt;br /&gt;
&lt;br /&gt;
gpart recover vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapoff /dev/vtbd0p3&lt;br /&gt;
gpart delete -i 3 vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
sysctl kern.geom.debugflags=16&lt;br /&gt;
gpart resize -i 2 -a 4k -s $disk2\G vtbd0&lt;br /&gt;
&lt;br /&gt;
growfs /dev/vtbd0p2&lt;br /&gt;
&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
gpart add -t freebsd-swap -a 4k vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapon /dev/vtbd0p3&lt;br /&gt;
&lt;br /&gt;
df -h&lt;br /&gt;
reboot&lt;br /&gt;
&lt;br /&gt;
vi /etc/rc.conf&lt;br /&gt;
  hostname&lt;br /&gt;
  IP Addr&lt;br /&gt;
passwd root&lt;br /&gt;
passwd user&lt;br /&gt;
clear history&lt;br /&gt;
  cp /dev/null /var/log/utx.log&lt;br /&gt;
  cp /dev/null ~root/.history&lt;br /&gt;
  history -c&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= FreeBSD Jails =&lt;br /&gt;
&lt;br /&gt;
== Starting jails: Quad/Safe Files ==&lt;br /&gt;
&lt;br /&gt;
FreeBSD customer systems do not start up automatically at boot time. When one of our freebsd machines boots up, it does nothing else. To start jails, we put the commands to start each jail into shell scripts and run them. Jail startup needs to be actively monitored, which is why we don’t just run the scripts automatically. More on monitoring later.&lt;br /&gt;
&lt;br /&gt;
NOTE: &amp;gt;=7.x we have moved to 1 quad file: &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. Startups are not done by running each quad, but rather [[#startalljails|startalljails]] which relies on the contents of &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. The specifics of this are lower in this article. What follows here applies for pre 7.x systems.&lt;br /&gt;
&lt;br /&gt;
There are eight files in &amp;lt;tt&amp;gt;/usr/local/jail/rc.d&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail3# ls /usr/local/jail/rc.d/&lt;br /&gt;
quad1   quad2   quad3   quad4   safe1   safe2   safe3   safe4&lt;br /&gt;
jail3#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
four quad files and four safe files.&lt;br /&gt;
&lt;br /&gt;
Each file contains a roughly equal number of system startup blocks (the total number of jails divided by 4)&lt;br /&gt;
 &lt;br /&gt;
The reason for this is, if we make one large script to startup all the systems at boot time, it will take too long - the first system in the script will start up right after system boot, which is great, but the last system may not start for another 20 minutes.&lt;br /&gt;
&lt;br /&gt;
Since there is no way to parallelize this during the startup procedure, we simply open four terminals (in screen window 9) and run each script, one in each terminal. This way they all run simultaneously, and the very last system in each startup script gets started in 1/4th the time it would take if there were one large file.&lt;br /&gt;
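The same run-four-at-once idea can be sketched non-interactively (a toy demo with dummy quad files under /tmp – the real quads live in /usr/local/jail/rc.d and are run in screen windows so the fscks can be watched):&lt;br /&gt;

```shell
# toy demo: four stand-in "quad" scripts run concurrently via xargs -P
rm -rf /tmp/quad_demo
mkdir -p /tmp/quad_demo
cd /tmp/quad_demo
for q in 1 2 3 4; do printf 'echo quad%s done\n' "$q" > quad$q; done
# xargs -P 4 runs up to four jobs at once, one shell per quad file
printf '1\n2\n3\n4\n' | xargs -P 4 -I N sh -c 'sh quadN >> results.log'
sort results.log
```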
&lt;br /&gt;
The files are generally organized so that quad/safe 1&amp;amp;2 have only jails from disk 1, and quad/safe 3&amp;amp;4 have jails from disk 2. This helps ensure that only 2 fscks on any disk are going on at once. Further, they are balanced so that all quad/safe’s finish executing around the same time. We do this by making sure each quad/safe has a similar number of jails and represents a similar number of inodes (see js).&lt;br /&gt;
&lt;br /&gt;
The other, very important reason we do it this way, and this is the reason there are quad files and safe files, is that in the event of a system crash, every single vn-backed filesystem that was mounted at the time of system crash needs to be fsck&#039;d.  However, fsck&#039;ing takes time, so if we shut the system down gracefully, we don&#039;t want to fsck.&lt;br /&gt;
&lt;br /&gt;
Therefore, we have two sets of scripts - the four quad scripts are identical to the four safe scripts except for the fact that the quad scripts contain fsck commands for each filesystem.&lt;br /&gt;
&lt;br /&gt;
So, if you shut a system down gracefully, start four terminals and run safe1 in window one, and safe2 in window 2, and so on.&lt;br /&gt;
 &lt;br /&gt;
If you crash, start four terminals (or go to screen window 9) and run quad1 in window one, and quad2 in window 2, and so on.&lt;br /&gt;
&lt;br /&gt;
Here is a snip of (a 4.x version) quad2 from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
fsck -y /dev/vn16&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#fsck -y /dev/vn28&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
fsck -y /dev/vn22&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#fsck -y /dev/vn15&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, two of the systems specified are commented out - presumably those customers cancelled, or were moved to new servers.&lt;br /&gt;
&lt;br /&gt;
Note that the vnconfig line is the simpler form of the command, not the longer one that was used when the filesystem was first configured. All that is done is: vnconfig the filesystem, then fsck it, then mount it. The fourth command is the `jail` command used to start the system – but that will be covered later.&lt;br /&gt;
&lt;br /&gt;
Here is the safe2 file from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, it is exactly the same, but it does not have the fsck lines.&lt;br /&gt;
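The relationship can be sketched in shell: a safe file is just the quad file with the fsck lines removed (presumably what the buildsafe tool mentioned later automates; the sample quad content here is made up for illustration).

```shell
# Illustrative sketch: a safe file is the quad file minus its fsck lines.
# The sample quad content below is fabricated for demonstration.
cat > quad.sample <<'EOF'
vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820
fsck -y /dev/vn16
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc
EOF

# Drop lines that start with fsck; commented-out blocks pass through untouched.
grep -v '^fsck' quad.sample > safe.sample
cat safe.sample
```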
&lt;br /&gt;
Take a look at the last entry - note that the file is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
&lt;br /&gt;
and the mount point is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
&lt;br /&gt;
This is the general format on all the FreeBSD systems.  The file is always named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber&lt;br /&gt;
&lt;br /&gt;
and the directory is named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber-DIR&lt;br /&gt;
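In shell terms, given an IP and customer number the two names are derived mechanically (the /mnt/data2 prefix is just the example from above):

```shell
# Sketch of the FreeBSD naming convention: the backing file is
# IP-custnumber, and its mount point is the same name plus -DIR.
base="/mnt/data2"
ip="69.55.238.5"
cust="col00106"

vnfile="$base/$ip-$cust"
mountdir="$base/$ip-$cust-DIR"

echo "file:  $vnfile"
echo "mount: $mountdir"
```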
&lt;br /&gt;
If you run safe when an fsck is needed, the mount will fail and jail will fail:&lt;br /&gt;
&lt;br /&gt;
 # mount /dev/vn1c /mnt/data2/jails/65.248.2.131-ns1.kozubik.com-DIR&lt;br /&gt;
 mount: /dev/vn1c: Operation not permitted&lt;br /&gt;
&lt;br /&gt;
No reboot is needed; just run the quad script.&lt;br /&gt;
&lt;br /&gt;
Starting with 6.x jails, we added block delimiters to the quad/safe files; each block looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: nuie.solaris.mu&#039;&lt;br /&gt;
fsck -y /dev/concat/v30v31a&lt;br /&gt;
mount /dev/concat/v30v31a /mnt/data1/69.55.228.218-col01441-DIR&lt;br /&gt;
mount_devfs devfs /mnt/data1/69.55.228.218-col01441-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.228.218-col01441-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.228.218-col01441-DIR nuie.solaris.mu 69.55.228.218 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: nuie.solaris.mu&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These are more than just informative when running quad/safes: the echo lines MUST be present for certain tools to work properly. So it&#039;s important that any update to the hostname also be made on the 2 echo lines. For example, if you try to startjail a jail whose hostname is on the jail line but not on the echo lines, the command will return with host not found.&lt;br /&gt;
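A hedged sketch of why the markers matter: a tool can pull out one jail's start block only by matching the hostname on the echo lines, e.g. with sed. The file name and contents below are illustrative, not the real startjail.

```shell
# Illustrative only: how a tool could locate one jail's start block in a
# quad file via the '## begin ##'/'## end ##' echo delimiters. If the
# hostname were updated on the jail line but not on the echo lines,
# this lookup would find nothing ("host not found").
cat > quad1.sample <<'EOF'
echo '## begin ##: nuie.solaris.mu'
fsck -y /dev/concat/v30v31a
mount /dev/concat/v30v31a /mnt/data1/69.55.228.218-col01441-DIR
jail /mnt/data1/69.55.228.218-col01441-DIR nuie.solaris.mu 69.55.228.218 /bin/sh /etc/rc
echo '## end ##: nuie.solaris.mu'
EOF

host="nuie.solaris.mu"
sed -n "/## begin ##: $host/,/## end ##: $host/p" quad1.sample > block.out
cat block.out
```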
&lt;br /&gt;
=== FreeBSD 7.x+ notes ===&lt;br /&gt;
&lt;br /&gt;
Starting with the release of FreeBSD 7.x, we are doing jail startups in a slightly different way. First, there is only 1 file: &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/quad1&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are no other quads or corresponding safe files. The reason for this is twofold: 1. we can pass -C to fsck, which tells it to skip the fsck if the fs is clean (no more need for safe files); 2. we have a new startup script which can be launched multiple times, running in parallel to start jails, where quad1 is the master jail file. &lt;br /&gt;
Quad1 could still be run as a shell script, but it would take a very long time to run completely, so it&#039;s not advisable; alternatively, you could break it down into smaller chunks (like quad1, quad2, quad3, etc.)&lt;br /&gt;
&lt;br /&gt;
Here is a snip of (a 7.x version) quad1 from jail2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: projects.tw.com&#039;&lt;br /&gt;
mdconfig -a -t vnode -f /mnt/data1/69.55.230.46-col01213 -u 50&lt;br /&gt;
fsck -Cy /dev/md50c&lt;br /&gt;
mount /dev/md50c /mnt/data1/69.55.230.46-col01213-DIR&lt;br /&gt;
mount -t devfs devfs /mnt/data1/69.55.230.46-col01213-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.230.46-col01213-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.230.46-col01213-DIR projects.tw.com 69.55.230.46 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: projects.tw.com&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cancelled jails are no longer commented out and stored in quad1, rather they’re moved to &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/deprecated&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
To start these jails, start the 4 ssh sessions as you would for a normal crash, then instead of running quad1-4, run startalljails in each window. IMPORTANT: before running startalljails, make sure you have run preboot once, as it clears out all the lockfiles and enables startalljails to work properly.&lt;br /&gt;
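The coordination can be sketched as follows. This is not the real startalljails; it just illustrates how several instances of one script can share a master jail list without starting any jail twice, using lockfiles of the kind preboot clears (hostnames and lock path are made up).

```shell
# Illustrative sketch of parallel-safe jail startup: each instance claims
# a jail by atomically creating its lock directory; whoever wins runs the
# start block, everyone else skips it. preboot's job would be to remove
# stale locks left over from a previous run.
lockdir="./jailstart-locks"
mkdir -p "$lockdir"

for host in alpha.example.net beta.example.net gamma.example.net; do
    # mkdir is atomic, so only one instance wins the claim for each jail
    if mkdir "$lockdir/$host" 2>/dev/null; then
        echo "starting $host"   # the real script would run this jail's quad1 block
    else
        echo "skipping $host (already claimed)"
    fi
done
```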
&lt;br /&gt;
== Problems with the quad/safe files ==&lt;br /&gt;
&lt;br /&gt;
When you run the quad/safe files, there are two problems that can occur - either a particular system will hang during initialization, OR a system will spit out output to the screen, impeding your ability to do anything.  Or both.&lt;br /&gt;
&lt;br /&gt;
First off, when you start a jail, you see output like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;Skipping disk checks ...&lt;br /&gt;
adjkerntz[25285]: sysctl(put_wallclock): Operation not permitted&lt;br /&gt;
Doing initial network setup:.&lt;br /&gt;
ifconfig: ioctl (SIOCDIFADDR): permission denied&lt;br /&gt;
lo0: flags=8049&amp;lt;UP,LOOPBACK,RUNNING,MULTICAST&amp;gt; mtu 16384&lt;br /&gt;
Additional routing options: TCP keepalive=YESsysctl:&lt;br /&gt;
net.inet.tcp.always_keepalive: Operation not permitted.&lt;br /&gt;
Routing daemons:.&lt;br /&gt;
Additional daemons: syslogd.&lt;br /&gt;
Doing additional network setup:.&lt;br /&gt;
Starting final network daemons:.&lt;br /&gt;
ELF ldconfig path: /usr/lib /usr/lib/compat /usr/X11R6/lib /usr/local/lib&lt;br /&gt;
a.out ldconfig path: /usr/lib/aout /usr/lib/compat/aout /usr/X11R6/lib/aout&lt;br /&gt;
Starting standard daemons: inetd cron sshd sendmail sendmail-clientmqueue.&lt;br /&gt;
Initial rc.i386 initialization:.&lt;br /&gt;
Configuring syscons: blanktime.&lt;br /&gt;
Additional ABI support:.&lt;br /&gt;
Local package initialization:.&lt;br /&gt;
Additional TCP options:.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, let&#039;s look at this line, near the end:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization:.&lt;br /&gt;
&lt;br /&gt;
This is where a list of daemons that are set to start at boot time will show up.  You might see something like:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache sendmail sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
Or something like this:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: postgres postfix apache&lt;br /&gt;
&lt;br /&gt;
The problem is that many systems (about 4-5 per machine) will hang on that line.  Basically it will get partway through the total daemons to be started:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache&lt;br /&gt;
&lt;br /&gt;
and will just sit there.  Forever.&lt;br /&gt;
&lt;br /&gt;
Fortunately, pressing ctrl-c will break out of it.  Not only will it break out of it, but it will also continue on that same line and start the other daemons:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache ^c sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
and then continue on to finish the startup, and then move to the next system to be started.&lt;br /&gt;
&lt;br /&gt;
So what does this mean?  It means that if a machine crashes, and you start four screen-windows to run four quads or four safes, you need to periodically cycle between them and see if any systems are stuck at that point, causing their quad/safe file to hang.  A good rule of thumb is: if you see a system at that point in the startup, give it another 100 seconds - if it is still at the exact same spot, hit ctrl-c. It&#039;s also a good idea to go back into the quad file (just before the first command in the jail startup block) and note that this jail tends to need a control-c or more time, as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;### NOTE ### slow sendmail&#039;&lt;br /&gt;
echo &#039;### NOTE ###: ^C @ Starting sendmail.&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NEVER&#039;&#039;&#039; hit ctrl-c repeatedly if you don&#039;t get an immediate response - that will cause the following jail’s startup commands to be aborted.&lt;br /&gt;
&lt;br /&gt;
A second problem that can occur is that a jail - maybe the first one in that particular quad/safe, maybe the last one, or maybe one in the middle, will start spitting out status or error messages from one of its init scripts.  This is not a problem - basically, hit enter a few times and see if you get a prompt - if you do get a prompt, that means that the quad/safe script has already completed.  Therefore it is safe to log out (and log out of the user that you su&#039;d from) and then log back in (if necessary).&lt;br /&gt;
&lt;br /&gt;
The tricky thing is, if a system in the middle starts flooding with messages, and you hit enter a few times and don&#039;t get a prompt.  Are you not getting a prompt because some subsequent system is hanging at the initialization, as we discussed above?  Or are you not getting a prompt because that quad file is currently running an fsck?  Usually you can tell by scrolling back in screen&#039;s history to see what it was doing before you started getting the messages.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t get clues from history, you have to use your judgement - instead of giving it 100 seconds to respond, perhaps give it 2-3 mins ... if you still get no response (no prompt) when you hit enter, hit ctrl-c.  However, be aware that you might still be hitting ctrl-c in the middle of an fsck.  This means you will get an error like &amp;quot;filesystem still marked dirty&amp;quot;, the vnconfig for it will fail, so will the jail command, and the next system in the quad file will then begin starting up.&lt;br /&gt;
&lt;br /&gt;
If this happens, just wait until all the quad files have finished, and start that system manually.&lt;br /&gt;
&lt;br /&gt;
If things really get weird - a screen flooded with errors, no prompt, and ctrl-c does nothing - then eventually (give it ten mins or so) just kill that window with ctrl-p, then k, then log in again, manually check which systems are now running and which aren&#039;t, and manually start up any that are not.&lt;br /&gt;
&lt;br /&gt;
Don&#039;t EVER risk running a particular quad/safe file a second time.&lt;br /&gt;
If the quad/safe script gets executed twice, reboot the machine immediately.&lt;br /&gt;
&lt;br /&gt;
So, for all the above reasons, anytime a machine crashes and you run all the quads or all the safes, &#039;&#039;&#039;always&#039;&#039;&#039; check every jail afterwards to make sure it is running - even if you have no hangs or complications at all.&lt;br /&gt;
Run this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailpsall|jailpsall]]&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Note: [[#postboot|postboot]] also populates ipfw counts, so it &#039;&#039;&#039;should not be run multiple times&#039;&#039;&#039;; use &amp;lt;tt&amp;gt;jailpsall&amp;lt;/tt&amp;gt; for subsequent extensive ps&#039;ing.&lt;br /&gt;
&lt;br /&gt;
And make sure they all show as running.  If one does not show as running, check its /etc/rc.conf file to see if maybe it is using a different hostname before starting it manually.&lt;br /&gt;
&lt;br /&gt;
One thing we have implemented to alleviate these startup hangs and noisy jails is to put jail start blocks that are slow or prone to hanging at the bottom of the safe/quad file. Further, for each such jail we note in each quad/safe, just before the start block, something like:&lt;br /&gt;
&lt;br /&gt;
 echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
&lt;br /&gt;
That way we’ll be prepared to ^C when we see that message appear during the quad/safe startup process. If you observe a new, undocumented hang, &#039;&#039;&#039;after&#039;&#039;&#039; the quad/safe has finished, place a line similar to the above in the quad file, move the jail start block to the end of the file, then run [[#buildsafe|buildsafe]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 4.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which partition, IP to put it on, find an unused vn, and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vnX hostname shorthost quad/safe# ipfw# email [gb disk, default 4]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail14# /tmp/jailmake 69.55.226.152 /mnt/data2/69.55.226.152-col00182 vn23 negev.cerastes.org negev 4 182 cerastes@cerastes.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 6.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which gvinum vols are free (and which mnt (disk) each gvinum goes with/to), IP to put it on and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vN[,vN] hostname shorthost quad/safe# ipfw# email&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail19# jailmake 69.55.236.17 /mnt/data1/69.55.236.17-col01574 v10 uncle-leo.commadev.com uncle-leo 1 1574 lee@commadev.com,lee@gmail.com&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Changing an IP for a FreeBSD VPS ==&lt;br /&gt;
&lt;br /&gt;
*stopjail (hostname)&lt;br /&gt;
*on systems using vnfile or mdfile: rename vnfile/mdfile, using new ip&lt;br /&gt;
*rename dir using new ip&lt;br /&gt;
*Edit quad (make sure to update all lines: directory, vn/mdfile)&lt;br /&gt;
*buildsafe (not necessary on systems having only quad1)&lt;br /&gt;
*if you&#039;re adding an ip not already on the host machine: &amp;lt;tt&amp;gt;ipadd x.x.x.x&amp;lt;/tt&amp;gt;&lt;br /&gt;
*startjail (hostname)&lt;br /&gt;
*if backups: &amp;lt;tt&amp;gt;mvbackups&amp;lt;/tt&amp;gt; ...&lt;br /&gt;
*Edit dir/ip in mgmt&lt;br /&gt;
*if any rules, update any firewall rules to use the new ip (be careful to make sure there aren&#039;t multiple rules per ipfw# - search by colo#)&lt;br /&gt;
*(if customer asks and has domains) update their domains on ns1c&lt;br /&gt;
*optional: update ipfw counters on the host server&lt;br /&gt;
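The rename steps in the middle of that list amount to the following (sandboxed paths stand in for /mnt/dataN and the actual vn/md file on a real host):

```shell
# Sketch of renaming the backing file and mount directory to a new IP.
# Paths are local stand-ins for /mnt/dataN on a real jail host.
old_ip="69.55.228.7"; new_ip="69.55.230.9"; cust="col00820"; base="./data2"

mkdir -p "$base/$old_ip-$cust-DIR"   # stand-in mount point
touch "$base/$old_ip-$cust"          # stand-in vn/md backing file

mv "$base/$old_ip-$cust"     "$base/$new_ip-$cust"
mv "$base/$old_ip-$cust-DIR" "$base/$new_ip-$cust-DIR"

ls "$base"
```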
&lt;br /&gt;
&lt;br /&gt;
== Rename a gconcat vol ==&lt;br /&gt;
&lt;br /&gt;
*stop jail, unmount everything&lt;br /&gt;
*gconcat stop vol&lt;br /&gt;
*gconcat clear /dev/gvinum/vN&lt;br /&gt;
*gconcat clear /dev/gvinum/vN+1&lt;br /&gt;
*gconcat label -v newname /dev/gvinum/vN /dev/gvinum/vN+1&lt;br /&gt;
*bsdlabel /dev/concat/newname&lt;br /&gt;
(make sure no partition has fstype &amp;quot;unused&amp;quot;; if one does, change it to 4.2BSD)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Remaking a VPS (on same jail) ==&lt;br /&gt;
&lt;br /&gt;
# take the jail down (stopjail)&lt;br /&gt;
# make note of host, vn/gvin/md, ip&lt;br /&gt;
# remove from quad/safe&lt;br /&gt;
# move vnfile (rename)&lt;br /&gt;
# remove dir (jailmake will make a new one)&lt;br /&gt;
# remove ipfw counts&lt;br /&gt;
# jailmake&lt;br /&gt;
# update backup script&lt;br /&gt;
# update db with new dir/ip (use the “move” link, mark old as stopped today)&lt;br /&gt;
# update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (FreeBSD) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all jails back up as soon as possible. Note the time - you&#039;ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages output. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log. If there are no messages, the machine may just be really busy - wait a bit (5-10min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. Note: if you see messages about swap space exhausted, the server is obviously out of memory; however, it may recover briefly enough for you to get a jtop in to see who&#039;s launched a ton of procs (most likely) and then issue a quick jailkill to get it back under control.&lt;br /&gt;
&lt;br /&gt;
If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] - i.e. if you can&#039;t ssh into the DRAC card (as root, using the standard root pass) and issue &lt;br /&gt;
 racadm serveraction hardreset&lt;br /&gt;
then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console:&lt;br /&gt;
 tip jailX&lt;br /&gt;
immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts- look out for the RAID card output- does everything look healthy? The output may be scrambled, look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple times during the boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly then you won’t get any console output, 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
IMPORTANT NOTE: on some older FreeBSD systems, there will be no output to the video (KVM) console as it boots up. The console output is redirected to the serial port ... so if a jail crashes, and you attach a kvm, the output during the bootup procedure will not be shown on the screen. However, when the bootup is done, you will get a login prompt on the screen and will be able to log in as normal.  &amp;lt;tt&amp;gt;/boot/loader.conf&amp;lt;/tt&amp;gt; is where serial console redirect output lives, so comment that if you want to catch output on kvm.&lt;br /&gt;
On newer systems it sends most output to both locations. &lt;br /&gt;
&lt;br /&gt;
=== Assess the health of the server ===&lt;br /&gt;
Once the server boots up fully, you should be able to ssh in. Look around- make sure all the mounts are there and reporting the correct size/usage (i.e. /mnt/data1 /mnt/data2 /mnt/data3 - look in /etc/fstab to determine which mount points should be there), check to see if RAID mirrors are healthy. See [[RAID_Cards#Common_CLI_commands_.28megacli.29|megacli]], [[#aaccheck|aaccheck]]&lt;br /&gt;
&lt;br /&gt;
Before you start the jails, you need to run [[#preboot|preboot]]. This will do some assurance checks to make sure things are prepped to start the jails. Any issues that come out of preboot need to be addressed before starting jails.&lt;br /&gt;
&lt;br /&gt;
=== Start jails ===&lt;br /&gt;
[[#Starting_jails:_Quad.2FSafe_Files|More on starting jails]]&lt;br /&gt;
Customer jails (the VPSs) do not start up automatically at boot time. When a FreeBSD machines boots up, it boots up, and does nothing else. To start jails, we put the commands to start each jail into a shell script(s) and run the script(s). Jail startup is something that needs to be actively monitored, which is why we don’t just run the script automatically. &lt;br /&gt;
&lt;br /&gt;
In order to start jails, we run the quad files: quad1 quad2 quad3 and quad4 (on new systems there is only quad1). If the machine was cleanly rebooted (which wouldn&#039;t be the case if this was a crash), you may run the safe files (safe1 safe2 safe3 safe4) in lieu of quads. &lt;br /&gt;
&lt;br /&gt;
Open up 4 logins to the server (use the windows in [[Screen#Screen_Organization|a9]])&lt;br /&gt;
In each of the 4 windows you will:&lt;br /&gt;
&lt;br /&gt;
If there is a [[#startalljails|startalljails]] script (and only quad1), run that command in each of the 4 windows. It will parse through the quad1 file and start each jail. Follow the instructions [[#Problems_with_the_quad.2Fsafe_files|here]] for monitoring startup. Note that you can be a little more lenient with jails that take a while to start - startalljails will work around the slow jails and start the rest. As long as there aren&#039;t 4 jails which are &amp;quot;hung&amp;quot; during startup, the rest will get started eventually.&lt;br /&gt;
	-or-&lt;br /&gt;
If there is no startalljails script, there will be multiple quad files. In each of the 4 windows, start each of the quads. i.e. start quad1 in window1, quad2 in window2 and so on. DO NOT start any quad twice. It will crash the server. If you accidentally do this, just jailkill all the jails which are in the quad and run the quad again. Follow the instructions here for monitoring quad startup.&lt;br /&gt;
&lt;br /&gt;
Note the time the last jail boots- this is what you will enter in the crash log.&lt;br /&gt;
&lt;br /&gt;
Save the crash log.&lt;br /&gt;
&lt;br /&gt;
=== Check to make sure all jails have started ===&lt;br /&gt;
There&#039;s a simple script which will make sure all jails have started, and enter the ipfw counter rules: [[#postboot|postboot]] &lt;br /&gt;
Run postboot, which will do a jailps on each jail it finds (excluding commented out jails) in the quad file(s). We&#039;re looking for 2 things:&lt;br /&gt;
# systems spawning out of control or too many procs&lt;br /&gt;
# jails which haven&#039;t started&lt;br /&gt;
On 7.x and newer systems it will print out the problems (which jails haven&#039;t started) at the conclusion of postboot. &lt;br /&gt;
On older systems you will need to watch closely to see if/when there&#039;s a problem, namely:&lt;br /&gt;
 &lt;br /&gt;
 [hostname] doesnt exist on this server&lt;br /&gt;
&lt;br /&gt;
When you get this message, it means one of 2 things:&lt;br /&gt;
1. the jail really didn&#039;t start:&lt;br /&gt;
When a jail doesn&#039;t start it usually boils down to a problem in the quad file. Perhaps the path name is wrong (data1 vs data2) or the name of the vn/mdfile is wrong. Once this is corrected, you will need to run the commands from the quad file manually, or you may use &amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. the customer has changed their hostname (and not told us) so their jail &#039;&#039;is&#039;&#039; running, just under a different hostname:&lt;br /&gt;
On systems with jls, this is easy to rectify. First, get the customer info: &amp;lt;tt&amp;gt;g &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
Then look for the customer in jls: &amp;lt;tt&amp;gt;jls | grep &amp;lt;col0XXXX&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
From there you will see their new hostname- you should update that hostname in the quad file: don&#039;t forget to edit it on the &amp;lt;tt&amp;gt;## begin ##&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;## end ##&amp;lt;/tt&amp;gt; lines, and in mgmt. &lt;br /&gt;
On older systems without jls, this will be harder; you will need to look further to see their hostname - perhaps it&#039;s in their /etc/rc.conf&lt;br /&gt;
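The jls cross-check can be sketched like this; both files below are fabricated samples standing in for the real quad1 and the output of jls:

```shell
# Illustrative: find jails whose running hostname (per a jls-style
# listing) no longer matches the hostname on the quad file's jail line,
# keyed on the colXXXXX customer number embedded in the jail path.
cat > quad1.sample <<'EOF'
jail /mnt/data1/69.55.230.46-col01213-DIR projects.tw.com 69.55.230.46 /bin/sh /etc/rc
EOF
cat > jls.sample <<'EOF'
1  69.55.230.46  newname.tw.com  /mnt/data1/69.55.230.46-col01213-DIR
EOF

while read -r _ path quadhost rest; do
    # pull the customer number out of the -DIR path
    cust=$(echo "$path" | sed 's/.*-\(col[0-9]*\)-DIR/\1/')
    runhost=$(grep "$cust" jls.sample | awk '{print $3}')
    if [ -n "$runhost" ] && [ "$runhost" != "$quadhost" ]; then
        echo "$cust: quad says $quadhost, running as $runhost"
    fi
done < quad1.sample > mismatch.out
cat mismatch.out
```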
&lt;br /&gt;
&lt;br /&gt;
Once all jails are started, do some spot checks- try to ssh or browse to some customers, just to make sure things are really ok.&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 7.x/8.x jail ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (md) ==&lt;br /&gt;
&lt;br /&gt;
NOTE: this doesn&#039;t apply to mx2, which uses gvinum; use the same procedure as 6.x&lt;br /&gt;
NOTE: if you unmount before mdconfig, re-mdconfig (attach), then unmount, then mdconfig -u again &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
(parts to change/customize are &amp;lt;tt&amp;gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from &amp;lt;tt&amp;gt;js&amp;lt;/tt&amp;gt;. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new md on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. Umount it (including their devfs) but leave the md config’d (so if you use stopjail, you will have to re-mdconfig it)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mdfile and mount name, and to see the current md device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. When there&#039;s enough room to place new system on an alternate, or the same drive:&lt;br /&gt;
USE CAUTION not to overwrite (touch, mdconfig) existing md!!&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s 10g -f /mnt/data3/69.55.234.66-col01334 -u 97&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md97&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new space is on a different drive, move the mount point directory AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
md-unconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
archive old mdfile. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/old-col00241-mdfile-noarchive-20091211&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (vq1) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
(run &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, &amp;lt;tt&amp;gt;du&amp;lt;/tt&amp;gt; the dump file to confirm it matches size, roughly, with usage)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure and remove old mdfile&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
rm /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(there should now be enough space to recreate your bigger system. If not, run sync a couple times)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new system (ok to reuse old mdfile and md#):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;10&amp;lt;/span&amp;gt;g -f /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt; -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
umount nfs:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbu&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If md# changed (or mount point), edit quad (&amp;lt;tt&amp;gt;vq1&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. Optional: archive old mdfile&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;br&amp;gt;&lt;br /&gt;
gzip -c old-col01588-mdfile-noarchive-20120329 &amp;gt; /deprecated/old-col01588-mdfile-noarchive-20120329.gz&amp;lt;br&amp;gt;&lt;br /&gt;
mbu&amp;lt;br&amp;gt;&lt;br /&gt;
rm  old-col01588-mdfile-noarchive-20120329&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 6.x jail (gvinum/gconcat) ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (gvinum/gconcat) ==&lt;br /&gt;
&lt;br /&gt;
(parts to change are &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;highlighted&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to put the customers new space on a different partition (and create the new md on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;[[#stopjail|stopjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;[[#g|g]] &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mount name and existing volume/device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5a. When there&#039;s enough room to place the new system on an alternate drive, or on the same drive, using only UNUSED gvinum volumes (a volume still in use by the system in question does not count as unused):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure the new device:&amp;lt;br&amp;gt;&lt;br /&gt;
A. for a 2G system (single gvinum volume):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;bsdlabel -r -w /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
-or- &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
B. for a &amp;gt;2G system (create a gconcat volume):&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Other valid gconcat examples:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label v82-v84v109v112 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v109 /dev/gvinum/v112&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat label v82v83 /dev/gvinum/v82 /dev/gvinum/v83&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Note: long names will be truncated: v144v145v148-v115 will truncate to v144v145v148-v1 (so you will refer to it as v144v145v148-v1 thereafter)&lt;br /&gt;
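You can preview what a long label will become with cut. This is a plain-shell sketch; the 15-character limit is inferred from the example above, not taken from gconcat documentation:

```shell
# preview how a long gconcat label will come out, assuming a 15-char limit
name="v144v145v148-v115"
echo "$name" | cut -c1-15
# prints: v144v145v148-v1
```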
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; OR &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
A. &amp;lt;tt&amp;gt;mount /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
-or-&amp;lt;br&amp;gt;&lt;br /&gt;
B. &amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;) , run &amp;lt;tt&amp;gt;buildsafe&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space (i.e. if you want/need to reuse the existing gvinum volumes and add on more):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
(run df to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;gconcat/v106-v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when it completes WITHOUT errors, du the dump file to confirm its size roughly matches the usage figure)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure the old gconcat volume&amp;lt;br&amp;gt;&lt;br /&gt;
list member gvinum volumes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat list &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Output will resemble:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;Geom name: v106v107&lt;br /&gt;
State: UP&lt;br /&gt;
Status: Total=2, Online=2&lt;br /&gt;
Type: AUTOMATIC&lt;br /&gt;
ID: 3530663882&lt;br /&gt;
Providers:&lt;br /&gt;
1. Name: concat/v106v107&lt;br /&gt;
   Mediasize: 4294966272 (4.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e2&lt;br /&gt;
Consumers:&lt;br /&gt;
1. Name: gvinum/sd/v106.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 0&lt;br /&gt;
   End: 2147483136&lt;br /&gt;
2. Name: gvinum/sd/v107.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 2147483136&lt;br /&gt;
   End: 4294966272&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
stop volume and clear members&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat stop &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat clear &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;gvinum/sd/v106.p0.s0 gvinum/sd/v107.p0.s0&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new device (it&#039;s OK to reuse the old/former members)&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v106 /dev/gvinum/v107&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;), run buildsafe&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
TODO: clean up/clear old gvin/gconcat vol&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
DEPRECATED: steps to tack a new gvinum volume onto an existing gconcat volume - leads to a corrupted fs&lt;br /&gt;
bsdlabel -e /dev/concat/v82-v84&lt;br /&gt;
&lt;br /&gt;
To figure out the new size of the c partition, multiply 4194304 by the # of 2G gvinum volumes and subtract the # of 2G volumes:&lt;br /&gt;
10G: 4194304 * 5 - 5 = 20971515&lt;br /&gt;
8G: 4194304 * 4 - 4 = 16777212&lt;br /&gt;
6G: 4194304 * 3 - 3 = 12582909&lt;br /&gt;
4G: 4194304 * 2 - 2 = 8388606&lt;br /&gt;
&lt;br /&gt;
To figure out the new size of the a partition, subtract 16 from the c partition:&lt;br /&gt;
10G: 20971515 - 16 = 20971499&lt;br /&gt;
8G: 16777212 - 16 = 16777196&lt;br /&gt;
6G: 12582909 - 16 = 12582893&lt;br /&gt;
4G: 8388606 - 16 = 8388590&lt;br /&gt;
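The figures above can be reproduced with plain shell arithmetic (POSIX sh; only the count of 2G gvinum volumes changes):

```shell
# sector counts for the bsdlabel: each 2G gvinum volume contributes
# 4194304 sectors; subtract one sector per volume for the c partition,
# then 16 more (the label offset) for the a partition
vols=5                              # e.g. 5 x 2G volumes = 10G
c_size=$((4194304 * vols - vols))   # c ("raw") partition size
a_size=$((c_size - 16))             # a partition size
echo "c: $c_size"                   # prints: c: 20971515
echo "a: $a_size"                   # prints: a: 20971499
```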
&lt;br /&gt;
Orig:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a:  8388590       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c:  8388606        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
New:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a: 12582893       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c: 12582909        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
sync; sync&lt;br /&gt;
&lt;br /&gt;
growfs /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
sync&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
(keep running fsck until there are NO errors)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 4.x jail ==&lt;br /&gt;
&lt;br /&gt;
NOTE: if you unmount before vnconfig, re-vnconfig, then unmount, then vnconfig -u&lt;br /&gt;
 &lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them.&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to dump to a different partition and create the new vn on the current partition. If there is no space to make the dump, then move old vn’s to backup or, worse, dump to backup.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. make vnfile, newfs and mount. USE CAUTION not to overwrite existing vn. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;vnconfig -T -S 9g -s labels -c /dev/vn30 /mnt/data2/65.214.160.117-col00496&amp;lt;br&amp;gt;&lt;br /&gt;
disklabel -r -w vn30 auto&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/vn30c&amp;lt;br&amp;gt;&lt;br /&gt;
mkdir /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/vn30c /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;dump -0a -f - /dev/vn1 | restore -r -f -&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. &amp;lt;tt&amp;gt;rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. unmount and unvnconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;umount /dev/vnNNc &amp;lt;br&amp;gt;&lt;br /&gt;
vnconfig -u /dev/vnNN&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. move vnfile. Ex: &amp;lt;tt&amp;gt;mv /mnt/data1/69.55.237.26-col00241 /mnt/data1/old-col00241-vnfile-20110312-noarchive&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove old dir: &amp;lt;tt&amp;gt;rmdir /mnt/data1/69.55.237.26-col00241-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. edit quad/safe to point to new location, run &amp;lt;tt&amp;gt;[[#buildsafe|buildsafe]]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start jail: &amp;lt;tt&amp;gt;[[#startjail|startjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
13. mv backups if necessary&lt;br /&gt;
 &lt;br /&gt;
WARNING: if you restore to an unmounted vn then you’re actually restoring to /mnt/data1.&lt;br /&gt;
If that happens, you can tar up the data and repeat the steps above to recover.&lt;br /&gt;
 &lt;br /&gt;
NOTE: you can’t move a customer to a system that has a base sys that is diff from the one they came from (BSD 4.8 to 4.5 = won’t work)&lt;br /&gt;
 &lt;br /&gt;
NOTE: we don’t mount procfs in anyone’s system by default.&lt;br /&gt;
&lt;br /&gt;
NOTE: mount_nulls: &lt;br /&gt;
goofy mount_nulls are also seen in df -k on jail2&amp;lt;br&amp;gt;&lt;br /&gt;
they also need to be set up in the safe and quad files&amp;lt;br&amp;gt;&lt;br /&gt;
if the customer wants another, use the safe/quad entry as an example and add it to the file; also exec the command at the prompt to add the mount (no reboot, remount or re-jail necessary)&lt;br /&gt;
&lt;br /&gt;
== Moving customer to another jail machine ==&lt;br /&gt;
&lt;br /&gt;
Systems should only be moved between similarly-versioned jails. If you cannot get a match, you should go to freebsd.org and see what the differences were between the version you&#039;re moving from and the new version. Many of the updates will be to drivers and kernels - these don&#039;t affect the customer. Perhaps they updated traceroute or a library...you should copy the new/changed files into the VPS/jail on the new host. An alternate method for moving to a different version would be to buildworld to bring their full distribution up to date. Optional steps for this kind of upgrade are included below.&lt;br /&gt;
&lt;br /&gt;
1. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;[[#jailkill|jailkill]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. create new device on target system&amp;lt;br&amp;gt;&lt;br /&gt;
4.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig -T -S 4g -s labels -c /dev/vn1 /mnt/data1/69.55.22x.x-col00XXX&lt;br /&gt;
disklabel -r -w vn1 auto&lt;br /&gt;
newfs /dev/vn1c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;bsdlabel -r -w /dev/gvinum/v1&lt;br /&gt;
newfs /dev/gvinum/v1a&lt;br /&gt;
 or &lt;br /&gt;
gconcat label v1-v3 /dev/gvinum/v1 /dev/gvinum/v2 /dev/gvinum/v3&lt;br /&gt;
bsdlabel -r -w /dev/concat/v1-v3&lt;br /&gt;
newfs /dev/concat/v1-v3a&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7.x+:&amp;lt;br&amp;gt;&lt;br /&gt;
Run jailmakeempty, then skip steps 4 &amp;amp; 7 below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. make and mount the dir on new system. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;pre&amp;gt;mkdir /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
chmod 755 /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
mount [device] /mnt/data2/69.55.230.3-col00123-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. [[#stopjail|stopjail]] &amp;lt;hostname&amp;gt; 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. dump fs to new system. ex: &amp;lt;tt&amp;gt;[[#dumpremoterestore|dumpremoterestore]] /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&amp;lt;/tt&amp;gt; (make sure you can ssh as root on remote machine) &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. OPTIONAL BUILDWORLD:&amp;lt;br&amp;gt;&lt;br /&gt;
 cd /usr/src&lt;br /&gt;
 make world DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
 cd etc&lt;br /&gt;
 make distribution DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
(you may have to rm an openssh file and re-make dist)&lt;br /&gt;
&lt;br /&gt;
 rm -rf /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/periodic/daily/400.status-disks&lt;br /&gt;
 vi /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/periodic/security/100.chksetuid&lt;br /&gt;
replace: &amp;lt;tt&amp;gt;MP=`mount -t ufs | grep -v &amp;quot; nosuid&amp;quot; | awk &#039;{ print $3 }&#039; | sort`&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
with: &amp;lt;tt&amp;gt;MP=&#039;/&#039; (use single quotes)&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ask user if they want ports overwritten with current, if yes:&lt;br /&gt;
 cp -r /usr/ports /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/usr&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d .&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d . &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cat /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target:&lt;br /&gt;
 cat &amp;gt; /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. edit quad on the source system and copy the entries for the jail over to the target system. &lt;br /&gt;
Take care that the vn/gvinum/gconcat/md devices on the target system aren’t already in use and that the /mnt/dataN path doesn’t need changing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. run [[#buildsafe|buildsafe]] (if this is &amp;lt;=6.x) on target system (copies changes made to quad into safe file)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove ip from source system. Ex: &amp;lt;tt&amp;gt;ipdel 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. add ip to target system. Ex: &amp;lt;tt&amp;gt;ipadd 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start new system, with [[#startjail|startjail]] or manually, pasting entries found from running &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; on the new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. run &amp;lt;tt&amp;gt;[[#canceljail|canceljail]] col0xxxx&amp;lt;/tt&amp;gt; on source system. This should prompt you to remove backups if any existed. Do not let it do this, rather:&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
13. if backups existed move them to the new host via the &amp;lt;tt&amp;gt;[[#mvbackups_.28freebsd.29|mvbackups]] script&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
14. edit quad on source system, edit comment to reflect a move rather than a cancel, ex: &amp;lt;tt&amp;gt;# moved to jail2 col00241&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
15. edit mgmt to reflect new host and dir for new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
16. optional add ipfw rules on new system. Ex:&lt;br /&gt;
&amp;lt;tt&amp;gt;ipfw add 01231 count ip from 69.55.230.2 to any&lt;br /&gt;
ipfw add 01232 count ip from any to 69.55.230.2&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Increasing inodes for a VPS ==&lt;br /&gt;
&lt;br /&gt;
when doing the newfs: &lt;br /&gt;
 newfs -i  4096 ...&lt;br /&gt;
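The -i flag sets the bytes-per-inode density, so the resulting inode count is roughly the filesystem size divided by that figure. A rough estimate (assuming -i 4096 on a 10G filesystem; newfs rounds per cylinder group, so the real count will differ slightly):

```shell
# approximate inode count: filesystem bytes / bytes-per-inode
fs_bytes=$((10 * 1024 * 1024 * 1024))   # 10G filesystem
bytes_per_inode=4096                    # the newfs -i value
echo $((fs_bytes / bytes_per_inode))    # prints: 2621440
```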
&lt;br /&gt;
&lt;br /&gt;
== Jail services unable to contact services within same jail ==&lt;br /&gt;
&lt;br /&gt;
This is due to a messed up routing table (only seen it on jail2 before). &lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.228.65/32&lt;br /&gt;
 route add -net 69.55.228.65/32 -iface fxp0 -nostatic -cloning&lt;br /&gt;
 route add 69.55.228.65 -iface lo0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== /dev/null permission resets == &lt;br /&gt;
&lt;br /&gt;
Applies to 4.x jails only&lt;br /&gt;
&lt;br /&gt;
For some reason that I do not understand at all, the /dev/null node in customer jails, after the system is restarted, often reverts permissions to 0600 ... which is bad, because non-root processes cannot redirect to /dev/null anymore.  In fact, a fair number of server daemons will complain and/or fail if this is the case.&lt;br /&gt;
&lt;br /&gt;
So, after a restart, after all systems have restarted, run postboot, which does something like:&lt;br /&gt;
 &lt;br /&gt;
 for f in `df -k | grep /dev/vn | awk &#039;{print $6}&#039;` ; do chmod 0666 $f/dev/null ; done&lt;br /&gt;
&lt;br /&gt;
Later, after the system is up, if anyone ever complains about their /dev/null permissions for any reason ... really if anyone mentions&lt;br /&gt;
/dev/null in any way, tell them to run:&lt;br /&gt;
&lt;br /&gt;
 chmod 0666 /dev/null&lt;br /&gt;
&lt;br /&gt;
We have ultimately fixed this by adding chmods to the quad/safe&#039;s&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== mknod in a jail == &lt;br /&gt;
&lt;br /&gt;
For a customer who wants to run a chroot name server (named) in a jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /mnt/data1/&amp;lt;user directory&amp;gt;/var/named&lt;br /&gt;
mknod -c 1 1 null&lt;br /&gt;
mknod -c 2 3 random&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Postfix Problems ==&lt;br /&gt;
&lt;br /&gt;
This hasn&#039;t happened in a long time, probably no longer valid with modern OS&#039;s&lt;br /&gt;
&lt;br /&gt;
Postfix is an alternate MTA - people replace sendmail with it in much the same way that many people also replace sendmail with qmail.  The problem is that if you install postfix inside of a jail, by default it will not work properly - the user will see error messages like this:&lt;br /&gt;
&lt;br /&gt;
incoming mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 mail_queue_enter: create file incoming/298266.55902: File too large&lt;br /&gt;
&lt;br /&gt;
and further, sending mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 postdrop: warning: mail_queue_enter: create file maildrop/627930.56676: File too large&lt;br /&gt;
&lt;br /&gt;
This is very easy to solve.  In fact, the FreeBSD welcome email that is sent out by `jailmake` now contains this block of text:&lt;br /&gt;
&lt;br /&gt;
- if you plan on installing postfix, email us first and ask for a necessary patch.&lt;br /&gt;
&lt;br /&gt;
So, anytime you see a support email that says anything about postfix not working, paste that line from the welcome email into your response to them, and right away, forward the postfix patch to them in an email.&lt;br /&gt;
&lt;br /&gt;
Here are the instructions:&lt;br /&gt;
&lt;br /&gt;
Simply copy the patch into /usr/ports/mail/postfix/files, rename the file to ‘patch-file_limit.c’ and then cd to /usr/ports/mail/postfix and run `make install`.&lt;br /&gt;
&lt;br /&gt;
The postfix patch is the very first email in the support email box, and we never delete or save it because we always want it there to forward to people.  The instructions in the email are very clear, and it solves their problem 100% of the time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems with `find` ==&lt;br /&gt;
&lt;br /&gt;
There was a bug in the FreeBSD vn-filesystem code.  The bug causes the system to crash or hang when someone runs the `find` command inside their vn-backed filesystem (and even though the bug may have been fixed, due to the i/o overhead, we still don&#039;t like find).&lt;br /&gt;
&lt;br /&gt;
Now, this does not always happen - however, any jail machine with more than 20 systems on it will crash every night if every system on it runs the daily periodic script out of their crontab, and updates their locate database with the find command.&lt;br /&gt;
&lt;br /&gt;
(the locate database is a small, FreeBSD specific database that is populated nightly from a cron job, and is then used to provide fast answers to the `locate` command)&lt;br /&gt;
&lt;br /&gt;
In normal systems (32-40 jails on them), all that has to be done is make sure nobody runs the daily periodic.  This means that the file /etc/crontab, instead of looking like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# do daily/weekly/monthly maintenance&lt;br /&gt;
1      3       *       *       *       root    periodic daily&lt;br /&gt;
15     4       *       *       6       root    periodic weekly&lt;br /&gt;
30     5       1       *       *       root    periodic monthly&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Needs to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# DO NOT UNCOMMENT THESE - contact support@johncompanies.com for details&lt;br /&gt;
#1      3       *       *       *       root    periodic daily&lt;br /&gt;
#15     4       *       *       6       root    periodic weekly&lt;br /&gt;
#30     5       1       *       *       root    periodic monthly&lt;br /&gt;
## DO NOT UNCOMMENT THESE ^^^^^^^^&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the problem will generally not occur at all.  New filesystem images are always altered in this way, so all systems are set like this.&lt;br /&gt;
&lt;br /&gt;
However, if you have enough systems running, even casual uses of find, outside of the periodic scripts, can cause the system to crash.  We are seeing this on jail13, which crashes about once per week.&lt;br /&gt;
&lt;br /&gt;
There is not yet any resolution to this problem.&lt;br /&gt;
&lt;br /&gt;
So, if you have a freebsd system crash, see if anyone has uncommented their daily periodic line - you can see them for everyone by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;for f in `df -k | grep vn | awk &#039;{print $6}&#039;` ; do echo `cat $f/etc/crontab | grep &amp;quot;periodic daily&amp;quot;` $f ; done | more&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(all one line, one command)&lt;br /&gt;
&lt;br /&gt;
And then edit those /etc/crontabs and comment them out again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems un-mounting - and with mount_null’s ==&lt;br /&gt;
&lt;br /&gt;
If you cannot unmount a filesystem because it says the filesystem is busy, it is usually for one of the following reasons:&lt;br /&gt;
&lt;br /&gt;
a) the jail is still running&lt;br /&gt;
&lt;br /&gt;
b) you are actually in that directory, even though the jail is stopped&lt;br /&gt;
&lt;br /&gt;
c) there are still dev, null_mount or linprocfs mount points mounted inside that directory.&lt;br /&gt;
&lt;br /&gt;
d) when trying to umount null_mounts with really long paths, you may get an error like “No such file or directory”; it’s an OS bug where the dir name is truncated. No known fix&lt;br /&gt;
&lt;br /&gt;
e) there are still files open somewhere inside the dir. Use &amp;lt;tt&amp;gt;fstat | grep &amp;lt;cid&amp;gt;&amp;lt;/tt&amp;gt; to find the process that has files open&lt;br /&gt;
&lt;br /&gt;
f) Starting with 6.x, the jail mechanism does a poor job of keeping track of processes running in a jail, and if it thinks there are still procs running, it will refuse to umount the disk. If this is happening you should see a low number in the #REF column when you run jls. In this case you &#039;&#039;can&#039;&#039; safely &amp;lt;tt&amp;gt;umount -f&amp;lt;/tt&amp;gt; the mount. &lt;br /&gt;
&lt;br /&gt;
Please note: if you forcibly unmount a (4.x) filesystem that has null_mounts&lt;br /&gt;
still mounted in it, the system &#039;&#039;&#039;will crash&#039;&#039;&#039; within 10-15 minutes.&lt;br /&gt;
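&lt;br /&gt;
For case (c), you can enumerate the nested mounts first and unmount them deepest-first. A sketch (nested_mounts is a hypothetical helper; it assumes only that mount prints the mount point as its third field, as on FreeBSD and Linux):&lt;br /&gt;

```shell
# Print mount points nested under a jail directory, deepest first, so
# they can be unmounted in a safe order. Reads `mount` output on stdin.
nested_mounts() {
    awk -v d="$1/" 'index($3, d) == 1 { print $3 }' | sort -r
}

# usage: mount | nested_mounts /mnt/data1/69.55.232.50-col02094-DIR
```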
&lt;br /&gt;
== Misc jail Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space on jail2, jail8, jail9, jail4, jail12, jail18, and a couple of jails on jail17.&lt;br /&gt;
Even though the vn file shows a 4G size, it doesn’t actually occupy that amount of space on the disk. So be careful not to fill up drives where we’re overselling – use oversellcheck to confirm you’re not oversold by more than 10G.&lt;br /&gt;
There are other truncated jails; they are generally noted in the file /root/truncated on the root system.&lt;br /&gt;
&lt;br /&gt;
The act of moving a truncated vn to another system un-does the truncation: the truncated vn is filled with 0’s and occupies the full physical disk space for which it’s configured. So, you should use dumpremote to preserve the truncation.&lt;br /&gt;
&lt;br /&gt;
* if you are getting disk full messages for a BSD customer, it&#039;s fairly safe to clear out their /usr/ports/distfiles dir&lt;br /&gt;
* 4.x: ps and top can only be run by root in these jails. This is done on purpose:&lt;br /&gt;
Non-root users can run `ps` and `top`, but not successfully, because we have locked the permissions on /dev/mem and /dev/kmem to be root-readable only.&lt;br /&gt;
* user quotas do not work on freebsd jails - you cannot set up quotas at all, and that&#039;s that.&lt;br /&gt;
* You cannot inject a process into a 4.x jail, only HUP running processes. 6.x and onward you can with jexec&lt;br /&gt;
* jails see base machine’s uptime/load when running top/w&lt;br /&gt;
* if someone is unable to get in (can’t ping, etc.), see if they were blocked by castle (DoS), and see if their ip is configured on the system (after a reboot it can be lost if it wasn’t in rc.conf; preboot should catch that)&lt;br /&gt;
* in FreeBSD you can’t su to root unless you belong to the wheel group – so if you remove your acct and set up a new one, we have to add it to the wheel group (add to /etc/group)&lt;br /&gt;
* Dmesg output from the underlying system is visible in a customer’s dmesg inside a jail&lt;br /&gt;
* A popper process going crazy @ 40% for 10 min = someone who leaves mail on the server&lt;br /&gt;
* Don’t force umounts on 4.x jails – it crashes the machine; generally ok on newer machines (running md)&lt;br /&gt;
* Good book for admin http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?userid=t824VyRAYz&amp;amp;isbn=0596005164&amp;amp;itm=2&lt;br /&gt;
* Self-sign ssl cert http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#selfcert&lt;br /&gt;
* conversation with Glenn about semaphores and pgsql&lt;br /&gt;
&amp;lt;pre&amp;gt;SDBoody: hey, these are valid amounts/figures/increments right:&lt;br /&gt;
 kern.ipc.semmni=1280&lt;br /&gt;
 kern.ipc.semmns=1280&lt;br /&gt;
 &lt;br /&gt;
gr8feen: probably... I always forget exactly what those are, so I usually have to look them up&lt;br /&gt;
SDBoody: semaphores&lt;br /&gt;
SDBoody: i took the current 1024 and added 256 to them&lt;br /&gt;
SDBoody: need more for pgsql&lt;br /&gt;
gr8feen: I meant the mni and mns parts...&lt;br /&gt;
gr8feen: some of those are not ones you just want to add to...   hang on a sec and I&#039;ll look them up..&lt;br /&gt;
gr8feen: what&#039;s semmsl set to?&lt;br /&gt;
SDBoody: kern.ipc.semmsl: 1024&lt;br /&gt;
 &lt;br /&gt;
SDBoody: kern.ipc.msgseg: 2048 &lt;br /&gt;
 kern.ipc.msgssz: 8 &lt;br /&gt;
 kern.ipc.msgtql: 40 &lt;br /&gt;
 kern.ipc.msgmnb: 2048 &lt;br /&gt;
 kern.ipc.msgmni: 40 &lt;br /&gt;
 kern.ipc.msgmax: 16384 &lt;br /&gt;
 kern.ipc.semaem: 16384 &lt;br /&gt;
 kern.ipc.semvmx: 65534 &lt;br /&gt;
 kern.ipc.semusz: 152 &lt;br /&gt;
 kern.ipc.semume: 10 &lt;br /&gt;
 kern.ipc.semopm: 100 &lt;br /&gt;
 kern.ipc.semmsl: 1024 &lt;br /&gt;
 kern.ipc.semmnu: 512 &lt;br /&gt;
 kern.ipc.semmns: 1024 &lt;br /&gt;
 kern.ipc.semmni: 1024 &lt;br /&gt;
 kern.ipc.semmap: 768 &lt;br /&gt;
 kern.ipc.shm_allow_removed: 0 &lt;br /&gt;
 kern.ipc.shm_use_phys: 1 &lt;br /&gt;
 kern.ipc.shmall: 262144 &lt;br /&gt;
 kern.ipc.shmseg: 256 &lt;br /&gt;
 kern.ipc.shmmni: 784 &lt;br /&gt;
 kern.ipc.shmmin: 1 &lt;br /&gt;
 kern.ipc.shmmax: 536870912 &lt;br /&gt;
 kern.ipc.maxsockets: 25600 &lt;br /&gt;
 &lt;br /&gt;
gr8feen: ok...msl is max per id, mni is max ids, mns is max number of semaphores... so you probably want something like mns = mni * msl&lt;br /&gt;
gr8feen: which one did you run out of?&lt;br /&gt;
SDBoody: not sure how to tell- ipcs shows the sems in use add up to 1024&lt;br /&gt;
SDBoody: there are 59 entries&lt;br /&gt;
gr8feen: I&#039;m assuming you tried to start postgres and it failed?&lt;br /&gt;
SDBoody: yes&lt;br /&gt;
gr8feen: it should have logged why, somewhere..&lt;br /&gt;
gr8feen: if I recall, it&#039;ll tell you which one it ran out of&lt;br /&gt;
SDBoody: &amp;gt; DETAIL:  Failed system call was semget(1, 17, 03600).&lt;br /&gt;
 &lt;br /&gt;
gr8feen: so it wanted an id with 17 semaphores...I&#039;d start by making mns = 17*mni and leave mni and mnl set to what they are now and see what it does&lt;br /&gt;
SDBoody: i think mni is plenty high&lt;br /&gt;
SDBoody: ok, more reasonable 17408&lt;br /&gt;
gr8feen: yeah...just change that one and see how it goes..&lt;br /&gt;
SDBoody: and leave mni alone at 1024?&lt;br /&gt;
gr8feen: yeah...mni id the max number of ids...but if it&#039;s trying to get something like 17 per id, your going to hit mns before you hit anything else&lt;br /&gt;
SDBoody: right, but doesn&#039;t hurt to have it that high (assuming)&lt;br /&gt;
&lt;br /&gt;
gr8feen: not really...  I think those get allocated out of ram that you cant page out, but it&#039;s still such a small amount that it really doesn&#039;t matter&lt;br /&gt;
SDBoody: looks like that worked, thx!&lt;br /&gt;
gr8feen: cool&lt;br /&gt;
&lt;br /&gt;
see semaphores: ipcs -a -s&amp;lt;/pre&amp;gt;&lt;br /&gt;
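&lt;br /&gt;
The rule of thumb from the conversation: if the application requests N semaphores per id (17 in the semget call above), kern.ipc.semmns should be at least N × kern.ipc.semmni. A quick sanity check (need_semmns is a hypothetical helper, shell arithmetic only):&lt;br /&gt;

```shell
# Compute the minimum kern.ipc.semmns suggested above:
# semaphores-per-id times kern.ipc.semmni.
need_semmns() {
    per_id="$1"; semmni="$2"
    echo $(( per_id * semmni ))
}

need_semmns 17 1024   # -> 17408, matching the value chosen above
```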
&lt;br /&gt;
= FreeBSD VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
These files are located in /usr/local/jail/rc.d and /usr/local/jail/bin&lt;br /&gt;
&lt;br /&gt;
== jailmake ==&lt;br /&gt;
&lt;br /&gt;
Applies to 7.x+ &lt;br /&gt;
On older systems syntax differs, run jailmake once to see.&lt;br /&gt;
&lt;br /&gt;
Note: this procedure differs on mx2 which is 7.x but still uses gvinum&lt;br /&gt;
&lt;br /&gt;
#	run js to figure out which md’s are in use, which disk has enough space, IP to put it on&lt;br /&gt;
#	use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
#	copy over dir, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;Usage: jailmake IP[,IP] CID disk[1|2|3] md# hostname shorthost ipfw# email [size in GB]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex: &lt;br /&gt;
&lt;br /&gt;
 Jail2# jailmake 69.55.234.66 col01334 3 97 vps.bsd.it vps 1334 fb@bsd.it&lt;br /&gt;
&lt;br /&gt;
== jailps ==&lt;br /&gt;
 jailps [hostname]&lt;br /&gt;
DEPRECATED FOR jps: displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname of the jail you wish to query. If you don’t &lt;br /&gt;
supply an argument, all processes on the machine are listed and grouped by jail. &lt;br /&gt;
&lt;br /&gt;
== jps ==&lt;br /&gt;
 jps [hostname]&lt;br /&gt;
displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname or ID of the jail you wish to query. &lt;br /&gt;
&lt;br /&gt;
== jailkill ==&lt;br /&gt;
 jailkill &amp;lt;hostname&amp;gt;&lt;br /&gt;
stops all process running in a jail.&lt;br /&gt;
&lt;br /&gt;
You can also run:&lt;br /&gt;
 jailkill &amp;lt;JID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== problems ===&lt;br /&gt;
Occasionally you will hit an issue where a jail will not kill off:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailkill www.domain.com&lt;br /&gt;
www.domain.com .. killed: none&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
This is because no processes are running under that hostname. You cannot use jailps.pl either:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailps www.domain.com&lt;br /&gt;
www.domain.com doesn’t exist on this server&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reasons for this are usually:&lt;br /&gt;
* the jail is no longer running&lt;br /&gt;
&lt;br /&gt;
* the jail&#039;s hostname has changed&lt;br /&gt;
In this case, &lt;br /&gt;
&lt;br /&gt;
&amp;gt;=6.x: run a &amp;lt;tt&amp;gt;jls|grep &amp;lt;jail&#039;s IP&amp;gt;&amp;lt;/tt&amp;gt; to find the correct hostname, then update the quad file, then kill the jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;6.x: the first step is to cat their /etc/rc.conf file to see if you can tell what they set the new hostname to.  This very often works.  For example:&lt;br /&gt;
&lt;br /&gt;
 cat /mnt/data2/198.78.65.136-col00261-DIR/etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
But maybe they set the hostname with the hostname command, and the original hostname is still in /etc/rc.conf.&lt;br /&gt;
&lt;br /&gt;
The welcome email clearly states that they should tell us if they change their hostname, so there is no problem in just emailing them and asking them what they set the new hostname to.&lt;br /&gt;
&lt;br /&gt;
Once you know the new hostname OR if a customer simply emails to inform you that they have set the hostname to something different, you need to edit the quad and safe files that their system is in to input the new hostname.&lt;br /&gt;
&lt;br /&gt;
However, if push comes to shove and you cannot find out the hostname from them or from their system, then you need to start doing some detective work.&lt;br /&gt;
&lt;br /&gt;
The easiest thing to do is run jailps looking for a hostname similar to their original hostname. Or you could get into the /bin/sh shell by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then looking at every hostname of every process:&lt;br /&gt;
&lt;br /&gt;
 for f in `ls /proc` ; do cat /proc/$f/status ; done&lt;br /&gt;
&lt;br /&gt;
and scanning for a hostname that is either similar to their original hostname, or that you don&#039;t see in any of the quad safe files.&lt;br /&gt;
&lt;br /&gt;
This is very brute force though, and it is possible that catting every file in /proc is dangerous - I don&#039;t recommend it. A better approach is to identify a process that you know belongs to this system – perhaps the reason you are trying to find the system is that they are running something bad – and cat the status of only that PID.&lt;br /&gt;
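&lt;br /&gt;
Before resorting to /proc at all, it can be quicker to collect the configured hostname from every jail on a data mount and compare the list against the quad/safe files. A sketch (list_jail_hostnames is a hypothetical helper; the -DIR naming follows the examples above):&lt;br /&gt;

```shell
# Print "<jail dir> <configured hostname>" for every jail under a mount
# point, by pulling hostname="..." out of each jail's /etc/rc.conf.
list_jail_hostnames() {
    base="$1"    # e.g. /mnt/data1
    for d in "$base"/*-DIR; do
        [ -f "$d/etc/rc.conf" ] || continue
        h=$(sed -n 's/^hostname="\(.*\)"/\1/p' "$d/etc/rc.conf")
        printf '%s %s\n' "$d" "$h"
    done
}
```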
&lt;br /&gt;
Somewhere there’s a jail server where there may be 2 systems named www. Look at each /etc/rc.conf and make sure they’re both really named www. If they are, run jailkill www, then jailps www to make sure nothing is still running. Then immediately restart the other one under its FQDN (as found from a reverse nslookup).&lt;br /&gt;
&lt;br /&gt;
* on &amp;gt;=6.x the hostname may not yet be hashed:&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9 /# jls&lt;br /&gt;
 JID Hostname                    Path                                  IP Address(es)&lt;br /&gt;
   1 bitnet.dgate.org            /mnt/data1/69.55.232.50-col02094-DIR  69.55.232.50&lt;br /&gt;
   2 ns3.hctc.net                /mnt/data1/69.55.234.52-col01925-DIR  69.55.234.52&lt;br /&gt;
   3 bsd1                        /mnt/data1/69.55.232.44-col00155-DIR  69.55.232.44&lt;br /&gt;
   4 let2.bbag.org               /mnt/data1/69.55.230.92-col00202-DIR  69.55.230.92&lt;br /&gt;
   5 post.org                    /mnt/data2/69.55.232.51-col02095-DIR  69.55.232.51 ...&lt;br /&gt;
   6 ns2                         /mnt/data1/69.55.232.47-col01506-DIR  69.55.232.47 ...&lt;br /&gt;
   7 arlen.server.net            /mnt/data1/69.55.232.52-col01171-DIR  69.55.232.52&lt;br /&gt;
   8 deskfood.com                /mnt/data1/69.55.232.71-col00419-DIR  69.55.232.71&lt;br /&gt;
   9 mirage.confluentforms.com   /mnt/data1/69.55.232.54-col02105-DIR  69.55.232.54 ...&lt;br /&gt;
  10 beachmember.com             /mnt/data1/69.55.232.59-col02107-DIR  69.55.232.59&lt;br /&gt;
  11 www.agottem.com             /mnt/data1/69.55.232.60-col02109-DIR  69.55.232.60&lt;br /&gt;
  12 sdhobbit.myglance.org       /mnt/data1/69.55.236.82-col01708-DIR  69.55.236.82&lt;br /&gt;
  13 ns1.jnielsen.net            /mnt/data1/69.55.234.48-col00204-DIR  69.55.234.48 ...&lt;br /&gt;
  14 ymt.rollingegg.net          /mnt/data2/69.55.236.71-col01678-DIR  69.55.236.71&lt;br /&gt;
  15 verse.unixlore.net          /mnt/data1/69.55.232.58-col02131-DIR  69.55.232.58&lt;br /&gt;
  16 smcc-mail.org               /mnt/data2/69.55.232.68-col02144-DIR  69.55.232.68&lt;br /&gt;
  17 kasoutsuki.w4jdh.net        /mnt/data2/69.55.232.46-col02147-DIR  69.55.232.46&lt;br /&gt;
  18 dili.thium.net              /mnt/data2/69.55.232.80-col01901-DIR  69.55.232.80&lt;br /&gt;
  20 www.tekmarsis.com           /mnt/data2/69.55.232.66-col02155-DIR  69.55.232.66&lt;br /&gt;
  21 vps.yoxel.net               /mnt/data2/69.55.236.67-col01673-DIR  69.55.236.67&lt;br /&gt;
  22 smitty.twitalertz.com       /mnt/data2/69.55.232.84-col02153-DIR  69.55.232.84&lt;br /&gt;
  23 deliver4.klatha.com         /mnt/data2/69.55.232.67-col02160-DIR  69.55.232.67&lt;br /&gt;
  24 nideffer.com                /mnt/data2/69.55.232.65-col00412-DIR  69.55.232.65&lt;br /&gt;
  25 usa.hanyuan.com             /mnt/data2/69.55.232.57-col02163-DIR  69.55.232.57&lt;br /&gt;
  26 daifuku.ppbh.com            /mnt/data2/69.55.236.91-col01720-DIR  69.55.236.91&lt;br /&gt;
  27 collins.greencape.net       /mnt/data2/69.55.232.83-col01294-DIR  69.55.232.83&lt;br /&gt;
  28 ragebox.com                 /mnt/data2/69.55.230.104-col01278-DIR 69.55.230.104&lt;br /&gt;
  29 outside.mt.net              /mnt/data2/69.55.232.72-col02166-DIR  69.55.232.72&lt;br /&gt;
  30 vps.payneful.ca             /mnt/data2/69.55.234.98-col01999-DIR  69.55.234.98&lt;br /&gt;
  31 higgins                     /mnt/data2/69.55.232.87-col02165-DIR  69.55.232.87 ...&lt;br /&gt;
  32 ozymandius                  /mnt/data2/69.55.228.96-col01233-DIR  69.55.228.96&lt;br /&gt;
  33 trusted.realtors.org        /mnt/data2/69.55.238.72-col02170-DIR  69.55.238.72&lt;br /&gt;
  34 jc1.flanderous.com          /mnt/data2/69.55.239.22-col01504-DIR  69.55.239.22&lt;br /&gt;
  36 guppylog.com                /mnt/data2/69.55.238.73-col00036-DIR  69.55.238.73&lt;br /&gt;
  40 haliohost.com               /mnt/data2/69.55.234.41-col01916-DIR  69.55.234.41 ...&lt;br /&gt;
  41 satyr.jorge.cc              /mnt/data1/69.55.232.70-col01963-DIR  69.55.232.70&lt;br /&gt;
jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
ERROR: jail_: jail &amp;quot;satyr,jorge,cc&amp;quot; not found&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note how it&#039;s saying &amp;lt;tt&amp;gt;satyr,jorge,cc&amp;lt;/tt&amp;gt; is not found, and not &amp;lt;tt&amp;gt;satyr.jorge.cc&amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The jail subsystem tracks jails using comma-delimited hostnames. That name mapping is regenerated every few hours by cron:&lt;br /&gt;
&lt;br /&gt;
 jail9 /# crontab -l&lt;br /&gt;
 0 0,6,12,18 * * * /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
So if we run this manually:&lt;br /&gt;
 jail9 /# /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
Then kill the jail:&lt;br /&gt;
 jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
 successfully killed: satyr,jorge,cc&lt;br /&gt;
&lt;br /&gt;
It worked.&lt;br /&gt;
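&lt;br /&gt;
The comma-delimited name the jail subsystem expects can be computed by hand when needed (dots become commas):&lt;br /&gt;

```shell
# Translate a jail hostname into the comma-delimited form the jail
# subsystem tracks internally.
jail_name() {
    printf '%s\n' "$1" | tr '.' ','
}

jail_name satyr.jorge.cc   # -> satyr,jorge,cc
```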
&lt;br /&gt;
&lt;br /&gt;
If you ever see this when trying to kill a jail:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# jailkill e-scribe.com&lt;br /&gt;
killing JID: 6 hostname: e-scribe.com&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailkill|jailkill]]&amp;lt;/tt&amp;gt; probably got lost trying to kill off the jail. Just ctrl-c the jailkill process, then run a jailps on the hostname, and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; any process which is still running. Keep running jailps and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; till all processes are gone.&lt;br /&gt;
&lt;br /&gt;
== jailpsall ==&lt;br /&gt;
 jailpsall&lt;br /&gt;
will run a jailps on all jails configured in the quad files (this is different from&lt;br /&gt;
jailps with no arguments as it won’t help you find a “hidden” system)&lt;br /&gt;
&lt;br /&gt;
== jailpsw ==&lt;br /&gt;
 jailpsw&lt;br /&gt;
will run a jailps with an extra -w to provide wider output&lt;br /&gt;
&lt;br /&gt;
== jt (&amp;gt;=7.x) ==&lt;br /&gt;
 jt&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;gt;=7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
a wrapper for top displaying processes on the server and which jail owns them. Constantly updates, like top. &lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;lt;7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== stopjail ==&lt;br /&gt;
 stopjail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will jailkill, umount and vnconfig -u a jail. If passed an optional 2nd&lt;br /&gt;
argument, it will not exit before umounting and un-vnconfig’ing in the event&lt;br /&gt;
jailkill returns no processes killed. This is useful if you just want to umount&lt;br /&gt;
and vnconfig -u a jail you’ve already killed. It is intelligent in that it won’t &lt;br /&gt;
try to umount or vnconfig -u if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== startjail ==&lt;br /&gt;
 startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
this will vnconfig, mount (including linprocfs and null-mounts), and then start a jail.&lt;br /&gt;
Essentially, it reads the jail’s relevant block from the right quad file and executes it.&lt;br /&gt;
It is intelligent in that it won’t try to mount or vnconfig if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== jpid ==&lt;br /&gt;
 jpid &amp;lt;pid&amp;gt;&lt;br /&gt;
displays information about a process – including which jail owns it.&lt;br /&gt;
It’s the equivalent of running cat /proc/&amp;lt;pid&amp;gt;/status&lt;br /&gt;
&lt;br /&gt;
== canceljail ==&lt;br /&gt;
 canceljail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will stop a jail (the equivalent of stopjail), check for backups (offer to remove them &lt;br /&gt;
from the backup server and the backup.config), rename the vnfile, remove the dir, and &lt;br /&gt;
edit quad/safe. If passed an optional 2nd argument, it will not exit upon failing to kill&lt;br /&gt;
any processes owned by the jail. This is useful if you just want to cancel a jail which &lt;br /&gt;
is already stopped.&lt;br /&gt;
&lt;br /&gt;
== jls ==&lt;br /&gt;
 jls [-v]&lt;br /&gt;
Lists all jails running:&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF IP Address      Hostname                     Path&lt;br /&gt;
 101  135 69.55.224.148   mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#REF is the number of references or procs(?) running&lt;br /&gt;
&lt;br /&gt;
Running with -v will give you all IPs assigned to each jail (7.2 up)&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF Hostname                     Path                                  IP Address(es)&lt;br /&gt;
 101  139 mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR 69.55.224.148 69.55.234.85&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== startalljails ==&lt;br /&gt;
 startalljails&lt;br /&gt;
7.2+ only. This will parse through quad1 and start all jails. It utilizes lockfiles so it won’t try to start a jail more than once; therefore multiple instances can run in parallel without fear of starting a jail twice. If a jail startup gets stuck, you can ^C without fear of killing the script. IMPORTANT: before running startalljails you should make sure you ran preboot once, as it clears out all the lockfiles and enables startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== aaccheck.sh ==&lt;br /&gt;
 aaccheck.sh&lt;br /&gt;
displays the output of container list and task list from aaccli&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update jail scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== buildsafe ==&lt;br /&gt;
 buildsafe&lt;br /&gt;
creates safe files based on quads (automatically removing the fsck’s). This will destructively overwrite safe files&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be setup as a cronjob to watch processes on a jail when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== checkprio.pl ==&lt;br /&gt;
 checkprio.pl&lt;br /&gt;
will look for any process (other than the current shell’s csh, sh, sshd procs) with a non-normal priority and normalize it&lt;br /&gt;
&lt;br /&gt;
== diskusagemon == &lt;br /&gt;
 diskusagemon &amp;lt;mount point&amp;gt; &amp;lt;1k blocks&amp;gt;&lt;br /&gt;
watches a mount point’s disk use, when it reaches the level specified in the 2nd argument,&lt;br /&gt;
it exits. This is useful when doing a restore and you want to be paged as it’s nearing completion.&lt;br /&gt;
Best used as: &amp;lt;tt&amp;gt;diskusagemon /asd/asd 1234; pagexxx&amp;lt;/tt&amp;gt;&lt;br /&gt;
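&lt;br /&gt;
The core of diskusagemon is presumably a df polling loop along these lines (a sketch, not the actual script; the 60-second interval is a guess):&lt;br /&gt;

```shell
# Poll a mount point with df until its used 1k-blocks reach the target,
# then return, so that a chained pager command fires.
wait_for_usage() {
    mnt="$1"; target="$2"
    while :; do
        used=$(df -Pk "$mnt" | awk 'NR==2 { print $3 }')
        [ "$used" -ge "$target" ] && break
        sleep 60
    done
}

# usage: wait_for_usage /mnt/data2 1048576; pagexxx
```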
&lt;br /&gt;
== dumprestore ==&lt;br /&gt;
 dumprestore &amp;lt;dumpfile&amp;gt;&lt;br /&gt;
this is a perl expect script which automatically enters ‘1’ and ‘y’. It seems to cause restore to fail&lt;br /&gt;
to set owner permissions on large restores.&lt;br /&gt;
&lt;br /&gt;
== g ==&lt;br /&gt;
 g &amp;lt;search&amp;gt;&lt;br /&gt;
greps the quad/safe files for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gather.pl ==&lt;br /&gt;
 gather.pl&lt;br /&gt;
gathers up data about jails configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== ipfwbackup ==&lt;br /&gt;
 ipfwbackup&lt;br /&gt;
writes ipfw traffic count data to a logfile&lt;br /&gt;
&lt;br /&gt;
== ipfwreset ==&lt;br /&gt;
 ipfwreset&lt;br /&gt;
writes ipfw traffic count data to a logfile and resets counters to 0&lt;br /&gt;
&lt;br /&gt;
== js ==&lt;br /&gt;
 js&lt;br /&gt;
output varies by OS version, but generally provides information about the base jail:&lt;br /&gt;
- which vn’s are in use&lt;br /&gt;
- disk usage&lt;br /&gt;
- info about the contents of quads&lt;br /&gt;
- the # of inodes represented by the jails contained in the group (133.2 in the example below), and how many jails per data mount, as well as subtotals&lt;br /&gt;
- ips bound to the base machine but not in use by a jail&lt;br /&gt;
- free gvinum volumes, or unused vn’s or used md’s&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/local/jail/rc.d/quad1:&lt;br /&gt;
        /mnt/data1 133.2 (1)&lt;br /&gt;
        /mnt/data2 1040.5 (7)&lt;br /&gt;
        total 1173.7 (8)&lt;br /&gt;
/usr/local/jail/rc.d/quad2:&lt;br /&gt;
        /mnt/data1 983.4 (6)&lt;br /&gt;
        total 983.4 (6)&lt;br /&gt;
/usr/local/jail/rc.d/quad3:&lt;br /&gt;
        /mnt/data1 693.4 (4)&lt;br /&gt;
        /mnt/data2 371.6 (3)&lt;br /&gt;
        total 1065 (7)&lt;br /&gt;
/usr/local/jail/rc.d/quad4:&lt;br /&gt;
        /mnt/data1 466.6 (3)&lt;br /&gt;
        /mnt/data2 882.2 (5)&lt;br /&gt;
        total 1348.8 (8)&lt;br /&gt;
/mnt/data1: 2276.6 (14)&lt;br /&gt;
/mnt/data2: 2294.3 (15)&lt;br /&gt;
&lt;br /&gt;
Available IPs:&lt;br /&gt;
69.55.230.11 69.55.230.13 69.55.228.200&lt;br /&gt;
&lt;br /&gt;
Available volumes:&lt;br /&gt;
v78 /mnt/data2 2G&lt;br /&gt;
v79 /mnt/data2 2G&lt;br /&gt;
v80 /mnt/data2 2G&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg  - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== makevirginjail ==&lt;br /&gt;
 makevirginjail&lt;br /&gt;
Only on some systems, makes an empty jail (doesn&#039;t do restore step)&lt;br /&gt;
&lt;br /&gt;
== mb == &lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== notify.sh ==&lt;br /&gt;
 notify.sh&lt;br /&gt;
emails reboot@johncompanies.com – intended to be called at boot time to alert us to a machine which panics and reboots and isn’t caught by bb or castle.&lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatch ==&lt;br /&gt;
 orphanedbackupwatch&lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to delete them&lt;br /&gt;
&lt;br /&gt;
== postboot ==&lt;br /&gt;
 postboot&lt;br /&gt;
to be run after a machine reboot and quad/safe’s are done executing. It will:&lt;br /&gt;
* do chmod 666 on each jail’s /dev/null&lt;br /&gt;
* add ipfw counts&lt;br /&gt;
* run jailpsall (so you can see if a configured jail isn’t running)&lt;br /&gt;
&lt;br /&gt;
== preboot ==&lt;br /&gt;
 preboot&lt;br /&gt;
to be run before running quad/safe – checks for misconfigurations: &lt;br /&gt;
* a jail configured in a quad but not a safe&lt;br /&gt;
* a jail is listed more than once in a quad&lt;br /&gt;
* the ip assigned to a jail isn’t configured on the machine&lt;br /&gt;
* alias numbering skips in the rc.conf (resulting in the above)&lt;br /&gt;
* orphaned vnfiles that aren&#039;t mentioned in a quad/safe&lt;br /&gt;
* ip mismatches between dir/vnfile name and the jail’s ip&lt;br /&gt;
* dir/vnfiles in quad/safe that don’t exist&lt;br /&gt;
&lt;br /&gt;
== quadanalyze.pl ==&lt;br /&gt;
 quadanalyze.pl&lt;br /&gt;
called by js, produces the info (seen above with js explanation) about the contents of quad (inode count, # of jails, etc.)&lt;br /&gt;
&lt;br /&gt;
== rsync.backup ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== taskdone ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was executed as the subject&lt;br /&gt;
&lt;br /&gt;
== topten ==&lt;br /&gt;
 topten&lt;br /&gt;
summarizes the top 10 traffic users (called by ipfwreset)&lt;br /&gt;
&lt;br /&gt;
== trafficgather.pl ==&lt;br /&gt;
 trafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by jail to support@johncompanies.com and payments@johncompanies.com. Optional arguments are year and month (must be in the past). If not passed, it assumes last month. Relies on traffic logs created by ipfwreset and ipfwbackup&lt;br /&gt;
&lt;br /&gt;
== trafficwatch.pl ==&lt;br /&gt;
 trafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a jail reaches the warning level (35G) and the limit (40G). We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== trafstats ==&lt;br /&gt;
 trafstats&lt;br /&gt;
writes ipfw traffic usage info by jail to a file called jc_traffic_dump in each jail’s / dir&lt;br /&gt;
&lt;br /&gt;
== truncate_jailmake ==&lt;br /&gt;
 truncate_jailmake&lt;br /&gt;
a version of jailmake which creates truncated vnfiles.&lt;br /&gt;
&lt;br /&gt;
== vb ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/bin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs (freebsd) ==&lt;br /&gt;
 vs&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/safe&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 vq&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/quad&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dumpremote ==&lt;br /&gt;
 dumpremote &amp;lt;user@machine&amp;gt; &amp;lt;/remote/location/file-dump&amp;gt; &amp;lt;vnX&amp;gt;&lt;br /&gt;
ex: dumpremote user@10.1.4.117 /mnt/data3/remote.echoditto.com-dump 7&lt;br /&gt;
this will dump a vn filesystem to a remote machine and location&lt;br /&gt;
&lt;br /&gt;
== oversellcheck ==&lt;br /&gt;
 oversellcheck&lt;br /&gt;
displays how much a disk is oversold or undersold taking into account truncated vn files. Only for use on 4.x systems&lt;br /&gt;
&lt;br /&gt;
== mvbackups (freebsd) ==&lt;br /&gt;
 mvbackups &amp;lt;dir&amp;gt; (1.1.1.1-col00001-DIR) &amp;lt;target_machine&amp;gt; (jail1) &amp;lt;target_dir&amp;gt; (data1)&lt;br /&gt;
moves backups from one location to another on the backup server, provides the option to remove entries from the current backup.config, and prints a simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== jailnice ==&lt;br /&gt;
 jailnice &amp;lt;hostname&amp;gt;&lt;br /&gt;
applies &amp;lt;tt&amp;gt;renice 19 [PID]&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;rtprio 31 -[PID]&amp;lt;/tt&amp;gt; to each process in the given jail&lt;br /&gt;
&lt;br /&gt;
== dumpremoterestore ==&lt;br /&gt;
 dumpremoterestore &amp;lt;device&amp;gt; &amp;lt;ip of target machine&amp;gt; &amp;lt;dir on target machine&amp;gt;&lt;br /&gt;
ex: dumpremoterestore /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&lt;br /&gt;
dumps a device and restores it to a directory on a remote machine. Requires that you enable root ssh on the &lt;br /&gt;
remote machine.&lt;br /&gt;
&lt;br /&gt;
== psj ==&lt;br /&gt;
 psj&lt;br /&gt;
shows just the procs running on the base system – a ps auxw but without jail’d procs present&lt;br /&gt;
&lt;br /&gt;
== perc5iraidchk ==&lt;br /&gt;
 perc5iraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2950 systems with Perc5/6 controllers&lt;br /&gt;
&lt;br /&gt;
== perc4eraidchk ==&lt;br /&gt;
 perc4eraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2850 systems with Perc4e/Di controllers&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS =&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vm) ==&lt;br /&gt;
&lt;br /&gt;
This applies only to new virts &amp;gt;= 4.x&lt;br /&gt;
&lt;br /&gt;
grab an ip from the ipmap (if opened from the pending cust screen it should take you to the right block). You can also run vzlist -a to see what block is generally in use. Try to find an IP that&#039;s in the same class-C block as IPs already on the box.&lt;br /&gt;
&lt;br /&gt;
1. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
  also, ping the IP address.&lt;br /&gt;
&lt;br /&gt;
2. put CT on whichever partition has more space&lt;br /&gt;
  df -h&lt;br /&gt;
&lt;br /&gt;
3.  vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; ) &lt;br /&gt;
 vm col00009 69.55.230.238 centos.testdave.com /vz1 dsmith@johncompanies.com centos-6-x86_64 LM&lt;br /&gt;
&lt;br /&gt;
4. copy veid, dir, ip and password to pending customer screen. activate customer&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vemakexxx) ==&lt;br /&gt;
&lt;br /&gt;
This applies to older virts with old templates. This should probably not be used at all anymore.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. look thru hist for ip&lt;br /&gt;
&lt;br /&gt;
2. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
&lt;br /&gt;
3. put ve on whichever partition has more space&lt;br /&gt;
 vemakerh9 &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; &amp;lt;hostname&amp;gt; &amp;lt;mount&amp;gt; &amp;lt;email&amp;gt; [gb disk]; &amp;lt;256|384|512&amp;gt; &amp;lt;veid&amp;gt;&lt;br /&gt;
 vemakerh9 866 69.55.226.109 ngentu.com /vz1 ayo@ngantu.com,asd@asd.com 5; 256 866&lt;br /&gt;
&lt;br /&gt;
4. copy (veid), dir, and ip to pending customer screen (pass set to p455agfa)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: We use VEID (Virtual Environment ID) and CTID (Container ID) interchangeably, and likewise VE and CT. They mean the same thing.&lt;br /&gt;
VZPP = VirtuoZzo Power Panel (the control panel for each CT)&lt;br /&gt;
&lt;br /&gt;
All Linux systems exist in /vz, /vz1 or /vz2 - since each Linux machine holds roughly 60-90 customers, there will be roughly 30-45 in each partition.&lt;br /&gt;
&lt;br /&gt;
The actual filesystem of the system in question is in:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/private/(VEID)&lt;br /&gt;
&lt;br /&gt;
Where VEID is the identifier for that system - an all-numeric string larger than 100.&lt;br /&gt;
&lt;br /&gt;
The actual mounted and running systems are in the corresponding:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/root/(VEID)&lt;br /&gt;
&lt;br /&gt;
But we rarely interact with any system from this mount point.&lt;br /&gt;
&lt;br /&gt;
You should never need to touch the root portion of their system – however, you can traverse their filesystem by going to &amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/root&amp;lt;/tt&amp;gt; (&amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/fs/root&amp;lt;/tt&amp;gt; on 4.x systems). The root of their filesystem is that directory, and their entire system is underneath it.&lt;br /&gt;
&lt;br /&gt;
Every VE has a startup script in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts&amp;lt;/tt&amp;gt;  (which is symlinked as &amp;lt;tt&amp;gt;/vzconf&amp;lt;/tt&amp;gt; on all systems) - the VE startup script is simply named &amp;lt;tt&amp;gt;(VEID).conf&amp;lt;/tt&amp;gt; - it contains all the system parameters for that VE:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Configuration file generated by vzsplit for 60 VE&lt;br /&gt;
# on HN with total amount of physical mem 2011 Mb&lt;br /&gt;
&lt;br /&gt;
VERSION=&amp;quot;2&amp;quot;&lt;br /&gt;
CLASSID=&amp;quot;2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ONBOOT=&amp;quot;yes&amp;quot;&lt;br /&gt;
&lt;br /&gt;
KMEMSIZE=&amp;quot;8100000:8200000&amp;quot;&lt;br /&gt;
LOCKEDPAGES=&amp;quot;322:322&amp;quot;&lt;br /&gt;
PRIVVMPAGES=&amp;quot;610000:615000&amp;quot;&lt;br /&gt;
SHMPAGES=&amp;quot;33000:34500&amp;quot;&lt;br /&gt;
NUMPROC=&amp;quot;410:415&amp;quot;&lt;br /&gt;
PHYSPAGES=&amp;quot;0:2147483647&amp;quot;&lt;br /&gt;
VMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
OOMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
NUMTCPSOCK=&amp;quot;1210:1215&amp;quot;&lt;br /&gt;
NUMFLOCK=&amp;quot;107:117&amp;quot;&lt;br /&gt;
NUMPTY=&amp;quot;19:19&amp;quot;&lt;br /&gt;
NUMSIGINFO=&amp;quot;274:274&amp;quot;&lt;br /&gt;
TCPSNDBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
TCPRCVBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
OTHERSOCKBUF=&amp;quot;900000:950000&amp;quot;&lt;br /&gt;
DGRAMRCVBUF=&amp;quot;200000:200000&amp;quot;&lt;br /&gt;
NUMOTHERSOCK=&amp;quot;650:660&amp;quot;&lt;br /&gt;
DCACHE=&amp;quot;786432:818029&amp;quot;&lt;br /&gt;
NUMFILE=&amp;quot;7500:7600&amp;quot;&lt;br /&gt;
AVNUMPROC=&amp;quot;51:51&amp;quot;&lt;br /&gt;
IPTENTRIES=&amp;quot;155:155&amp;quot;&lt;br /&gt;
DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
DISKINODES=&amp;quot;400000:420000&amp;quot;&lt;br /&gt;
CPUUNITS=&amp;quot;1412&amp;quot;&lt;br /&gt;
QUOTAUGIDLIMIT=&amp;quot;2000&amp;quot;&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/636&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/636&amp;quot;&lt;br /&gt;
NAMESERVER=&amp;quot;69.55.225.225 69.55.230.3&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;vzredhat-7.3/20030305&amp;quot;&lt;br /&gt;
VE_TYPE=&amp;quot;regular&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.225.229&amp;quot;&lt;br /&gt;
HOSTNAME=&amp;quot;textengine.net&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the hostname is set here, the disk space is set here, the number of inodes, the number of files that can be open, the number of tcp sockets, etc. - all are set here.&lt;br /&gt;
&lt;br /&gt;
In fact, everything that can be set on this customer system is set in this conf file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All interaction with the customer system is done with the VEID.  You start the system by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl start 999&lt;br /&gt;
&lt;br /&gt;
You stop it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl stop 999&lt;br /&gt;
&lt;br /&gt;
You execute commands in it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl exec 999 df -k&lt;br /&gt;
&lt;br /&gt;
You enter into it, via a root-shell backdoor with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and you set parameters for the system, while it is still running, with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; is the most commonly used command - we have aliased &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; since we use it so often. We’ll continue to use &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; in our examples, but feel free to use just &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say the user wants more diskspace.  You can cat their conf file and see:&lt;br /&gt;
&lt;br /&gt;
 DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So right now they have 4 GB of space (DISKSPACE is expressed in 1 KB blocks, so 4194304 = 4 GB).  You can then raise it to roughly 6 GB with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
IMPORTANT:  all issuances of the vzctl set command need to end with &amp;lt;tt&amp;gt;--save&amp;lt;/tt&amp;gt; - if they don&#039;t, the setting will be applied, but it will not be written to the conf file, and they will not have those settings next time they boot.&lt;br /&gt;
&lt;br /&gt;
All of the tunables in the conf file can be set with the vzctl set command.  Note that in the conf file, and on the vzctl set command line, we always issue two numbers separated by a colon - that is because we are setting the soft and hard limits.  Always set the hard limit slightly above the soft limit, as it is in the conf file for all those settings.&lt;br /&gt;
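As a sketch of that rule, the hard limit can be derived mechanically from the soft limit. This is a hypothetical helper, not an existing command on the virts:&lt;br /&gt;

```shell
# Hypothetical helper: print a "soft:hard" pair with the hard limit
# about 2% above the soft limit, matching the style of the conf files.
mklimit() {
  soft=$1
  hard=$(( soft + soft / 50 ))   # +2%
  printf '%s:%s\n' "$soft" "$hard"
}

mklimit 6100000   # prints 6100000:6222000
```

e.g. &amp;lt;tt&amp;gt;vzctl set 999 --diskspace $(mklimit 6100000) --save&amp;lt;/tt&amp;gt;&lt;br /&gt;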
&lt;br /&gt;
There are also things you can set with `&amp;lt;tt&amp;gt;vzctl set&amp;lt;/tt&amp;gt;` that are not in the conf file as settings, per se.  For instance, you can add IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --save&lt;br /&gt;
&lt;br /&gt;
or multiple IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --ipadd 10.10.20.30 --save&lt;br /&gt;
&lt;br /&gt;
or change the hostname:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --hostname www.example.com --save&lt;br /&gt;
&lt;br /&gt;
You can even set the nameservers:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --nameserver 198.78.66.4 --nameserver 198.78.70.180 --save&lt;br /&gt;
&lt;br /&gt;
Although you probably will never do that.&lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --disabled yes --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (&amp;lt;=3.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --onboot=no --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from using his control panel:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --offline_management=no --save &lt;br /&gt;
&lt;br /&gt;
You can suspend a VPS, so it can be resumed in the same state it was in when it was stopped (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl suspend 999&lt;br /&gt;
&lt;br /&gt;
and to resume it:&lt;br /&gt;
&lt;br /&gt;
 vzctl resume 999&lt;br /&gt;
&lt;br /&gt;
To see which CT owns a process:&lt;br /&gt;
 vzpid &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To mount an unmounted VE:&lt;br /&gt;
 vzctl mount 827&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see network stats for CT&#039;s:&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
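To spot the heaviest senders in that table, a quick awk pass works. This is a sketch over sample rows (in practice, pipe &amp;lt;tt&amp;gt;vznetstat&amp;lt;/tt&amp;gt; itself into the awk):&lt;br /&gt;

```shell
# Sketch: rank CTs by Output(bytes) using vznetstat-style columns.
# The sample rows below stand in for live vznetstat output.
sample='24218     1            484M             39M
771       1           4234M            437M
763       1            223M             82M'

# Strip the M suffix from the output column, sort descending
top=$(printf '%s\n' "$sample" \
  | awk '{ gsub(/M$/, "", $3); print $3, $1 }' \
  | sort -rn | head -1)
echo "$top"   # the top talker: "4234 771"
```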
&lt;br /&gt;
&lt;br /&gt;
One thing that sometimes comes up on older systems that we created with smaller defaults is running out of inodes.  The user will email and say they cannot create any more files or grow any files larger, but they will also say that they are not out of diskspace ... they are running:&lt;br /&gt;
&lt;br /&gt;
 df -k&lt;br /&gt;
&lt;br /&gt;
and seeing how much space is free - and they are not out of space.  They are most likely out of inodes - which they would see by running:&lt;br /&gt;
&lt;br /&gt;
 df -i&lt;br /&gt;
&lt;br /&gt;
So, the first thing you should do is enter their system with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and run:  &amp;lt;tt&amp;gt;df -i&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to confirm your theory.  Then exit their system.  Then simply cat their conf file and see what their inodes are set to (probably 200000:200000, since that was the old default on the older systems) and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskinodes 400000:400000 --save&lt;br /&gt;
&lt;br /&gt;
If they are not out of inodes, then a good possibility is that they have maxed out their numfile configuration variable, which controls how many files they can have in their system.  The current default is 7500 (which nobody has ever hit), but the old default was as low as 2000, so you would run something like:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --numfile 7500:7500 --save&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You cannot start or stop a VE if your pwd is its private (/vz/private/999) or root (/vz/root/999) directory, or anywhere below them.&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (linux) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all VE’s back up as soon as possible. Note the time; you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages output. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log – these will also likely need to be sent to Virtuozzo for interpretation. If the messages are spewing too fast, hit ^O + H to start a screen log dump which you can observe after the machine is rebooted. Additionally, if the machine is responsive, you can get a trace to send to Virtuozzo by hooking up a KVM and entering these 3 sequences:&lt;br /&gt;
&amp;lt;pre&amp;gt;alt+print screen+m&lt;br /&gt;
alt+print screen+p&lt;br /&gt;
alt+print screen+t&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If there are no messages, the machine may just be really busy - wait a bit (5-10 min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. if you can’t ssh into the DRAC card and issue racadm serveraction hardreset), then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console (&amp;lt;tt&amp;gt;tip virtxx&amp;lt;/tt&amp;gt;) immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts- look out for the RAID card output- does everything look healthy? The output may be scrambled, look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple times during the boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly then you won’t get any console output, 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
=== Start containers/VE&#039;s/VPSs ===&lt;br /&gt;
When the machine begins to start VE’s, it’s safe to leave the console and login via ssh. All virts should be set to auto-start all the VEs after a crash. Further, most (newer) virts are set to “fastboot” their VE’s (to find out, do:&lt;br /&gt;
 grep -i fast /etc/sysconfig/vz &lt;br /&gt;
and look for &amp;lt;tt&amp;gt;VZFASTBOOT=yes&amp;lt;/tt&amp;gt;). If this was set prior to the machine’s crash (setting it after the machine boots will have no effect until the vz service is restarted), it will start each VE as fast as possible, in serial, then go through each VE (serially), shutting it down, running a vzquota (disk usage) check, and bringing it back up. The benefit is that all VE’s are brought up quickly (within 15 min or so depending on the number); the downside is that a customer watching closely will notice 2 outages – 1st the machine crash, 2nd their quota check (which will be a much shorter downtime - on the order of a few minutes). &lt;br /&gt;
&lt;br /&gt;
Where “fastboot” is not set to yes (i.e. on quar1), vz will start the VEs consecutively, checking the quotas one at a time, and the 60th VE may not start until an hour or two later - this is not acceptable.&lt;br /&gt;
&lt;br /&gt;
The good news is, if you run vzctl start for a VE that is already started, you will simply get an error: &amp;lt;tt&amp;gt;VE is already started&amp;lt;/tt&amp;gt;.  Further, if you attempt to vzctl start a VE that is in the process of being started, you will simply get an error: unable to lock VE.  So, there is no danger in simply running scripts to start smaller sets of VEs.  If the system is not autostarting, then there is no issue, and even if it does, when it conflicts, one process (yours or the autostart) will lose, and just move on to the next one.&lt;br /&gt;
&lt;br /&gt;
A script has been written to assist with VE starts: [[#startvirt.pl|startvirt.pl]], which will start 6 VE’s at once until there are none left.  If startvirt.pl is used on a system where “fastboot” was on, it will circumvent the fastboot for VE’s started by startvirt.pl – they will go through the complete quota check before starting - therefore this is not advisable when a system has crashed. When a system is booted cleanly, and there&#039;s no need for vzquota checks, startvirt.pl is safe and advisable to run.&lt;br /&gt;
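The pattern startvirt.pl implements - keeping several starts in flight at once - can be sketched with xargs -P. This is illustrative only: &amp;lt;tt&amp;gt;echo&amp;lt;/tt&amp;gt; stands in for &amp;lt;tt&amp;gt;vzctl start&amp;lt;/tt&amp;gt; and the CTID list is sample data:&lt;br /&gt;

```shell
# Sketch of the startvirt.pl idea: feed stopped CTIDs to xargs,
# running up to 6 starts in parallel. `echo` stands in for `vzctl start`.
ctids='101 running
102 stopped
103 stopped
104 stopped'

started=$(printf '%s\n' "$ctids" \
  | awk '$2 == "stopped" { print $1 }' \
  | xargs -r -n1 -P6 echo started \
  | sort)
printf '%s\n' "$started"
```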
&lt;br /&gt;
=== Make sure all containers are running ===&lt;br /&gt;
You can quickly get a feel for how many ve’s are started by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt4 log]# vs&lt;br /&gt;
VEID 16066 exist mounted running&lt;br /&gt;
VEID 16067 exist mounted running&lt;br /&gt;
VEID 4102 exist mounted running&lt;br /&gt;
VEID 4112 exist mounted running&lt;br /&gt;
VEID 4116 exist mounted running&lt;br /&gt;
VEID 4122 exist mounted running&lt;br /&gt;
VEID 4123 exist mounted running&lt;br /&gt;
VEID 4124 exist mounted running&lt;br /&gt;
VEID 4132 exist mounted running&lt;br /&gt;
VEID 4148 exist mounted running&lt;br /&gt;
VEID 4151 exist mounted running&lt;br /&gt;
VEID 4155 exist mounted running&lt;br /&gt;
VEID 42 exist mounted running&lt;br /&gt;
VEID 432 exist mounted running&lt;br /&gt;
VEID 434 exist mounted running&lt;br /&gt;
VEID 442 exist mounted running&lt;br /&gt;
VEID 450 exist mounted running&lt;br /&gt;
VEID 452 exist mounted running&lt;br /&gt;
VEID 453 exist mounted running&lt;br /&gt;
VEID 454 exist mounted running&lt;br /&gt;
VEID 462 exist mounted running&lt;br /&gt;
VEID 463 exist mounted running&lt;br /&gt;
VEID 464 exist mounted running&lt;br /&gt;
VEID 465 exist mounted running&lt;br /&gt;
VEID 477 exist mounted running&lt;br /&gt;
VEID 484 exist mounted running&lt;br /&gt;
VEID 486 exist mounted running&lt;br /&gt;
VEID 490 exist mounted running&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So to see how many ve’s have started:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep running | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And to see how many haven’t:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep down | wc -l&lt;br /&gt;
     0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And how many we should have running:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another tool you can use to see which VE’s have started, among other things, is [[#vzstat|vzstat]]. It will give you CPU, memory, and other stats on each VE and the overall system. It’s a good thing to watch as VE’s are starting (note the VENum parameter, which tells you how many have started):&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;4:37pm, up 3 days,  5:31,  1 user, load average: 1.57, 1.68, 1.79&lt;br /&gt;
VENum 40, procs 1705: running 2, sleeping 1694, unint 0, zombie 9, stopped 0&lt;br /&gt;
CPU [ OK ]: VEs  57%, VE0   0%, user   8%, sys   7%, idle  85%, lat(ms) 412/2&lt;br /&gt;
Mem [ OK ]: total 6057MB, free 9MB/54MB (low/high), lat(ms) 0/0&lt;br /&gt;
Swap [ OK ]: tot 6142MB, free 4953MB, in 0.000MB/s, out 0.000MB/s&lt;br /&gt;
Net [ OK ]: tot: in  0.043MB/s  402pkt/s, out  0.382MB/s 4116pkt/s&lt;br /&gt;
Disks [ OK ]: in 0.002MB/s, out 0.000MB/s&lt;br /&gt;
&lt;br /&gt;
  VEID ST    %VM     %KM         PROC    CPU     SOCK FCNT MLAT IP&lt;br /&gt;
     1 OK 1.0/17  0.0/0.4    0/32/256 0.0/0.5 39/1256    0    9 69.55.227.152&lt;br /&gt;
    21 OK 1.3/39  0.1/0.2    0/46/410 0.2/2.8 23/1860    0    6 69.55.239.60&lt;br /&gt;
   133 OK 3.1/39  0.1/0.3    1/34/410 6.3/2.8 98/1860    0    0 69.55.227.147&lt;br /&gt;
   263 OK 2.3/39  0.1/0.2    0/56/410 0.3/2.8 34/1860    0    1 69.55.237.74&lt;br /&gt;
   456 OK  17/39  0.1/0.2   0/100/410 0.1/2.8 48/1860    0   11 69.55.236.65&lt;br /&gt;
   476 OK 0.6/39  0.0/0.2    0/33/410 0.1/2.8 96/1860    0   10 69.55.227.151&lt;br /&gt;
   524 OK 1.8/39  0.1/0.2    0/33/410 0.0/2.8 28/1860    0    0 69.55.227.153&lt;br /&gt;
   594 OK 3.1/39  0.1/0.2    0/45/410 0.0/2.8 87/1860    0    1 69.55.239.40&lt;br /&gt;
   670 OK 7.7/39  0.2/0.3    0/98/410 0.0/2.8 64/1860    0  216 69.55.225.136&lt;br /&gt;
   691 OK 2.0/39  0.1/0.2    0/31/410 0.0/0.7 25/1860    0    1 69.55.234.96&lt;br /&gt;
   744 OK 0.1/17  0.0/0.5    0/10/410 0.0/0.7  7/1860    0    6 69.55.224.253&lt;br /&gt;
   755 OK 1.1/39  0.0/0.2    0/27/410 0.0/2.8 33/1860    0    0 192.168.1.4&lt;br /&gt;
   835 OK 1.1/39  0.0/0.2    0/19/410 0.0/2.8  5/1860    0    0 69.55.227.134&lt;br /&gt;
   856 OK 0.3/39  0.0/0.2    0/13/410 0.0/2.8 16/1860    0    0 69.55.227.137&lt;br /&gt;
   936 OK 3.2/52  0.2/0.4    0/75/410 0.2/0.7 69/1910    0    8 69.55.224.181&lt;br /&gt;
  1020 OK 3.9/39  0.1/0.2    0/60/410 0.1/0.7 55/1860    0    8 69.55.227.52&lt;br /&gt;
  1027 OK 0.3/39  0.0/0.2    0/14/410 0.0/2.8 17/1860    0    0 69.55.227.83&lt;br /&gt;
  1029 OK 1.9/39  0.1/0.2    0/48/410 0.2/2.8 25/1860    0    5 69.55.227.85&lt;br /&gt;
  1032 OK  12/39  0.1/0.4    0/80/410 0.0/2.8 41/1860    0    8 69.55.227.90&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you are all done, you will want to make sure that all the VEs really did get started, run vs one more time.&lt;br /&gt;
&lt;br /&gt;
Note the time all VE’s are back up, enter it into the crash log entry, and save it.&lt;br /&gt;
&lt;br /&gt;
Occasionally, a VE will not start automatically. The most common reason for a VE not to come up normally is that it was at its disk limit before the crash, and will not start since it is over the limit. To overcome this, set the disk space to the current usage level (the system will give this to you when it fails to start), start the VE, then re-set the disk space back to the prior level. Lastly, contact the customer to let them know they’re out of disk (or allocate more disk if they&#039;re entitled to more).&lt;br /&gt;
&lt;br /&gt;
== Hitting performance barriers and fixing them ==&lt;br /&gt;
&lt;br /&gt;
There are multiple modes Virtuozzo offers to allocate resources to a VE. We utilize 2: SLM and UBC parameters.&lt;br /&gt;
On our 4.x systems, we use all SLM – it’s simpler to manage and understand. There are a few systems on virt19/18 that may also use SLM. Everything else uses UBC. &lt;br /&gt;
You can tell an SLM VE by:&lt;br /&gt;
&lt;br /&gt;
 SLMMODE=&amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in their conf file. &lt;br /&gt;
&lt;br /&gt;
TODO: detail SLM modes and parameters.&lt;br /&gt;
&lt;br /&gt;
If someone is in SLM mode and they hit memory resource limits, they simply need to upgrade to more memory.&lt;br /&gt;
&lt;br /&gt;
The following applies to everyone else (UBC).&lt;br /&gt;
&lt;br /&gt;
Customers will often email and say that they are getting out of memory errors - a common one is &amp;quot;cannot fork&amp;quot; ... basically, anytime you see something odd like this, it means they are hitting one of their limits that is in place in their conf file.&lt;br /&gt;
&lt;br /&gt;
The conf file, however, simply shows their limits - how do we know what they are currently at ?&lt;br /&gt;
&lt;br /&gt;
The answer is a file called &amp;lt;tt&amp;gt;/proc/user_beancounters&amp;lt;/tt&amp;gt; - this file contains the current status (and peaks) of their performance settings, and also counts how many times they have hit the barrier.  The output of the file looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;764: kmemsize         384113     898185    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          0        322        322          0&lt;br /&gt;
     privvmpages        1292       7108     610000     615000          0&lt;br /&gt;
     shmpages            270        528      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc               8         23        410        415          0&lt;br /&gt;
     physpages            48       5624          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages        641       6389      13019 2147483647          0&lt;br /&gt;
     numtcpsock            3         21       1210       1215          0&lt;br /&gt;
     numflock              1          3        107        117          0&lt;br /&gt;
     numpty                0          2         19         19          0&lt;br /&gt;
     numsiginfo            0          4        274        274          0&lt;br /&gt;
     tcpsndbuf             0      80928    1800000    1900000          0 &lt;br /&gt;
     tcprcvbuf             0     108976    1800000    1900000          0&lt;br /&gt;
     othersockbuf       2224      37568     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4272     200000     200000          0&lt;br /&gt;
     numothersock          3          9        650        660          0&lt;br /&gt;
     dcachesize        53922     100320     786432     818029          0&lt;br /&gt;
     numfile             161        382       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first column is the name of the counter in question - the same names we saw in the system&#039;s conf file.  The second column is the _current_ value of that counter, the third column is the max that counter has ever risen to, the fourth column is the soft limit, and the fifth column is the hard limit (the same numbers as in that system&#039;s conf file).&lt;br /&gt;
&lt;br /&gt;
The sixth number is the failcount - how many times the current usage has risen to hit the barrier.  It will increase as soon as the current usage hits the soft limit.&lt;br /&gt;
&lt;br /&gt;
The problem with /proc/user_beancounters is that it actually contains that set of data for every running VE - so you can&#039;t just cat /proc/user_beancounters - it is too long and you get info for every other running system.&lt;br /&gt;
&lt;br /&gt;
You can vzctl enter the system and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 9999&lt;br /&gt;
 cat /proc/user_beancounters&lt;br /&gt;
&lt;br /&gt;
inside their system, and you will just see the stats for their particular system, but entering their system every time you want to see it is cumbersome.&lt;br /&gt;
&lt;br /&gt;
So, I wrote a simple script called &amp;quot;vzs&amp;quot; which simply greps for the VEID, and spits out the next 20 or so lines (however many lines there are in the output, I forget) after it.  For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzs 765:&lt;br /&gt;
765: kmemsize        2007936    2562780    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          8        322        322          0&lt;br /&gt;
     privvmpages       26925      71126     610000     615000          0&lt;br /&gt;
     shmpages          16654      16750      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc              41         57        410        415          0&lt;br /&gt;
     physpages          1794      49160          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages       4780      51270      13019 2147483647          0&lt;br /&gt;
     numtcpsock           23         37       1210       1215          0&lt;br /&gt;
     numflock             17         39        107        117          0&lt;br /&gt;
     numpty                1          3         19         19          0&lt;br /&gt;
     numsiginfo            0          6        274        274          0&lt;br /&gt;
     tcpsndbuf         22240     333600    1800000    1900000          0&lt;br /&gt;
     tcprcvbuf             0     222656    1800000    1900000          0&lt;br /&gt;
     othersockbuf     104528     414944     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4448     200000     200000          0&lt;br /&gt;
     numothersock         73        105        650        660          0&lt;br /&gt;
     dcachesize       247038     309111     786432     818029          0&lt;br /&gt;
     numfile             904       1231       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That showed us just the portion of /proc/user_beancounters for system 765.&lt;br /&gt;
&lt;br /&gt;
When you run the vzs command, always add a : after the VEID.&lt;br /&gt;
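The effect of vzs can be approximated with a plain grep -A - a sketch, not the actual script (the real file carries ~24 lines per VE, so vzs uses a correspondingly larger -A):&lt;br /&gt;

```shell
# Sketch of vzs: pull one VE's slice out of /proc/user_beancounters.
# Two abbreviated sample entries stand in for the real file; each VE's
# header line starts with "VEID:" at column 0.
ubc='764: kmemsize   384113  898185  8100000  8200000  0
     numproc        8       23      410      415      0
765: kmemsize   2007936 2562780 8100000  8200000  0
     numproc        41      57      410      415      0'

vzs_sketch() {
  # $1 is the VEID; the trailing colon anchors the match to the header
  printf '%s\n' "$ubc" | grep -A 1 "^$1:"
}

vzs_sketch 765
```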
&lt;br /&gt;
So, if a customer complains about out of memory errors, or no more files, or no more ptys, or just has a nonspecific complaint about processes dying, etc., the very first thing you need to do is check their beancounters with vzs.  Usually you will spot an item that has a high failcount and needs to be upped.&lt;br /&gt;
&lt;br /&gt;
At that point you could simply up the counter with `vzctl set`.  Generally pick a number 10-20% higher than the old one, and make the hard limit slightly larger than the soft limit. However, our systems now come in several levels and those levels have more/different memory allocations. If someone is complaining about something other than a memory limit (pty, numiptent, numflock), it’s generally safe to increase it, at least to the same level as what’s in the /vzconf/4unlimited file on the newest virt. If someone is hitting a memory limit, first make sure they are given what they deserve:&lt;br /&gt;
&lt;br /&gt;
(refer to mgmt -&amp;gt; payments -&amp;gt; packages)&lt;br /&gt;
&lt;br /&gt;
To set those levels, you use the [[#setmem|setmem]] command. &lt;br /&gt;
&lt;br /&gt;
The alternate (DEPRECATED) method would be to use one of these commands:&lt;br /&gt;
256 &amp;lt;veid&amp;gt;&lt;br /&gt;
300 &amp;lt;veid&amp;gt;&lt;br /&gt;
384 &amp;lt;veid&amp;gt;&lt;br /&gt;
512 &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the levels were not right (you’d run vzs &amp;lt;veid&amp;gt; before and after to see the effect), tell the customer they’ve been adjusted and be done with it. If the levels were right, tell the customer they must upgrade to a higher package, tell them how to see their level (control panel), and that they can reboot their system to escape this lockup condition.&lt;br /&gt;
&lt;br /&gt;
Customers may also complain that their site is totally unreachable or down ... if the underlying machine is up, and all seems well, you may notice in the beancounters that network-specific counters are failing - such as numtcpsock, tcpsndbuf or tcprcvbuf.  This will keep them from talking on the network and make it seem like their system is down.  Again, just up the limits and things should be fine.&lt;br /&gt;
&lt;br /&gt;
On virts 1-4, you should first look at the default settings for that item on a later virt, such as virt 8 - we have increased the defaults a lot since the early machines.  So, if you are going to up a counter on virt2, instead of upping it by 10-20%, instead up it to the new default that you see on virt8.&lt;br /&gt;
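To spot a failing counter without eyeballing the whole table, scan for a nonzero failcnt in the last column. A sketch over sample lines (in practice, pipe the vzs output through the awk):&lt;br /&gt;

```shell
# Sketch: print each counter whose failcnt (last field) is nonzero.
# Sample beancounter lines; the first line also carries the "VEID:" field,
# so the counter name is always the 6th-from-last field.
bc='765: kmemsize     2007936 2562780 8100000 8200000 0
    numtcpsock       1215    1215    1210    1215    844
    numfile          904     1231    7500    7600    0'

failing=$(printf '%s\n' "$bc" | awk '$NF > 0 { print $(NF-5), $NF }')
printf '%s\n' "$failing"   # prints: numtcpsock 844
```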
&lt;br /&gt;
== Moving a VE to another virt (migrate/migrateonline) ==&lt;br /&gt;
&lt;br /&gt;
This will take a while to complete - and it is best to do this at night when the load is light on both machines.&lt;br /&gt;
&lt;br /&gt;
There are different methods for this, depending on which version of virtuozzo is installed on the src. and dst. virt. &lt;br /&gt;
To check which version is running: &lt;br /&gt;
 [root@virt12 private]# cat /etc/virtuozzo-release&lt;br /&gt;
 Virtuozzo release 2.6.0&lt;br /&gt;
&lt;br /&gt;
Ok, let&#039;s say that the VE is 1212, and vital stats are:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1212&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/1212&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/1212&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;fedora-core-2/20040903&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.229.84&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;devel-fc2/20040903 php-fc2/20040813 mysql-fc2/20040812 postgresql-fc2/20040813 mod_perl-fc2/20040812 mod_ssl-fc2/20040811 jre-fc2/20040823 jdk-fc2/20040823 mailman-fc2/20040823 analog-fc2/20040824 proftpd-fc2/20040818 tomcat-fc2/20040823 usermin-fc2/20040909 webmin-fc2/20040909 uw-imap-fc2/20040830 phpBB-fc2/20040831 spamassassin-fc2/20040910 PostNuke-fc2/20040824 sl-webalizer-fc2/20040&lt;br /&gt;
818&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# vzctl exec 1212 df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  405M  3.7G  10% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From this you can see that he’s using (and will minimally need free on the dst server) ~400MB, and that he’s running on a Fedora 2 template, version 20040903. He’s also got a bunch of other templates installed. It is &#039;&#039;&#039;vital&#039;&#039;&#039; that &#039;&#039;&#039;all&#039;&#039;&#039; of these templates exist on the dst system. To confirm that, on the dst system run:&lt;br /&gt;
&lt;br /&gt;
For &amp;lt; 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt14 private]# vzpkgls | grep fc2&lt;br /&gt;
devel-fc2 20040903&lt;br /&gt;
PostNuke-fc2 20040824&lt;br /&gt;
analog-fc2 20040824&lt;br /&gt;
awstats-fc2 20040824&lt;br /&gt;
bbClone-fc2 20040824&lt;br /&gt;
jdk-fc2 20040823&lt;br /&gt;
jre-fc2 20040823&lt;br /&gt;
mailman-fc2 20040823&lt;br /&gt;
mod_frontpage-fc2 20040816&lt;br /&gt;
mod_perl-fc2 20040812&lt;br /&gt;
mod_ssl-fc2 20040811&lt;br /&gt;
mysql-fc2 20040812&lt;br /&gt;
openwebmail-fc2 20040817&lt;br /&gt;
php-fc2 20040813&lt;br /&gt;
phpBB-fc2 20040831&lt;br /&gt;
postgresql-fc2 20040813&lt;br /&gt;
proftpd-fc2 20040818&lt;br /&gt;
sl-webalizer-fc2 20040818&lt;br /&gt;
spamassassin-fc2 20040910&lt;br /&gt;
tomcat-fc2 20040823&lt;br /&gt;
usermin-fc2 20040909&lt;br /&gt;
uw-imap-fc2 20040830&lt;br /&gt;
webmin-fc2 20040909&lt;br /&gt;
[root@virt14 private]# vzpkgls | grep fedora&lt;br /&gt;
fedora-core-1 20040121 20040818&lt;br /&gt;
fedora-core-devel-1 20040121 20040818&lt;br /&gt;
fedora-core-2 20040903&lt;br /&gt;
[root@virt14 private]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For these older systems, you can simply match up the date on the template. &lt;br /&gt;
&lt;br /&gt;
For &amp;gt;= 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt19 /vz2/private]# vzpkg list&lt;br /&gt;
centos-5-x86                    2008-01-07 22:05:57&lt;br /&gt;
centos-5-x86    devel&lt;br /&gt;
centos-5-x86    jre&lt;br /&gt;
centos-5-x86    jsdk&lt;br /&gt;
centos-5-x86    mod_perl&lt;br /&gt;
centos-5-x86    mod_ssl&lt;br /&gt;
centos-5-x86    mysql&lt;br /&gt;
centos-5-x86    php&lt;br /&gt;
centos-5-x86    plesk9&lt;br /&gt;
centos-5-x86    plesk9-antivirus&lt;br /&gt;
centos-5-x86    plesk9-api&lt;br /&gt;
centos-5-x86    plesk9-atmail&lt;br /&gt;
centos-5-x86    plesk9-backup&lt;br /&gt;
centos-5-x86    plesk9-horde&lt;br /&gt;
centos-5-x86    plesk9-mailman&lt;br /&gt;
centos-5-x86    plesk9-mod-bw&lt;br /&gt;
centos-5-x86    plesk9-postfix&lt;br /&gt;
centos-5-x86    plesk9-ppwse&lt;br /&gt;
centos-5-x86    plesk9-psa-firewall&lt;br /&gt;
centos-5-x86    plesk9-psa-vpn&lt;br /&gt;
centos-5-x86    plesk9-psa-fileserver&lt;br /&gt;
centos-5-x86    plesk9-qmail&lt;br /&gt;
centos-5-x86    plesk9-sb-publish&lt;br /&gt;
centos-5-x86    plesk9-vault&lt;br /&gt;
centos-5-x86    plesk9-vault-most-popular&lt;br /&gt;
centos-5-x86    plesk9-watchdog&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On these newer systems, it&#039;s difficult to tell whether the template on the dst exactly matches the src. Just because a centos-5-x86 is listed on both servers doesn&#039;t mean all the same packages are present on the dst. To know for sure, you must perform a sample rsync:&lt;br /&gt;
&lt;br /&gt;
 rsync -avn /vz/template/centos/5/x86/ root@10.1.4.61:/vz/template/centos/5/x86/&lt;br /&gt;
&lt;br /&gt;
If you see a ton of output from the dry-run command, then clearly there are some differences. You may opt to let the rsync complete (without running in dry-run mode); the only downside is that you&#039;ve now used up more space on the dst, and the centos template will be a mess of old and new data - it will be difficult if not impossible to undo (if someday we wanted to reclaim the space).&lt;br /&gt;
&lt;br /&gt;
If you choose to merge templates, you should closely inspect the dry run output. You should also take care to exclude anything in the /config directory. For example:&lt;br /&gt;
&lt;br /&gt;
 rsync -av -e ssh --stats --exclude=x86/config  /vz/template/ubuntu/10.04/ root@10.1.4.62:/vz/template/ubuntu/10.04/&lt;br /&gt;
&lt;br /&gt;
Which will avoid this directory and contents:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 /vz2/private]# ls /vz/template/ubuntu/10.04/x86/config*&lt;br /&gt;
app  os&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is important to avoid since the config may differ on the destination, and we are really only interested in making sure the packages are there, not in overwriting a newer config with an older one.&lt;br /&gt;
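&lt;br /&gt;
A self-contained sketch of this dry-run-then-exclude workflow, using throwaway directories under /tmp (the paths are hypothetical; on real virts you would use the /vz/template paths shown above):&lt;br /&gt;

```shell
# Hypothetical demo dirs standing in for the src/dst /vz/template trees.
rm -rf /tmp/tmpl-src /tmp/tmpl-dst
mkdir -p /tmp/tmpl-src/x86/config /tmp/tmpl-dst/x86/config
echo new-package > /tmp/tmpl-src/x86/pkg-only-on-src
echo dst-config > /tmp/tmpl-dst/x86/config/os
# Dry run (-n): lists what WOULD transfer, copies nothing.
rsync -avn --exclude=x86/config /tmp/tmpl-src/ /tmp/tmpl-dst/
# Real run with the same exclude: the package file arrives, but the
# destination's config directory is left untouched.
rsync -av --exclude=x86/config /tmp/tmpl-src/ /tmp/tmpl-dst/
cat /tmp/tmpl-dst/x86/config/os   # still "dst-config"
```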
&lt;br /&gt;
If the dst system was missing a template, you have 2 choices: &lt;br /&gt;
# put the missing template on the dst system. 2 choices here: &lt;br /&gt;
## Install the template from rpm (found under backup2: /mnt/data4/vzrpms/distro/) or &lt;br /&gt;
## rsync over the template (found under /vz/template) - see above&lt;br /&gt;
# put the ve on a system which has all the proper templates&lt;br /&gt;
&lt;br /&gt;
=== pre-seeding a migration ===&lt;br /&gt;
&lt;br /&gt;
When migrating a customer (or when doing many) depending on how much data you have to transfer, it can take some time. Further, it can be difficult to gauge when a migration will complete or how long it will take. To help speed up the process and get a better idea about how long it will take you can pre-transfer a customer&#039;s data to the destination server. If done correctly, vzmigrate will see the pre-transferred data and pick up where you left off, having much less to transfer (just changed/new files). &lt;br /&gt;
&lt;br /&gt;
We believe vzmigrate uses rsync to do its transfer. Therefore not only can you use rsync to do a pre-seed, you can also run rsync to see what is causing a repeatedly-failing vzmigrate to fail.&lt;br /&gt;
&lt;br /&gt;
There&#039;s no magic to a pre-seed, you just need to make sure it&#039;s named correctly.&lt;br /&gt;
&lt;br /&gt;
Given:&lt;br /&gt;
&lt;br /&gt;
source: /vz1/private/1234&lt;br /&gt;
&lt;br /&gt;
and you want to migrate to /vz2 on the target system, your rsync would look like:&lt;br /&gt;
&lt;br /&gt;
 rsync -av /vz1/private/1234/ root@x.x.x.x:/vz2/private/1234.migrated/&lt;br /&gt;
&lt;br /&gt;
After running that successful rsync, the ensuing migrateonline (or migrate) will take much less time to complete - depending on the # of files to be analyzed and the # of changed files. In any case, it&#039;ll be much, much faster than if you had started the migration from scratch.&lt;br /&gt;
&lt;br /&gt;
Further, as we discuss elsewhere in this topic, a failed migration can be moved from &amp;lt;tt&amp;gt;/vz/private/1234&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;/vz/private/1234.migrated&amp;lt;/tt&amp;gt; on the destination if you want to restart a failed migration. This should &#039;&#039;&#039;only&#039;&#039;&#039; be done if the migration failed and the CT is not running on the destination HN.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline instructions: src &amp;gt;=3.x -&amp;gt; dst&amp;gt;=3.x ===&lt;br /&gt;
&lt;br /&gt;
A script called [[#migrateonline|migrateonline]] was written to handle this kind of move. It is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; – vzmigrate is a util to seamlessly move a ve from one host to another, with no reboot of the ve necessary. This wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from the arp/route tables, causing problems when the ve was started up on the dst system. [[#migrate|migrate]] mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrateonline emails ve owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as it was on the src virt. Note: &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; is equivalent to &amp;lt;tt&amp;gt;migrateonline&amp;lt;/tt&amp;gt;, but will migrate the ve AND restart it in the process.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrateonline&lt;br /&gt;
usage: /usr/local/sbin/migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrateonline 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine.&lt;br /&gt;
&lt;br /&gt;
If they had backups, use the mvbackups command to move their backups to the new server:&lt;br /&gt;
&lt;br /&gt;
 mvbackups 1212 virt14 vz&lt;br /&gt;
&lt;br /&gt;
Rename the ve&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/migrated-1212&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/old-1212-migrated-20120404-noarchive&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems in mgmt to reflect the new path and server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If migrateonline does not work, you can try again using simply migrate - this will result in a brief reboot for the ve.&lt;br /&gt;
Before you try again, make sure of a few things:&lt;br /&gt;
&lt;br /&gt;
Depending on where the migration died, there may be partial data on the dst system in 1 of 2 places&lt;br /&gt;
(given the example above)&lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
before you run migrate again, you&#039;ll want to rename so that all data is in &lt;br /&gt;
1212.migrated:&lt;br /&gt;
&lt;br /&gt;
 mv /vz/private/1212 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
this way, it will pick up where it left off and transfer only new files.&lt;br /&gt;
&lt;br /&gt;
Likewise, if you want to speed up a migration, you can pre-seed the dst as follows:&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# rsync -avSH /vz/private/1212/ root@10.1.4.64:/vz/private/1212.migrated/&lt;br /&gt;
&lt;br /&gt;
then when you run migrate or migrateonline, it will only need to move the changed files, so the migration will complete quickly.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline/migrate failures (migrate manually) ===&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say that, for whatever reason, the migration fails. If it fails with [[#migrateonline|migrateonline]], you should try [[#migrate|migrate]] (which will reboot the customer&#039;s ve, so notify them ahead of time).&lt;br /&gt;
&lt;br /&gt;
You may want to run a [[#pre-seeding_a_migration|pre-seed]] rsync to see if you can find the problem. On older virts, we&#039;ve seen this problem due to a large logfile (which you can find and encourage the customer to remove/compress):&lt;br /&gt;
 for f in `find / -size +1048576k`; do ls -lh $f; done&lt;br /&gt;
&lt;br /&gt;
You may also see migration failing due to quota issues.&lt;br /&gt;
&lt;br /&gt;
You can try to resolve by copying any quota file into the file you need:&lt;br /&gt;
&lt;br /&gt;
 cp /var/vzquota/quota.1 /var/vzquota/quota.xxx&lt;br /&gt;
&lt;br /&gt;
If it complains about quota running you should then be able to stop it&lt;br /&gt;
&lt;br /&gt;
 vzquota off xxxx&lt;br /&gt;
&lt;br /&gt;
If all else fails, migrate to a new VEID&lt;br /&gt;
i.e. 1234 becomes 12341&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the rsync or [[#migrate|migrate]] fails, you can always move someone manually:&lt;br /&gt;
&lt;br /&gt;
1. stop ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v stop 1234&lt;br /&gt;
&lt;br /&gt;
2. copy over data&amp;lt;br&amp;gt;&lt;br /&gt;
 rsync -avSH /vz/private/1234/ root@1.1.1.1:/vzX/private/1234/&lt;br /&gt;
&lt;br /&gt;
NOTE: if you&#039;ve previously seeded the data (run rsync while the VE was up/running), and this is a subsequent rsync, make sure the last rsync you do (while the VE is not running) has the --delete option&lt;br /&gt;
&lt;br /&gt;
3. copy over conf&amp;lt;br&amp;gt;&lt;br /&gt;
 scp /vzconf/1234.conf root@1.1.1.1:/vzconf&lt;br /&gt;
&lt;br /&gt;
4. on dst, edit the conf to reflect the right vzX dir&amp;lt;br&amp;gt;&lt;br /&gt;
 vi /vzconf/1234.conf&lt;br /&gt;
&lt;br /&gt;
5. on src remove the IPs&amp;lt;br&amp;gt;&lt;br /&gt;
 ipdel 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
6. on dst add IPs &amp;lt;br&amp;gt;&lt;br /&gt;
 ipadd 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
7. on dst, start ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v start 1234&lt;br /&gt;
&lt;br /&gt;
8. cancel, then archive the ve on the src per the above instructions.&lt;br /&gt;
&lt;br /&gt;
=== migrate src=2.6.0 -&amp;gt; dst&amp;gt;=2.6.0, or mass-migration with customer notify ===&lt;br /&gt;
&lt;br /&gt;
A script called &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; was written to handle this kind of move. It is basically a wrapper for vzmigrate – vzmigrate is a util to seamlessly move a ve from one host to another. This wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from the arp/route tables, causing problems when the ve was started up on the dst system. migrate mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrate emails ve owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as it was on the src virt. Note: migrateonline is equivalent to migrate, but will migrate a ve from one 2.6 &#039;&#039;&#039;kernel&#039;&#039;&#039; machine to another 2.6 kernel machine without restarting the ve.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrate&lt;br /&gt;
usage: /usr/local/sbin/migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrate 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which migrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
&lt;br /&gt;
On 2.6.1 you’ll also have to move the private area:&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, [[#cancelve|cancelve]] would offer to remove them. You want to say &#039;&#039;&#039;no&#039;&#039;&#039; to this option – removing them would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them, and eventually they’ll drop out. Besides, moving hardlinked files across a filesystem as in the example above will create actual files and consume lots more space on the target drive.&lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
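&lt;br /&gt;
The hardlink point can be seen with plain coreutils - a sketch with throwaway files under /tmp (a move within one filesystem keeps the inode and its link count, while a copy creates a brand-new file and consumes new space):&lt;br /&gt;

```shell
# Sketch: moving within one filesystem preserves hardlinks; copying does not.
rm -rf /tmp/hl-demo
mkdir -p /tmp/hl-demo/a /tmp/hl-demo/b
echo data > /tmp/hl-demo/a/f0
ln /tmp/hl-demo/a/f0 /tmp/hl-demo/a/f1    # two names, one inode
stat -c %h /tmp/hl-demo/a/f0              # link count: 2
mv /tmp/hl-demo/a/f1 /tmp/hl-demo/b/f1    # same fs: still the same inode
stat -c %h /tmp/hl-demo/b/f1              # still 2
cp /tmp/hl-demo/b/f1 /tmp/hl-demo/b/f2    # a copy: new inode, new space
stat -c %h /tmp/hl-demo/b/f2              # 1
```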
&lt;br /&gt;
To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, update the system, template (should be pre-selected as the same), and the shut down date.&lt;br /&gt;
&lt;br /&gt;
=== vzmigrate: src=2.6.1 -&amp;gt; dst&amp;gt;=2.6.0 ===&lt;br /&gt;
&lt;br /&gt;
This version of vzmigrate works properly with regard to handling ips. It will not notify ve owners of moves as in the above example. Other than that it’s essentially the same.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]#  vzmigrate 10.1.4.64 -r no 1212:1212:/vz/private/1212:/vz/root/1212&lt;br /&gt;
migrating on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which vzmigrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
[root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, &amp;lt;tt&amp;gt;cancelve&amp;lt;/tt&amp;gt; would offer to remove them. You want to say no to this option – removing them would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them, and eventually they’ll drop out. Besides, moving hardlinked files across a filesystem as in the example above will create actual files and consume lots more space on the target drive.&lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, update the system, template (should be pre-selected as the same), and the shut down date.&lt;br /&gt;
&lt;br /&gt;
=== src=2.5.x ===&lt;br /&gt;
&lt;br /&gt;
First, go to the private dir:&lt;br /&gt;
&lt;br /&gt;
 cd /vz1/private/&lt;br /&gt;
&lt;br /&gt;
Stop the VE - make sure it stops totally cleanly.&lt;br /&gt;
 &lt;br /&gt;
 vzctl stop 1212&lt;br /&gt;
&lt;br /&gt;
Then you’d use vemove - a script written to copy over the config, create tarballs of the ve’s data on the destination virt, and cancel the ve on the source system (in this example we’re going to put a ve that was in /vz1/private on the src virt, in /vz/private on the dst virt):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vemove&lt;br /&gt;
ERROR: Usage: vemove veid target_ip target_path_dir&lt;br /&gt;
[root@virt12 sbin]# vemove 1212 10.1.4.64 /vz/private/1212&lt;br /&gt;
tar cfpP - 1212 --ignore-failed-read | (ssh -2 -c arcfour 10.1.4.64 &amp;quot;split - -b 1024m /vz/private/1212.tar&amp;quot; )&lt;br /&gt;
scp /vzconf/1212.conf 10.1.4.64:/vzconf&lt;br /&gt;
cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, cancelve would offer to remove them. You want to say no to this option – removing them would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them, and eventually they’ll drop out. Besides, moving hardlinked files across a filesystem as in the example above will create actual files and consume lots more space on the target drive.&lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
When you are done, go to /vz/private on the dst virt; you will have files like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;1212.taraa&lt;br /&gt;
1212.tarab&lt;br /&gt;
1212.tarac&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each one 1024m (or less, for the last one) in size.&lt;br /&gt;
&lt;br /&gt;
Go to that directory on the dst server and run:&lt;br /&gt;
&lt;br /&gt;
 cat 1212.tar?? | tar xpPBf -&lt;br /&gt;
&lt;br /&gt;
and after 20 mins or so it will be totally untarred.  Now since the conf&lt;br /&gt;
file is already there, you can go ahead and start the system.&lt;br /&gt;
&lt;br /&gt;
 vzctl start 1212&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, update the system, template (should be pre-selected as the same), and the shut down date.&lt;br /&gt;
&lt;br /&gt;
NOTE: you MUST tar the system up using the virtuozzo version of tar that&lt;br /&gt;
is on all the virt systems, and further you MUST untar the tarball with&lt;br /&gt;
the virtuozzo tar, using these options:  `&amp;lt;tt&amp;gt;tar xpPBf -&amp;lt;/tt&amp;gt;`&lt;br /&gt;
&lt;br /&gt;
If you tar up an entire VE and move it to a non-virtuozzo machine, that is&lt;br /&gt;
ok, and you can untar it there with normal tar commands, but do not untar&lt;br /&gt;
it and then repack it with a normal tar and expect it to work - you need&lt;br /&gt;
to use virtuozzo tar commands on virtuozzo tarballs to make it work.&lt;br /&gt;
&lt;br /&gt;
The backups are sort of an exception, since we are just (usually)&lt;br /&gt;
restoring user data that was created after we gave them the system, and&lt;br /&gt;
therefore has nothing to do with magic symlinks or vz-rpms, etc.&lt;br /&gt;
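&lt;br /&gt;
The chunked-transfer mechanics that vemove uses (stream a tar through split, then cat the pieces back together in order) can be sketched with standard tools and a tiny chunk size. Note this uses ordinary GNU tar purely to illustrate the chunking; a real VE must be packed and unpacked with the Virtuozzo tar as described above:&lt;br /&gt;

```shell
# Sketch of the split/reassemble mechanics with a tiny chunk size
# (production uses the Virtuozzo tar and 1024m chunks).
rm -rf /tmp/ve-demo
mkdir -p /tmp/ve-demo/1212
echo hello > /tmp/ve-demo/1212/file.txt
cd /tmp/ve-demo
# What vemove does on the wire: stream a tar into fixed-size pieces.
tar cf - 1212 | split -b 512 - 1212.tar
ls 1212.tar??                 # 1212.taraa, 1212.tarab, ...
# Reassembly on the destination: concatenate in glob order and untar.
rm -rf 1212
cat 1212.tar?? | tar xf -
cat 1212/file.txt             # hello
```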
&lt;br /&gt;
== Moving a VE on the same virt ==&lt;br /&gt;
&lt;br /&gt;
Easy way:&amp;lt;br&amp;gt;&lt;br /&gt;
Scenario 1: ve 123 is to be renamed 1231 and moved from vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:1231:/vz/private/1231:/vz/root/1231&lt;br /&gt;
&lt;br /&gt;
Scenario 2: ve 123 is to be moved vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:123:/vz/private/123:/vz/root/123&lt;br /&gt;
&lt;br /&gt;
vzmlocal will reboot the ve at the end of the move&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; don&#039;t forget to fix /usr/local/sbin/backup.config for new disk location&lt;br /&gt;
&lt;br /&gt;
Manual/old way:&lt;br /&gt;
&lt;br /&gt;
1) &amp;lt;tt&amp;gt;vzctl stop 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2) &amp;lt;tt&amp;gt;mv /vz1/private/123 /vz/private/.&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(or cp -a if you want to copy)&lt;br /&gt;
3) in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts/123.conf&amp;lt;/tt&amp;gt; change value&amp;lt;br&amp;gt;&lt;br /&gt;
of &#039;&amp;lt;tt&amp;gt;VE_PRIVATE&amp;lt;/tt&amp;gt;&#039; variable to point to a new private area location&lt;br /&gt;
4) &amp;lt;tt&amp;gt;vzctl start 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
5) update backups if needed: &amp;lt;tt&amp;gt;mvbackups 123 virtX virt1 vz&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
6) update management screens&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notes: a) absolute path to private area is stored in quota file &amp;lt;tt&amp;gt;/var/vzquota/quota.123&amp;lt;/tt&amp;gt; - so during first startup quota will be recalculated.&amp;lt;br&amp;gt;&lt;br /&gt;
b) if you&#039;re going to write some script to do the job, you MUST be sure that $VEID won&#039;t be expanded to &#039;&#039; in the ve config file - i.e. you need to escape the &#039;$&#039;. Otherwise you might end up with:&lt;br /&gt;
&lt;br /&gt;
 VE_PRIVATE=&amp;quot;/vz/private/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in config, and &#039;vzctl destroy&#039; for this VE ID &#039;&#039;&#039;will remove everything under /vz/private/ directory&#039;&#039;&#039;.&lt;br /&gt;
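&lt;br /&gt;
The pitfall is easy to reproduce with plain shell - a sketch writing two throwaway config files under /tmp (hypothetical filenames), one with the dollar escaped and one without:&lt;br /&gt;

```shell
# VEID is deliberately unset here - the same situation a buggy script is in.
unset VEID
# WRONG: inside double quotes the shell expands $VEID to an empty string.
echo "VE_PRIVATE=\"/vz/private/$VEID\""  > /tmp/bad.conf
# RIGHT: escaping the dollar puts the literal string $VEID into the file,
# leaving the expansion to vzctl, which knows the real VE ID.
echo "VE_PRIVATE=\"/vz/private/\$VEID\"" > /tmp/good.conf
cat /tmp/bad.conf    # VE_PRIVATE="/vz/private/"  <- destroy on this nukes ALL VEs
cat /tmp/good.conf   # VE_PRIVATE="/vz/private/$VEID"
```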
&lt;br /&gt;
== Adding a veth device to a VE ==&lt;br /&gt;
&lt;br /&gt;
Not totally sure what this is, but a customer asked for it and here&#039;s what we did (as instructed by vz support):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;v set 99 --netif_add eth99  --save&lt;br /&gt;
ipdel 99 69.55.230.58&lt;br /&gt;
v set 99 --ifname eth99 --ipadd 69.55.230.58 --save&lt;br /&gt;
v set 99 --ifname eth99 --gateway 69.55.230.1 --save&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active      eth0              veth77.77,veth99.99&lt;br /&gt;
veth_net          active&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
br99             bridge     net99&lt;br /&gt;
veth99.99        veth       net99&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        net99            69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br0 eth0&lt;br /&gt;
&lt;br /&gt;
(will remove 99 from orig net and move to veth_net)&lt;br /&gt;
vznetcfg net addif veth_net veth99.99&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
(delete the old crap)&lt;br /&gt;
vznetcfg net del net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then, to add another device in&lt;br /&gt;
&lt;br /&gt;
v set 77 --netif_add eth77  --save&lt;br /&gt;
ipdel 77 69.55.230.78&lt;br /&gt;
v set 77 --ifname eth77 --ipadd 69.55.230.78 --save&lt;br /&gt;
v set 77 --ifname eth77 --gateway 69.55.230.1 --save&lt;br /&gt;
v set 77 --save --ifname eth77 --network veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif veth_net veth77.77&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth       veth_net&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
Another example:&lt;br /&gt;
&lt;br /&gt;
v set 1182 --netif_add eth1182  --save&lt;br /&gt;
ipdel 1182 69.55.236.217&lt;br /&gt;
v set 1182 --ifname eth1182 --ipadd 69.55.236.217 --save&lt;br /&gt;
v set 1182 --ifname eth1182 --gateway 69.55.236.1 --save&lt;br /&gt;
vznetcfg net addif veth_net veth1182.1182&lt;br /&gt;
v set 1182 --save --ifname eth1182 --network veth_net&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unused/not working commands:&lt;br /&gt;
ifconfig veth99.0 0&lt;br /&gt;
vznetcfg net list&lt;br /&gt;
vznetcfg br new br99 net99&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg br show&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
&lt;br /&gt;
vznetcfg br new br1182 net1182&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif eth0 net1182&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&lt;br /&gt;
--- 8&amp;lt; ---&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new net&lt;br /&gt;
# vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vzctl set 99 --save --netif_add eth0 (at this stage veth99.0 interface have to appear&lt;br /&gt;
on node)&lt;br /&gt;
# vzctl set 99 --save --ifname eth0 --ipadd 69.55.230.58 (and probably few more arguments&lt;br /&gt;
here - see &#039;man vzctl&#039;)&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&amp;lt;/pre&amp;gt;&lt;br /&gt;
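&lt;br /&gt;
Condensed from the working sequence above: a dry-run sketch of the per-VE steps. The &#039;veth_setup&#039; function is hypothetical (&#039;v&#039; and &#039;ipdel&#039; are our in-house wrappers); it only prints the commands it would run, so nothing changes until you paste them yourself:&lt;br /&gt;

```shell
#!/bin/sh
# Hypothetical helper: print (don't run) the commands that move a VE's
# primary IP from venet0 to a veth device on the shared "veth_net" bridge.
veth_setup() {
    veid=$1; ip=$2; gw=$3
    echo "v set $veid --netif_add eth$veid --save"
    echo "ipdel $veid $ip"
    echo "v set $veid --ifname eth$veid --ipadd $ip --save"
    echo "v set $veid --ifname eth$veid --gateway $gw --save"
    echo "vznetcfg net addif veth_net veth$veid.$veid"
    echo "v set $veid --save --ifname eth$veid --network veth_net"
}

# Example: the 1182 case from above
veth_setup 1182 69.55.236.217 69.55.236.1
```

Run it, eyeball the output, then paste the lines on the virt.&lt;br /&gt;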
&lt;br /&gt;
&lt;br /&gt;
== Assigning/remove ip from a VE ==&lt;br /&gt;
&lt;br /&gt;
1. Add or remove ips:&lt;br /&gt;
 ipdel 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
 ipadd 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
&lt;br /&gt;
2. update Mgmt screens&lt;br /&gt;
&lt;br /&gt;
3. offer to update any DNS we do for them&lt;br /&gt;
&lt;br /&gt;
4. check to see if we had rules for the old IP in the firewall&lt;br /&gt;
&lt;br /&gt;
== Enabling tun device for a ve ==&lt;br /&gt;
Note, there’s a command for this: [[#addtun|addtun]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
Make sure the tun.o module is already loaded before Virtuozzo is started: &lt;br /&gt;
 lsmod &lt;br /&gt;
Allow the VPS to use the TUN/TAP device: &lt;br /&gt;
 vzctl set 101 --devices c:10:200:rw --save &lt;br /&gt;
Create the corresponding device inside the VPS and set the proper permissions: &lt;br /&gt;
 vzctl exec 101 mkdir -p /dev/net &lt;br /&gt;
 vzctl exec 101 mknod /dev/net/tun c 10 200 &lt;br /&gt;
 vzctl exec 101 chmod 600 /dev/net/tun&lt;br /&gt;
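&lt;br /&gt;
The four steps above can be wrapped in a small dry-run helper. The &#039;enable_tun&#039; function is hypothetical; it echoes the vzctl commands for review instead of executing them:&lt;br /&gt;

```shell
#!/bin/sh
# Hypothetical dry-run helper for the tun/tap steps documented above.
enable_tun() {
    veid=$1
    echo "vzctl set $veid --devices c:10:200:rw --save"
    echo "vzctl exec $veid mkdir -p /dev/net"
    echo "vzctl exec $veid mknod /dev/net/tun c 10 200"
    echo "vzctl exec $veid chmod 600 /dev/net/tun"
}

enable_tun 101
```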
&lt;br /&gt;
== Remaking a system (on same virt) ==&lt;br /&gt;
&lt;br /&gt;
1. [[#cancelve|cancelve]] (or v destroy x - ONLY if you&#039;re POSITIVE no data needs to be saved)&lt;br /&gt;
&lt;br /&gt;
2. [[#vemake|vemake]] using same veid&lt;br /&gt;
&lt;br /&gt;
3. [[#mvbackups|mvbackups]] or [[#vb|vb]] (if new mount point)&lt;br /&gt;
&lt;br /&gt;
4. update mgmt with new dir/ip &lt;br /&gt;
&lt;br /&gt;
5. update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Re-initialize quota for a VE ==&lt;br /&gt;
&lt;br /&gt;
There’s a command for this now: [[#clearquota|clearquota]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
&lt;br /&gt;
vzctl stop 1&lt;br /&gt;
vzquota drop 1&lt;br /&gt;
vzctl start 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Traffic accounting on linux ==&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - all tracking is done via bwdb now. This is how we used to track traffic.&lt;br /&gt;
&lt;br /&gt;
TODO: update for diff versions of vz&lt;br /&gt;
&lt;br /&gt;
Unlike FreeBSD, where we have to add firewall count rules to the system to count the traffic, Virtuozzo counts the traffic for us.  You can see the current traffic stats by running `vznetstat`:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, each VEID is on a line with its output and input byte counts.  So, we simply run a cron job:&lt;br /&gt;
&lt;br /&gt;
 4,9,14,19,24,29,34,39,44,49,55,59 * * * * /root/vztrafdump.sh&lt;br /&gt;
&lt;br /&gt;
Just like we do on FreeBSD - this one goes through all the VEs in /vz/private and greps the line from vznetstat that matches them and dumps it in /jc_traffic_dump on their system.  Then it does it again for all the VEs in /vz1/private.  It is important to note that vznetstat runs only once, and the grepping is done from a temporary file that contains that output - we do this because running vznetstat once for each VE that we read out of /vz/private and /vz1/private would take way too long and be too intensive.&lt;br /&gt;
&lt;br /&gt;
You do not need to do anything to facilitate this other than make sure that cron job is running - the vznetstat counters are always running, and any new VEs that are added to the system will be accounted for automatically.&lt;br /&gt;
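&lt;br /&gt;
The per-VE grep step can be sketched like this. Sample vznetstat output is inlined so the snippet is self-contained; the VEID list is a stand-in for what the real script reads out of /vz*/private:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of the extraction vztrafdump.sh performs: capture vznetstat
# output once into a temp file, then grep each VEID's line out of it.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
VEID    Net.Class  Output(bytes)   Input(bytes)
24218     1            484M             39M
2451      1           2224M            265M
EOF
for veid in 24218 2451; do
    # anchor on ^veid + whitespace so 2451 can't match inside 24510 etc.
    grep "^$veid[[:space:]]" "$tmp"
done
rm -f "$tmp"
```

This is why vznetstat is run only once per pass: one capture, many cheap greps.&lt;br /&gt;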
&lt;br /&gt;
Traffic resetting no longer works with vz 2.6, so we disable the vztrafdump.sh on those virts.&lt;br /&gt;
&lt;br /&gt;
== Watchdog script ==&lt;br /&gt;
&lt;br /&gt;
On some of the older virts, we have a watchdog running that kills procs that are deemed bad per the following:&lt;br /&gt;
&lt;br /&gt;
/root/watchdog from quar1&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [5-9]...&lt;br /&gt;
  then&lt;br /&gt;
# 50-90%&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;...:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;....m&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 1000min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
  if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [1-9]...&lt;br /&gt;
  then&lt;br /&gt;
# running for 10-90 percent&lt;br /&gt;
    if echo $line | awk &#039;{print $NF}&#039; | egrep &#039;cfusion|counter|vchkpw&#039;&lt;br /&gt;
    then&lt;br /&gt;
&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[2-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# between 2-9min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      elif echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[0-9][0-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# up to 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
    fi&lt;br /&gt;
  fi&amp;lt;/pre&amp;gt;&lt;br /&gt;
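&lt;br /&gt;
The field arithmetic above counts from the end of the line: in the ps format the watchdog expects, $(NF-3) is the CPU column, $(NF-1) the TIME column, and $NF the command. A self-contained check of those patterns (the sample line is fabricated for illustration):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of the watchdog's field tests against a fabricated ps-style line.
line='root  1234  ...  55.0  0.5  120:30  httpd'
# CPU of 50.0-99.9 -> four characters starting with 5-9
echo "$line" | awk '{print $(NF-3)}' | grep -q '[5-9]...'
# TIME of the form mmm:ss -> running for more than 99 minutes
echo "$line" | awk '{print $(NF-1)}' | grep -q '...:..'
echo "would kill: $(echo "$line" | awk '{print $NF}')"
```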
&lt;br /&gt;
&lt;br /&gt;
== Misc Linux Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space ... when you configure a linux system with a certain amount of disk space (the default is 4gigs) you do not actually use up 4gigs of space on the system.  The diskspace setting for a user is simply a cap, and they only use up as much space on the actual disk drive as they are actually using.&lt;br /&gt;
&lt;br /&gt;
When you create a new linux system, even though there are some 300 RPMs or so installed, if you run `df -k` you will see that the entire 4gig partition is empty - no space is being used.  This is because the files in their system are &amp;quot;magic symlinks&amp;quot; to the template for their OS that is in /vz/template - however, any changes to any of those files will &amp;quot;disconnect&amp;quot; them and they will immediately begin using space in their system.  Further, any new files uploaded (even if those new files overwrite existing files) will take up space on the partition.&lt;br /&gt;
&lt;br /&gt;
=== Can&#039;t change file /etc/sysconfig/network ===&lt;br /&gt;
if you see this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt8 root]# vzctl stop 160 ; vzctl start 160&lt;br /&gt;
VE is not running&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is unmounted&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
bash ERROR: Can&#039;t change file /etc/sysconfig/network&lt;br /&gt;
Deleting IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
VE is unmounted&lt;br /&gt;
[root@virt8 root]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
it probably means they no longer have /bin/bash - copy one in for them&lt;br /&gt;
 &lt;br /&gt;
ALSO: another possibility is that they have removed the `ed` RPM from their system - it needs to be reinstalled into their system.  But since their system is down, this is tricky ...&lt;br /&gt;
&lt;br /&gt;
The VE startup scripts used by &#039;vzctl&#039; want the package &#039;ed&#039; to be available inside the VE. So if the &#039;ed&#039; package is enabled in the OS template config and in the OS template the VE (here #827) is based on, this error should be fixed.&lt;br /&gt;
&lt;br /&gt;
Yes, it is possible to add an RPM to a VE while it is not running.&lt;br /&gt;
Try the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# cd /vz/template/&amp;lt;OS_template_with_ed_package&amp;gt;/&lt;br /&gt;
# vzctl mount 827&lt;br /&gt;
# rpm -Uvh --root /vz/root/827 --veid 827 ed-0.2-25.i386.vz.rpm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usually there&#039;s an error, but it&#039;s OK.&lt;br /&gt;
&lt;br /&gt;
Note: replace &#039;ed-0.2-25.i386.vz.rpm&#039; in the last command with the actual&lt;br /&gt;
version of the &#039;ed&#039; package you have.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== What template is user using ===&lt;br /&gt;
So how do I know what template the user has?  cat their conf file and it is listed in there.  For example, if the conf file has:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1103&lt;br /&gt;
…snip…&lt;br /&gt;
OSTEMPLATE=&amp;quot;debian-3.0/20030822&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;mod_perl-deb30/20030707 mod_ssl-deb30/20030703 mysql-deb30/20030707 proftpd-deb30/20030703 webmin-deb30/20030823 &amp;quot;&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then they are on debian 3.0, all of their system RPMs are in /vz/template/debian-3.0, and they are using version 20030822 of that debian 3.0 template. They’ve also got additional packages installed (mod_perl, mod_ssl, etc).  Those are also found under /vz/template&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Edits needed to run java ===&lt;br /&gt;
&lt;br /&gt;
When we first created the VEs, the default setting for privvmpages was 93000:94000 ... which was high enough that most people never had problems ... however, you can&#039;t run java or jdk or tomcat or anything java-related with that setting.  We have found that setting privvmpages to 610000:615000 makes java run just fine, so that is now the default setting. It is exceedingly rare that anyone needs it higher than that, although we have seen it once or twice.&lt;br /&gt;
&lt;br /&gt;
Any problems with java at all - the first thing you need to do is see if the failcnt has raised for privvmpages.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzctl start 160&lt;br /&gt;
Starting VE ...&lt;br /&gt;
vzquota : (error) Quota on syscall for 160: Device or resource busy&lt;br /&gt;
Running vzquota on failed for VE 160 [3]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is because my pwd is _in_ their private directory - you can&#039;t start the ve until you cd out&lt;br /&gt;
&lt;br /&gt;
People seem to have trouble with php if they are clueless newbies.  Here are two common problems/solutions:&lt;br /&gt;
&lt;br /&gt;
no... but i figured it out myself. problem was the php.ini file that came&lt;br /&gt;
vanilla with the account was not configured to work with apache (the&lt;br /&gt;
ENGINE directive was set to off).&lt;br /&gt;
&lt;br /&gt;
everything else seems fine now.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
the problem was in the php.ini file.  I noticed that is wasnt showing&lt;br /&gt;
the code when it was in an html file so I looked at the php.ini file&lt;br /&gt;
and had to change it so it recognized &amp;lt;? tags aswell as &amp;lt;?php tags.&lt;br /&gt;
&lt;br /&gt;
Also, make sure this is added to httpd.conf:&lt;br /&gt;
    AddType application/x-httpd-php .php&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Set time zone ===&lt;br /&gt;
&lt;br /&gt;
You can change the timezone by doing this:&lt;br /&gt;
&lt;br /&gt;
 ln -sf /usr/share/zoneinfo/&amp;lt;zone&amp;gt; /etc/localtime&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;zone&amp;gt; is the zone you want in the /usr/share/zoneinfo/ directory.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Failing shm_open calls ===&lt;br /&gt;
&lt;br /&gt;
First, check whether /dev/shm is mounted inside the VE.&lt;br /&gt;
The &#039;cat /proc/mounts&#039; command should show something like this:&lt;br /&gt;
 tmpfs /dev/shm tmpfs rw 0 0&lt;br /&gt;
&lt;br /&gt;
If /dev/shm is not mounted, you have 2 ways to solve the issue:&lt;br /&gt;
1. execute the following command inside the VE (doesn&#039;t require a VE reboot):&lt;br /&gt;
 mount -t tmpfs none /dev/shm&lt;br /&gt;
2. add the following line to /etc/fstab inside the VE and reboot it:&lt;br /&gt;
 tmpfs         /dev/shm        tmpfs           defaults        0 0&lt;br /&gt;
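&lt;br /&gt;
A self-contained sketch of that check, run against a sample /proc/mounts line (since real mount state varies by machine):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: the /dev/shm check described above, against sample input.
# In a real VE you'd read /proc/mounts instead of this variable.
mounts='tmpfs /dev/shm tmpfs rw 0 0'
if echo "$mounts" | grep -q '[[:space:]]/dev/shm[[:space:]]'; then
    echo "/dev/shm is mounted"
else
    echo "fix: mount -t tmpfs none /dev/shm"
fi
```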
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You can have a ve that is mounted but not running.&lt;br /&gt;
Just:&lt;br /&gt;
 vzctl mount &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Debian can&#039;t get on the network ===&lt;br /&gt;
When a debian sys can’t get on the network, and you try:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set 1046 --ipadd 69.55.227.117&lt;br /&gt;
Adding IP address(es): 69.55.227.117&lt;br /&gt;
Failed to bring up lo.&lt;br /&gt;
Failed to bring up venet0.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They probably removed the iproute package, which must be the one from SWsoft. To restore:&lt;br /&gt;
&amp;lt;pre&amp;gt;# dpkg -i --veid=1046 --admindir=/vz1/private/1046/root/var/lib/dpkg --instdir=/vz1/private/1046/root/ /vz/template/debian-3.0/iproute_20010824-8_i386.vz.deb&lt;br /&gt;
(Reading database ... 16007 files and directories currently installed.)&lt;br /&gt;
Preparing to replace iproute 20010824-8 (using .../iproute_20010824-8_i386.vz.deb) ...&lt;br /&gt;
Unpacking replacement iproute ...&lt;br /&gt;
Setting up iproute (20010824-8) ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then restart their ve&lt;br /&gt;
&lt;br /&gt;
=== CentOS can&#039;t get on the network ===&lt;br /&gt;
&lt;br /&gt;
When a CentOS system was running fine but now can&#039;t get on the network, they probably updated iproute via yum.&lt;br /&gt;
&lt;br /&gt;
Copy an older version of the iproute rpm to their VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cp -p /root/iproute/iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm /vz2/private/&amp;lt;1527&amp;gt;/fs/root&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enter their ve, install rpm, restart network, and prevent yum from updating iproute.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ve &amp;lt;1527&amp;gt;&lt;br /&gt;
rpm -i --force iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm&lt;br /&gt;
rpm -q iproute&lt;br /&gt;
rpm -e &amp;lt;new iproute rpm&amp;gt;&lt;br /&gt;
service network restart&lt;br /&gt;
echo &amp;quot;exclude=iproute&amp;quot; &amp;gt;&amp;gt;/etc/yum/ip-exclude&lt;br /&gt;
echo &amp;quot;include=/etc/yum/ip-exclude&amp;quot; &amp;gt;&amp;gt;/etc/yum.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
in a ve i do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /&lt;br /&gt;
du -h .&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and get: 483M    .&lt;br /&gt;
&lt;br /&gt;
i do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;bash-2.05a# df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  2.3G  1.7G  56% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
how can this be?&lt;br /&gt;
&lt;br /&gt;
It is possible that the quota file was corrupted somehow. Please try:&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl stop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota drop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota init &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzctl start &amp;lt;VEID&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
How to stop vz from starting after reboot:&lt;br /&gt;
&lt;br /&gt;
 VIRTUOZZO=no &lt;br /&gt;
in &lt;br /&gt;
 /etc/sysconfig/vz&lt;br /&gt;
&lt;br /&gt;
To start: &lt;br /&gt;
 service vz start&lt;br /&gt;
(after setting VIRTUOZZO=yes in /etc/sysconfig/vz)&lt;br /&gt;
&lt;br /&gt;
service vz restart will do some kind of &#039;soft reboot&#039; -- restart all&lt;br /&gt;
VPSes and reload modules without rebooting the node&lt;br /&gt;
&lt;br /&gt;
if you need to shut down all VPSes really fast, run &amp;lt;tt&amp;gt;killall -9 init&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Postfix tip:&lt;br /&gt;
&lt;br /&gt;
You may want to tweak settings: default_process_limit=10&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* uptime/load that vz users see isn’t what we see- it&#039;s their own load&lt;br /&gt;
* Vz overwrites resolv.conf on every boot to what’s in the ve config file. Also overwrites the hostname&lt;br /&gt;
* Quotas in vz: set Gid_limit in the ve config file and restart the ve. The errors leading up to this will be &#039;out of files&#039;, but with no inode or space issues. Set it to 2000 in the conf and restart the ve&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
== vm ==&lt;br /&gt;
&lt;br /&gt;
To create a new Linux VPS&lt;br /&gt;
&lt;br /&gt;
* get an IP address from mgmt screen&lt;br /&gt;
* run df -h to figure out which disk has enough space&lt;br /&gt;
* run vm with no arguments to get syntax and exact template name&lt;br /&gt;
* use col0xxxx for hostname if they don’t give you a hostname&lt;br /&gt;
* copy over dir, veid, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
 vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; )&lt;br /&gt;
&lt;br /&gt;
== disable ==&lt;br /&gt;
To disable a customer’s Virtuozzo VPS server.&lt;br /&gt;
&lt;br /&gt;
* stop the server&lt;br /&gt;
  vzctl stop &amp;lt;veid&amp;gt;&lt;br /&gt;
* prevent restart on server reboot and user restart via Control Panel&lt;br /&gt;
  vzctl set &amp;lt;veid&amp;gt; --disabled=yes --offline_management=no --save&lt;br /&gt;
&lt;br /&gt;
== cancelve ==&lt;br /&gt;
When a customer cancels all service.&lt;br /&gt;
&lt;br /&gt;
 cancelve &amp;lt;veid&amp;gt;&lt;br /&gt;
this will:&lt;br /&gt;
* stop a ve&lt;br /&gt;
* check for backups (offer to remove them from the backup server &lt;br /&gt;
and the backup.config)&lt;br /&gt;
* rename the private dir&lt;br /&gt;
* check for PTR, provide the commands to reset to default&lt;br /&gt;
* rename the ve’s config&lt;br /&gt;
* remind you to remove firewall rules&lt;br /&gt;
* remind you to remove DNS entries&lt;br /&gt;
&lt;br /&gt;
== ipadd ==&lt;br /&gt;
 ipadd  &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
adds ip(s) to a ve&lt;br /&gt;
&lt;br /&gt;
== ipdel ==&lt;br /&gt;
 ipdel &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
removes ip(s) from a ve&lt;br /&gt;
&lt;br /&gt;
== vc ==&lt;br /&gt;
 vc &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
display the contents of the vz config file for the &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;cat /vzconf/&amp;lt;veid&amp;gt;.conf&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vl ==&lt;br /&gt;
 vl&lt;br /&gt;
&lt;br /&gt;
List all virtual environments running on this machine.&lt;br /&gt;
&lt;br /&gt;
displays a list of ve #’s, 1 per line (ostensibly to use in a for loop)&lt;br /&gt;
&lt;br /&gt;
== vp ==&lt;br /&gt;
 vp &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
show processes running under this &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzps auxww -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vpe ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vpe &amp;lt;veid&amp;gt; &lt;br /&gt;
this will allow you to do a vp when a ve is running out of control, the equivalent of (deprecated since vp operates outside the VPS): &lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set &amp;lt;veid&amp;gt; --kmemsize 2100000:2200000&lt;br /&gt;
vzctl exec &amp;lt;veid&amp;gt; ps auxw&lt;br /&gt;
vzctl set &amp;lt;veid&amp;gt; --kmemsize (ve’s orig lvalue):(ve’s orig hvalue)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vt ==&lt;br /&gt;
 vt &amp;lt;veid&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vztop -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vr ==&lt;br /&gt;
 vr &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart a &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl stop &amp;lt;veid&amp;gt;; vzctl start &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
You can run this even if the ve is down - the stop command will just fail&lt;br /&gt;
&lt;br /&gt;
== vs ==&lt;br /&gt;
 vs [veid]&lt;br /&gt;
displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve configured on the system (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;)&lt;br /&gt;
If passed an argument, gives the status for just that ve. &lt;br /&gt;
A running system looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 16066 exist mounted running&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running (but does exist) looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 9990 exist unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running and doesn’t exist looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 421 deleted unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
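&lt;br /&gt;
For scripting against this output, the three cases parse cleanly with a read loop. A sketch with the sample lines above inlined (in practice you’d pipe &amp;lt;tt&amp;gt;vs&amp;lt;/tt&amp;gt; into it):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: summarize vs-style status lines (VEID <id> <exist> <mount> <state>).
printf '%s\n' \
  'VEID 16066 exist mounted running' \
  'VEID 9990 exist unmounted down' \
  'VEID 421 deleted unmounted down' |
while read -r _ veid existed _ state; do
    echo "$veid: $state ($existed)"
done
```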
&lt;br /&gt;
== vs2 ==&lt;br /&gt;
 vs2 [veid]&lt;br /&gt;
this is similar to vs in that it displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve,&lt;br /&gt;
but the difference is its list comes from doing an ls on the data dirs. This was meant to catch&lt;br /&gt;
the rare case where a ve exists on disk but is not configured.&lt;br /&gt;
&lt;br /&gt;
== vw ==&lt;br /&gt;
 vw [veid]&lt;br /&gt;
displays the output of ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;’ (the equivalent of &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; w&amp;lt;/tt&amp;gt;) for each configured ve (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;). Useful for determining which ve is contributing to a heavily-loaded system.&lt;br /&gt;
If passed an argument, gives ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;‘ output for just that ve. &lt;br /&gt;
Ex:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt2 etc]# vw&lt;br /&gt;
134&lt;br /&gt;
 10:52pm  up 79 days,  6:14,  0 users,  load average: 0.02, 0.02, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16027&lt;br /&gt;
  2:52pm  up 7 days, 19:54,  0 users,  load average: 0.00, 0.00, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16055&lt;br /&gt;
  2:52pm  up 79 days,  6:38,  0 users,  load average: 0.00, 0.04, 0.07&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vwe ==&lt;br /&gt;
 vwe [constraint]&lt;br /&gt;
just like &amp;lt;tt&amp;gt;vw&amp;lt;/tt&amp;gt;, but takes a constraint as an argument and only shows ve’s with loads &amp;gt;= the constraint provided. If no constraint is provided, 1 is used by default&lt;br /&gt;
&lt;br /&gt;
== vzs ==&lt;br /&gt;
 vzs [veid]&lt;br /&gt;
displays the beancounter status for all ve’s, or a particular ve if an argument is passed&lt;br /&gt;
&lt;br /&gt;
== ve ==&lt;br /&gt;
 ve &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
enter a &amp;lt;veid&amp;gt; as root.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl enter &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vx ==&lt;br /&gt;
 vx &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&lt;br /&gt;
&lt;br /&gt;
execute a single command in the &amp;lt;veid&amp;gt; environment.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;/usr/sbin/vzctl exec &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs [count]&lt;br /&gt;
a script which outputs a continuous report (or a certain number of reports if an option is passed) of processes stuck in the D state and which VPS’s those procs belong to.&lt;br /&gt;
&lt;br /&gt;
== setmem ==&lt;br /&gt;
 setmem VEID &amp;lt;256|512|768|1024|1280|1536|2048&amp;gt; [burst GB]&lt;br /&gt;
adjusts the memory resources for the VE. If this is on VZ ver &amp;gt;4, you will also be prompted to provide a burst ram amount. Sample usage:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# setmem 21671 1024 4&lt;br /&gt;
/usr/sbin/vzctl set 21671 --ram 1024m --swap 3072m --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== afacheck.sh ==&lt;br /&gt;
 afacheck.sh&lt;br /&gt;
displays the health/status of containers and mirrors on an adaptec card (currently quar1, tempvirt1-2, virt9, virt10) - all others are LSI&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update virt scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be setup as a cronjob to watch processes on a virt when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== findbackuppigs.pl ==&lt;br /&gt;
 findbackuppigs.pl&lt;br /&gt;
looks for files larger than 50MB which customers have asked us to backup. Emails matches&lt;br /&gt;
to linux@johncompanies.com&lt;br /&gt;
&lt;br /&gt;
== gatherlinux.pl ==&lt;br /&gt;
 gatherlinux.pl&lt;br /&gt;
gathers up data about ve’s configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficgather.pl ==&lt;br /&gt;
 linuxtrafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by ve to support@johncompanies.com and payments@johncompanies.com.&lt;br /&gt;
Optional arguments are year and month (must be in the past). If not passed, assumes last month. Relies on &lt;br /&gt;
traffic logs created by netstatreset and netstatbackup&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch.pl ==&lt;br /&gt;
 linuxtrafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo versions &amp;lt;= 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch2.pl ==&lt;br /&gt;
 linuxtrafficwatch2.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo version 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg  - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== mb (linux) ==&lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== migrate ==&lt;br /&gt;
 migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt;, a util to seamlessly move a ve from one host to another. This wrapper was written because virtuozzo version 2.6 had a bug where the ve’s ip(s) on the src system were not properly removed from the arp/route tables; this script mitigates that. Since it makes multiple ssh connections to the target host, it’s a good idea to put the pub key for the src system in the authorized_keys file on the target host. In addition, it emails ve owners when their migration starts and stops (if they place email addresses in a file on their system: /migrate_notify). To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
== migrateonline ==&lt;br /&gt;
 migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is the same as migrate but will migrate a ve in &amp;lt;tt&amp;gt;--online&amp;lt;/tt&amp;gt; mode, which means it won’t be shut down at the end of the migration. This only works when migrating ve’s between 2 machines running a 2.6 kernel (currently tempvirt1-2, virt16-19, virt12). If you get an error that the machine you’re trying to migrate to has a different CPU or features, etc, then you have to edit the file and add the -f switch to the vzmigrate line - you can basically ignore this kind of warning (but never ignore a warning about missing templates on the destination node). NOTE: This edit (if made to migrateonline) will be overwritten by the base script during each night’s backup.&lt;br /&gt;
&lt;br /&gt;
== netstatbackup ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup &lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== netstatbackup2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== netstatreset ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset&lt;br /&gt;
writes traffic count data to a logfile and resets counters to 0. Works on virtuozzo versions 2.5.x &lt;br /&gt;
&lt;br /&gt;
== netstatreset2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatchlinux ==&lt;br /&gt;
 orphanedbackupwatchlinux &lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to &lt;br /&gt;
delete them&lt;br /&gt;
&lt;br /&gt;
== rsync.backup (linux) ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== startvirt.pl ==&lt;br /&gt;
 startvirt.pl&lt;br /&gt;
forks off start ve commands, keeping 6 running at a time. Do not use this on systems where fastboot is enabled, as it circumvents the benefit of fastboot. Don’t exit from the script till you’re sure all ve’s are started; if you do, you’ll need to start them manually and may have to free up locks. On some systems the script doesn’t exit cleanly and will continue to use up CPU, so it should be watched; you may have to ^C out of it. Be careful though: doing so can leave some VE’s in an odd bootup state and you may need to ‘vr’ them manually, so check which ve’s aren’t running and/or confirm all have started when ^C’ing out of startvirt.&lt;br /&gt;
&lt;br /&gt;
== taskdone (linux) ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was &lt;br /&gt;
executed as the subject&lt;br /&gt;
&lt;br /&gt;
== vb (linux) ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/sbin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vemakeXX ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemakerh9 &lt;br /&gt;
ve create script for RH9 (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian30 &lt;br /&gt;
ve create script for debian 3.0 (Woody) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian31 &lt;br /&gt;
ve create script for debian 3.1 (Sarge) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian40 &lt;br /&gt;
ve create script for debian 4.0 (Etch) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakefedora, vemakefedora2, vemakefedora4, vemakefedora5, vemakefedora6, vemakefedora7&lt;br /&gt;
ve create script for fedora core 1, 2, 4, 5, 6, 7 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakecentos3, vemakecentos4&lt;br /&gt;
ve create script for centos 3, 4 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakesuse, vemakesuse93, vemakesuse100&lt;br /&gt;
ve create script for suse 9.2, 9.3, 10.0 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakeubuntu5, vemakeubuntu606, vemakeubuntu610, vemakeubuntu704&lt;br /&gt;
ve create script for ubuntu 5.10, 6.06, 6.10, 7.04 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
== vemove ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemove &amp;lt;veid&amp;gt; &amp;lt;target_ip&amp;gt; &amp;lt;/vz/private/123&amp;gt;&lt;br /&gt;
this script simplifies the old way of moving ve’s from one system to another - in short moving a ve to or from a virt running virtuozzo &amp;lt; 2.6.x&lt;br /&gt;
It’s the equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;tar cfpP - &amp;lt;veid&amp;gt; --ignore-failed-read | (ssh -2 -c arcfour &amp;lt;target_ip&amp;gt; &amp;quot;split - -b 1024m &amp;lt;/vz/private/123&amp;gt;.tar&amp;quot; )&amp;lt;/tt&amp;gt;&lt;br /&gt;
This should only be used if migrate/vzmigrate can’t be used. &lt;br /&gt;
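On the receiving end the split pieces have to be reassembled before the ve is usable; here is a sketch of the unpack step, demonstrated in a scratch directory (on a real virt the pieces would be /vz/private/123.taraa, 123.tarab, ... per split's default suffixes):

```shell
# Sketch: reassemble and unpack vemove's split tar pieces on the target.
set -e
cd "$(mktemp -d)"
mkdir -p 123/etc                       # stand-in for a ve's private area
echo hello > 123/etc/motd
tar cfp - 123 | split -b 1k - 123.tar  # what vemove runs on the target via ssh
rm -r 123                              # pretend we're now on the target host
cat 123.tar?? | tar xfp -              # concatenate the pieces, unpack
cat 123/etc/motd                       # prints: hello
```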
&lt;br /&gt;
== vim.watchdog ==&lt;br /&gt;
 vim.watchdog &lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu. Works on virtuozzo versions 2.5.x&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog2 ==&lt;br /&gt;
 vim.watchdog2&lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu.&lt;br /&gt;
Works on virtuozzo versions 2.6.x.&lt;br /&gt;
&lt;br /&gt;
== vzmigrate ==&lt;br /&gt;
 vzmigrate &amp;lt;target_ip&amp;gt; -r no &amp;lt;veid&amp;gt;:[dst veid]:[dst /vzX/private/veid]:[dst /vzX/root/veid]&lt;br /&gt;
(this is the raw command “wrapped” by migrate/migrateonline) It seamlessly moves a ve from one host to another. The ve keeps running for the duration of the migration until the very end, when it is shut down, its ip is moved, and it is started up on the target system. The filesystem on the src will remain. This should be watched: occasionally the move will time out and leave the system shut down. If target private and root aren’t specified it just puts it in /vz. Only works when both systems are running virtuozzo 2.6.x&lt;br /&gt;
&lt;br /&gt;
== vztrafdump.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions &amp;lt;= 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== vztrafdump2.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump2.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== addtun ==&lt;br /&gt;
 addtun &amp;lt;veid&amp;gt;&lt;br /&gt;
Adds a tun device to a ve.&lt;br /&gt;
&lt;br /&gt;
== bwcap ==&lt;br /&gt;
 bwcap &amp;lt;veid&amp;gt; &amp;lt;kbps&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;bwcap 1234 512&amp;lt;/tt&amp;gt;&lt;br /&gt;
Caps a VE’s bandwidth to the amount given&lt;br /&gt;
&lt;br /&gt;
== setdisk ==&lt;br /&gt;
 setdisk &amp;lt;veid&amp;gt; &amp;lt;diskspace in GB&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;setdisk 1234 5&amp;lt;/tt&amp;gt;&lt;br /&gt;
Gives a VE the specified amount of disk space&lt;br /&gt;
&lt;br /&gt;
== vdf ==&lt;br /&gt;
 vdf &amp;lt;veid&amp;gt; &lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; df -h&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vdff ==&lt;br /&gt;
 vdff&lt;br /&gt;
runs a (condensed) vdf for all ve’s in your pwd (must be run from /vz/privateN)&lt;br /&gt;
&lt;br /&gt;
== mvbackups ==&lt;br /&gt;
 mvbackups &amp;lt;veid&amp;gt; &amp;lt;target_machine&amp;gt; (virt1) &amp;lt;target_dir&amp;gt; (vz1)&lt;br /&gt;
moves backups from one location to another on the backup server, and provides you with option to remove entries from current backup.config, and simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== checkquota ==&lt;br /&gt;
 checkquota&lt;br /&gt;
for all the ve’s in the cwd (run from /vz/private, /vz1/private, etc) reports what vz quota says they’re using and what the actual usage is (as reported by du)&lt;br /&gt;
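The du half of that comparison is plain shell; a rough sketch follows (the vzquota side is omitted here since its output format varies by version):

```shell
# Report actual disk usage (in KB) of every numeric veid directory in the
# cwd, as checkquota's du pass would. Run from /vz/private, /vz1/private, etc.
for d in [0-9]*; do
    [ -d "$d" ] || continue
    printf '%s\t%s KB\n' "$d" "$(du -sk "$d" | cut -f1)"
done
```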
&lt;br /&gt;
== clearquota ==&lt;br /&gt;
 clearquota &amp;lt;veid&amp;gt;&lt;br /&gt;
Recalculates a ve’s quota, prints out the usage before and after. The equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;vdf &amp;lt;veid&amp;gt;; v stop &amp;lt;veid&amp;gt;; vzquota drop &amp;lt;veid&amp;gt;; v start &amp;lt;veid&amp;gt;; vdf &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs&lt;br /&gt;
Sometimes the servers have a large number of processes stuck in the D state. This script shows (every 3 secs) which VE’s have D procs, which procs are stuck, and a running average of the top “offenders”&lt;br /&gt;
&lt;br /&gt;
== vzstat ==&lt;br /&gt;
 vzstat&lt;br /&gt;
sort of like top for VZ. sort VEs by CPU usage by pressing &#039;o&#039; and then &#039;c&#039; keys&lt;br /&gt;
&lt;br /&gt;
== stopvirt ==&lt;br /&gt;
 stopvirt&lt;br /&gt;
will stop VEs as fast as it can, 6 at a time. May not exit when complete so you should watch [[#vzstat|vzstat]] in another window.&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2228</id>
		<title>VPS Management</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2228"/>
		<updated>2020-04-30T20:27:03Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Ganeti Install a Linux VPS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Common Problems =&lt;br /&gt;
== Login to any machine without a password ==&lt;br /&gt;
&lt;br /&gt;
This is possible via the use of ssh keys. The process is thus:&lt;br /&gt;
&lt;br /&gt;
1. place the public key for your user (root@mail) in the /root/.ssh/authorized_keys file on the server you wish to login to&lt;br /&gt;
 cat /root/.ssh/id_dsa.pub&lt;br /&gt;
(paste that into authorized_keys on the target server). If the file doesn&#039;t exist, create it.&lt;br /&gt;
&lt;br /&gt;
2. enable root login (usually only applies to FreeBSD). Edit the /etc/ssh/sshd_config on the target server and change:&lt;br /&gt;
&amp;lt;tt&amp;gt;#PermitRootLogin no&amp;lt;/tt&amp;gt;&lt;br /&gt;
to&lt;br /&gt;
&amp;lt;tt&amp;gt;PermitRootLogin yes&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Restart the sshd on the target machine. First, find the sshd process: &lt;br /&gt;
 jailps &amp;lt;hostname&amp;gt; | grep sshd &lt;br /&gt;
or &lt;br /&gt;
 vp &amp;lt;VEID&amp;gt; | grep sshd&lt;br /&gt;
&lt;br /&gt;
Look for the process resembling:&lt;br /&gt;
 root     17296  0.0  0.0  5280 1036 ?        Ss    2011   4:27 /usr/sbin/sshd &lt;br /&gt;
(this is the sshd)&lt;br /&gt;
&lt;br /&gt;
Not:&lt;br /&gt;
 root      6270  0.5  0.0  6808 2536 ?        Ss   14:33   0:00 sshd: root [priv]&lt;br /&gt;
(this is an sshd child- someone already ssh&#039;d in as root)&lt;br /&gt;
&lt;br /&gt;
Restart the sshd: &lt;br /&gt;
 kill -1 &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex:&lt;br /&gt;
 kill -1 17296&lt;br /&gt;
&lt;br /&gt;
You may now ssh in.&lt;br /&gt;
&lt;br /&gt;
Once you&#039;re done, IF you enabled root login, you should repeat steps 2 and 3 to disable root logins.&lt;br /&gt;
&lt;br /&gt;
== Letting someone in who has locked themselves out (killed sshd, lost pwd) ==&lt;br /&gt;
&lt;br /&gt;
There are two ways people frequently lock themselves out - either they forget a password, or they kill off sshd somehow.&lt;br /&gt;
&lt;br /&gt;
These are actually both fairly easy to solve.  First, let&#039;s say someone kills off their sshd, or somehow mangles /etc/ssh/sshd_config such that it no longer lets them in.&lt;br /&gt;
&lt;br /&gt;
Their email may be very short, or it may have all sorts of details about how you should fix sshd_config to let them in ... just ignore all of this. They can fix their own mangled sshd.  Fixing this is very simple.  First, edit the /etc/inetd.conf on their system and uncomment the telnet line:&lt;br /&gt;
&lt;br /&gt;
 telnet stream  tcp     nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
 #telnet stream  tcp6    nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
&lt;br /&gt;
(just leave the tcp6 version of telnet commented)&lt;br /&gt;
&lt;br /&gt;
Then, use jailps to list the processes on their system, and find their inetd process.  Then simply:&lt;br /&gt;
&lt;br /&gt;
 kill -HUP (pid)&lt;br /&gt;
&lt;br /&gt;
where (pid) is the PID of their inetd process.  Now they have telnet running on their system and they can log in and do whatever they need to do.&lt;br /&gt;
&lt;br /&gt;
The only complications that could occur are:&lt;br /&gt;
&lt;br /&gt;
a) their firewall config on our firewall has port 23 blocked, in which case you will need to open that - will be covered in a different lesson.&lt;br /&gt;
&lt;br /&gt;
b) they are not running inetd, so you can&#039;t HUP it.  If this happens, edit their /etc/rc.conf, add the inetd_enable=&amp;quot;YES&amp;quot; line, and then kill&lt;br /&gt;
their jail with /tmp/jailkill.pl - then restart their jail with the jail line from their quad/safe file.  Easy.&lt;br /&gt;
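The rc.conf edit in complication (b) can also be done non-interactively; a sketch, with the path parameterized for illustration (on a jail host you would point it at the jail's own etc/rc.conf under its -DIR mount point):

```shell
# Non-interactive, idempotent version of the rc.conf edit: append
# inetd_enable="YES" unless some inetd_enable line is already present.
RC_CONF=${RC_CONF:-$(mktemp)}   # placeholder path for this sketch
grep -q '^inetd_enable=' "$RC_CONF" || echo 'inetd_enable="YES"' >> "$RC_CONF"
cat "$RC_CONF"
```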
&lt;br /&gt;
If they have forgotten a password,&lt;br /&gt;
&lt;br /&gt;
On 6.x+ you can reset their password with:&lt;br /&gt;
 jexec &amp;lt;jailID from jls&amp;gt; passwd root&lt;br /&gt;
&lt;br /&gt;
Note: the default password for 6.x jails is 8ico2987, for 4.x it is p455agfa&lt;br /&gt;
&lt;br /&gt;
On 4.x, you need to cd to their etc directory&lt;br /&gt;
... for instance:&lt;br /&gt;
&lt;br /&gt;
 cd /mnt/data2/198.78.65.136-col00261-DIR/etc&lt;br /&gt;
&lt;br /&gt;
and run:&lt;br /&gt;
&lt;br /&gt;
 vipw -d .&lt;br /&gt;
&lt;br /&gt;
Then paste in these two lines (there’s a paste with these):&lt;br /&gt;
&lt;br /&gt;
 root:$1$krszPxhk$xkCepSnz3mIikT3vCtJCt0:0:0::0:0:Charlie &amp;amp;:/root:/bin/csh&lt;br /&gt;
 user:$1$Mx9p5Npk$QdMU6c8YQqp2FW2M3irEh/:1001:1001::0:0:User &amp;amp;:/home/user:/bin/sh&lt;br /&gt;
&lt;br /&gt;
overwriting the lines they already have for &amp;quot;user&amp;quot; and &amp;quot;root&amp;quot; - then just tell them that both user and root have been reset to the default password of p455agfa.&lt;br /&gt;
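The two entries above carry MD5-crypt ($1$salt$hash) password hashes. If you need a fresh hash rather than the canned pair, openssl can generate the same format (a sketch; assumes the openssl binary is available, and the salt shown is arbitrary):

```shell
# Generate an MD5-crypt password hash in the same $1$salt$hash format as the
# vipw lines above. -1 selects the MD5-based scheme; the salt is arbitrary.
openssl passwd -1 -salt Mx9p5Npk p455agfa
```

The output goes in the second field of the passwd entry.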
&lt;br /&gt;
For linux, just passwd inside shell or &lt;br /&gt;
 vzctl set &amp;lt;veid&amp;gt; --userpasswd root:p455agfa --save&lt;br /&gt;
&lt;br /&gt;
Starting in 2009 we began giving out randomized default passwords for FreeBSD and Linux. That password is stored with each system in Mgmt. In the event of a reset, look it up, reset the password back to it, and refer the customer to the original password from their welcome email; this way we don’t have to send the password again via email (in clear text).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== sendmail can’t be contacted from ext ip (only locally) ==&lt;br /&gt;
&lt;br /&gt;
By default redhat puts this line in sendmail.mc:&lt;br /&gt;
&lt;br /&gt;
 DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
which makes it only answer on localhost.  Comment it out like:&lt;br /&gt;
&lt;br /&gt;
 dnl DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
and then rebuild sendmail.cf with:&lt;br /&gt;
&lt;br /&gt;
 m4 /etc/mail/sendmail.mc &amp;gt; /etc/sendmail.cf&lt;br /&gt;
&lt;br /&gt;
== virt doesn’t properly let go of ve’s ip(s) when moved to another system ==&lt;br /&gt;
&lt;br /&gt;
On virtuozzo 2.6 systems, it&#039;s been observed that when moving ips from one virt to another that sometimes the routing table will not get updated to reflect the removal of the ip addresses.&lt;br /&gt;
&lt;br /&gt;
A recent example was a customer that was moving to a new ve on a new virt and the ip addresses were traded between the two ve&#039;s.  After the trade the two systems were not able to talk to each other.  When looking at the routing table for the old system all the ip addresses were still in the routing table as being local, like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;netstat -rn | grep 69.55.225.149&lt;br /&gt;
69.55.225.149   0.0.0.0         255.255.255.255 UH       40 0          0 venet0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was preventing traffic to the other system from being routed properly.&lt;br /&gt;
The solution is to manually delete the route:&lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.225.149 gw 0.0.0.0&lt;br /&gt;
&lt;br /&gt;
Supposedly, this was fixed in 2.6.1&lt;br /&gt;
&lt;br /&gt;
== sshd on FreeBSD 6.2 segfaults ==&lt;br /&gt;
&lt;br /&gt;
First try to reinstall ssh&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/secure&lt;br /&gt;
cd lib/libssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.sbin/sshd&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.bin/ssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Failing that, find the library that’s messed up:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;ldd /usr/sbin/sshd&lt;br /&gt;
         libssh.so.3 =&amp;gt; /usr/lib/libssh.so.3 (0x280a3000) &lt;br /&gt;
         libutil.so.5 =&amp;gt; /lib/libutil.so.5 (0x280d8000) &lt;br /&gt;
         libz.so.3 =&amp;gt; /lib/libz.so.3 (0x280e4000) &lt;br /&gt;
         libwrap.so.4 =&amp;gt; /usr/lib/libwrap.so.4 (0x280f5000) &lt;br /&gt;
         libpam.so.3 =&amp;gt; /usr/lib/libpam.so.3 (0x280fc000) &lt;br /&gt;
         libbsm.so.1 =&amp;gt; /usr/lib/libbsm.so.1 (0x28103000) &lt;br /&gt;
         libgssapi.so.8 =&amp;gt; /usr/lib/libgssapi.so.8 (0x28112000) &lt;br /&gt;
         libkrb5.so.8 =&amp;gt; /usr/lib/libkrb5.so.8 (0x28120000) &lt;br /&gt;
         libasn1.so.8 =&amp;gt; /usr/lib/libasn1.so.8 (0x28154000) &lt;br /&gt;
         libcom_err.so.3 =&amp;gt; /usr/lib/libcom_err.so.3 (0x28175000) &lt;br /&gt;
         libroken.so.8 =&amp;gt; /usr/lib/libroken.so.8 (0x28177000) &lt;br /&gt;
         libcrypto.so.4 =&amp;gt; /lib/libcrypto.so.4 (0x28183000) &lt;br /&gt;
         libcrypt.so.3 =&amp;gt; /lib/libcrypt.so.3 (0x28276000) &lt;br /&gt;
         libc.so.6 =&amp;gt; /lib/libc.so.6 (0x2828e000) &lt;br /&gt;
         libmd.so.3 =&amp;gt; /lib/libmd.so.3 (0x28373000)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
md5 them and compare to other jail hosts or jails running on host&lt;br /&gt;
&lt;br /&gt;
for libcrypto reinstall:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/crypto&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Moving a server to a different account in the management system ==&lt;br /&gt;
&lt;br /&gt;
This is a tough one. You have to touch a bunch of tables in the mysql database.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  systems&lt;br /&gt;
  subs&lt;br /&gt;
  ipmap&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
basically, get the sysid and go off that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
update systems set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update subs set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update ipmap set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You then need to rename the server disk space to change the old ID to the new one.&lt;br /&gt;
&lt;br /&gt;
For a jail -&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stopjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;     69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;-DIR 69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;-DIR&lt;br /&gt;
vi /usr/local/jail/bin/quad1       [Update the ID in the quad file]&lt;br /&gt;
startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
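The jail rename follows a fixed pattern; here is a sketch against scratch files with made-up IDs (the real operation runs in /mnt/dataN against the vn image file and its -DIR mount point, with the jail stopped first):

```shell
# Rename a jail's disk image and mount directory from the old customer ID
# to the new one. The IP and both IDs below are placeholders.
set -e
cd "$(mktemp -d)"
ip=69.55.225.10; old=col00123; new=col03456
touch "${ip}-${old}"                  # stand-in for the vn-backed image file
mkdir "${ip}-${old}-DIR"              # stand-in for its mount point
mv "${ip}-${old}"     "${ip}-${new}"
mv "${ip}-${old}-DIR" "${ip}-${new}-DIR"
ls -1
```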
&lt;br /&gt;
For a virt -&lt;br /&gt;
You will have to &lt;br /&gt;
* stop the virt &lt;br /&gt;
* rename the directories &lt;br /&gt;
* modify the config file /vzconf/&amp;lt;veid&amp;gt;.conf &lt;br /&gt;
* rename the virt conf file &lt;br /&gt;
* restart the new virt.&lt;br /&gt;
&lt;br /&gt;
That ought to do it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Re-enable a customer account in the management system ==&lt;br /&gt;
&lt;br /&gt;
For example to re-instate col03438, In mysql:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
show tables ;&lt;br /&gt;
select * from customers where cid=&#039;col03438&#039; ;&lt;br /&gt;
update customers set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from  systems where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set cancel_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from subs where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set status=1 where cid=&#039;col03438&#039; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ganeti VPS =&lt;br /&gt;
&lt;br /&gt;
This is how to create a VPS on the Lamphost ganeti cluster.&lt;br /&gt;
I have created scripts for creating a FreeBSD VPS and one for&lt;br /&gt;
creating a Linux VPS.&lt;br /&gt;
&lt;br /&gt;
== Ganeti Install a Linux VPS ==&lt;br /&gt;
&lt;br /&gt;
To create a linux VPS use&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/new &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
( for example /root/new vps.example.com 30 2 69.55.238.10 centos-8 )&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to customize the install by adding a &amp;quot;jci&amp;quot; account and a customer&lt;br /&gt;
account (col0XXXX) as well.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Ganeti Install a FreeBSD VPS ==&lt;br /&gt;
&lt;br /&gt;
To create a FreeBSD VPS, we have to do a full install.  This will &lt;br /&gt;
create a new VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/newfreebsd12 &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will then have to do the install via VNC.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You need to customize the install by adding a &amp;quot;jci&amp;quot; account and a customer&lt;br /&gt;
account (col0XXXX) as well.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
You will need to install sudo.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
pkg install sudo    # install sudo package&lt;br /&gt;
visudo              # to add wheel account to sudoers&lt;br /&gt;
vi /etc/group       # add jci and user acct to wheel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You will also have to set up a console in /etc/ttys.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vi /etc/ttys&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Ganeti restart a VPS ==&lt;br /&gt;
&lt;br /&gt;
To restart a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance reboot &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Ganeti shutdown a VPS ==&lt;br /&gt;
&lt;br /&gt;
To shut down a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance shutdown &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To permanently delete a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance remove &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Ganeti Re-Install a VPS ==&lt;br /&gt;
&lt;br /&gt;
Re-installing the OS will destroy all data on the VPS.&lt;br /&gt;
Login to the Ganeti master node (currently gn1).&lt;br /&gt;
&lt;br /&gt;
To reinstall the OS on a VPS, the VPS must be shut down.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance shutdown &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Do the reinstall.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance reinstall &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance startup &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to do the customization for the VPS.  This is&lt;br /&gt;
basically the same as the Dedicated server customization.&lt;br /&gt;
&lt;br /&gt;
 https://69.55.229.8/index.php/New_Signups#Dedicated_Server_Install_Checklist&lt;br /&gt;
&lt;br /&gt;
= FreeBSD 10.1 VPS (jail5 and jail6) =&lt;br /&gt;
&lt;br /&gt;
This is information on how to manage the FreeBSD 10.1 VPS servers on jail5 and jail6.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 virtmake IP[,IP] &amp;lt;CID&amp;gt; hostname ipfw# email {FSP|FB|FS|FM|FP} [size in GB]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
For now, the last 2 digits of the CID must be unique.&lt;br /&gt;
&lt;br /&gt;
Must do a &amp;quot;vm start &amp;lt;CID&amp;gt;&amp;quot; after creating a VPS.&lt;br /&gt;
&lt;br /&gt;
== Starting VPS ==&lt;br /&gt;
To start all vms on the system&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To start a single VPS &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== List running VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm status&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Attach to a user VPS console ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm attach &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To detach from a console, type tilde control-d (~ ctl-d).&lt;br /&gt;
&lt;br /&gt;
== Stop a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Restart a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm restart &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Modify a VPS ==&lt;br /&gt;
&lt;br /&gt;
To modify a VPS config, stop the VPS&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Mount the VPS data file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm mount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
cd /data/vm/private/&amp;lt;CID&amp;gt;/mount&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Modify the user files necessary.&lt;br /&gt;
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm umount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Notes on a new FBSD 10.x VPS (jail6) ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#! /bin/sh&lt;br /&gt;
#&lt;br /&gt;
#  new  &amp;lt;colo#&amp;gt;  &amp;lt;disk&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
memory=1024&lt;br /&gt;
&lt;br /&gt;
x=&amp;quot;$1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
disk=&amp;quot;$2&amp;quot;&lt;br /&gt;
if [ &amp;quot;y&amp;quot; = &amp;quot;y$disk&amp;quot; ] ; then   # no size given: default to 60 GB&lt;br /&gt;
    disk=60&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
disk2=`expr $disk - 4`&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vm image provision 04dcdf96-d843-11e6-b131-001e67fcc64f col0$x&lt;br /&gt;
# vm configure col0$x&lt;br /&gt;
sed -e &amp;quot;s/memory=256M/memory=${memory}M/&amp;quot; /vms/col01114/col01114.conf &amp;gt;/vms/col0$x/col0$x.conf&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
# vm start col0$x&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
exit&lt;br /&gt;
 &lt;br /&gt;
vm console col0$x&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
# to increase the customer disk&lt;br /&gt;
#&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
&lt;br /&gt;
##  on the customer VPS&lt;br /&gt;
&lt;br /&gt;
gpart recover vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapoff /dev/vtbd0p3&lt;br /&gt;
gpart delete -i 3 vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
sysctl kern.geom.debugflags=16&lt;br /&gt;
gpart resize -i 2 -a 4k -s $disk2\G vtbd0&lt;br /&gt;
&lt;br /&gt;
growfs /dev/vtbd0p2&lt;br /&gt;
&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
gpart add -t freebsd-swap -a 4k vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapon /dev/vtbd0p3&lt;br /&gt;
&lt;br /&gt;
df -h&lt;br /&gt;
reboot&lt;br /&gt;
&lt;br /&gt;
vi /etc/rc.conf&lt;br /&gt;
  hostname&lt;br /&gt;
  IP Addr&lt;br /&gt;
passwd root&lt;br /&gt;
passwd user&lt;br /&gt;
clear history&lt;br /&gt;
  cp /dev/null /var/log/utx.log&lt;br /&gt;
  cp /dev/null ~root/.history&lt;br /&gt;
  history -c&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= FreeBSD Jails =&lt;br /&gt;
&lt;br /&gt;
== Starting jails: Quad/Safe Files ==&lt;br /&gt;
&lt;br /&gt;
FreeBSD customer systems do not start up automatically at boot time.  When one of our freebsd machines boots up, it boots up, and does nothing else. To start jails, we put the commands to start each jail into a shell script(s) and run the script(s). Jail startup is something that needs to be actively monitored, which is why we don’t just run the script automatically. More on monitoring later.&lt;br /&gt;
&lt;br /&gt;
NOTE: &amp;gt;=7.x we have moved to 1 quad file: &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. Startups are not done by running each quad, but rather [[#startalljails|startalljails]] which relies on the contents of &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. The specifics of this are lower in this article. What follows here applies for pre 7.x systems.&lt;br /&gt;
&lt;br /&gt;
There are eight files in &amp;lt;tt&amp;gt;/usr/local/jail/rc.d&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail3# ls /usr/local/jail/rc.d/&lt;br /&gt;
quad1   quad2   quad3   quad4   safe1   safe2   safe3   safe4&lt;br /&gt;
jail3#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
four quad files and four safe files.&lt;br /&gt;
&lt;br /&gt;
Each file contains a roughly equal number of system startup blocks (the total number of jails divided by 4)&lt;br /&gt;
 &lt;br /&gt;
The reason for this is, if we make one large script to startup all the systems at boot time, it will take too long - the first system in the script will start up right after system boot, which is great, but the last system may not start for another 20 minutes.&lt;br /&gt;
&lt;br /&gt;
Since there is no way to parallelize this during the startup procedure, we simply open four terminals (in screen window 9) and run each script, one in each terminal. This way they all run simultaneously, and the very last system in each startup script gets started in 1/4th the time it would take if there were one large file.&lt;br /&gt;
&lt;br /&gt;
The files are generally organized so that quad/safe 1&amp;amp;2 have only jails from disk 1, and quad/safe 3&amp;amp;4 have jails from disk 2. This helps ensure that only 2 fscks on any disk are going on at once. Further, they are balanced so that all quad/safe’s finish executing around the same time. We do this by making sure each quad/safe has a similar number of jails  and represents a similar number of inodes (see js).&lt;br /&gt;
&lt;br /&gt;
The other, very important reason we do it this way, and this is the reason there are quad files and safe files, is that in the event of a system crash, every single vn-backed filesystem that was mounted at the time of system crash needs to be fsck&#039;d.  However, fsck&#039;ing takes time, so if we shut the system down gracefully, we don&#039;t want to fsck.&lt;br /&gt;
&lt;br /&gt;
Therefore, we have two sets of scripts - the four quad scripts are identical to the four safe scripts except for the fact that the quad scripts contain fsck commands for each filesystem.&lt;br /&gt;
&lt;br /&gt;
So, if you shut a system down gracefully, start four terminals and run safe1 in window one, and safe2 in window 2, and so on.&lt;br /&gt;
 &lt;br /&gt;
If you crash, start four terminals (or go to screen window 9) and run quad1 in window one, and quad2 in window 2, and so on.&lt;br /&gt;
&lt;br /&gt;
Here is a snip of (a 4.x version) quad2 from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
fsck -y /dev/vn16&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#fsck -y /dev/vn28&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
fsck -y /dev/vn22&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#fsck -y /dev/vn15&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, two of the systems specified are commented out - presumably those customers cancelled, or were moved to new servers.&lt;br /&gt;
&lt;br /&gt;
Note that the vnconfig line here is the simpler form of the command, not the longer one that was used when the filesystem was first configured.  All that is done is: vnconfig the filesystem, then fsck it, then mount it. The fourth command is the `jail` command used to start the system - that will be covered later.&lt;br /&gt;
&lt;br /&gt;
Here is the safe2 file from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, it is exactly the same, but it does not have the fsck lines.&lt;br /&gt;
&lt;br /&gt;
Take a look at the last entry - note that the file is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
&lt;br /&gt;
and the mount point is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
&lt;br /&gt;
This is the general format on all the FreeBSD systems.  The file is always named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber&lt;br /&gt;
&lt;br /&gt;
and the directory is named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber-DIR&lt;br /&gt;
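The naming convention means the mount point, the IP, and the customer number can all be derived from the backing file path with plain string manipulation. A sketch (the path is an example):&lt;br /&gt;

```shell
# Sketch: derive the mount point and components from a backing file
# path in the IP-custnumber format used on all the FreeBSD systems.
file=/mnt/data2/69.55.238.5-col00106
dir="${file}-DIR"           # mount point is always the file name plus -DIR
base=${file##*/}            # strip the directory: 69.55.238.5-col00106
ip=${base%-*}               # everything before the last dash: the IP
cust=${base##*-}            # everything after the last dash: the customer number
echo "$dir $ip $cust"
```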
&lt;br /&gt;
If you run a safe file when you actually need an fsck, the mount will fail and so will the jail command:&lt;br /&gt;
&lt;br /&gt;
 # mount /dev/vn1c /mnt/data2/jails/65.248.2.131-ns1.kozubik.com-DIR&lt;br /&gt;
 mount: /dev/vn1c: Operation not permitted&lt;br /&gt;
&lt;br /&gt;
No reboot is needed - just run the quad script.&lt;br /&gt;
&lt;br /&gt;
Starting with 6.x jails, we added block delimiters to the quad/safe files; a block looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: nuie.solaris.mu&#039;&lt;br /&gt;
fsck -y /dev/concat/v30v31a&lt;br /&gt;
mount /dev/concat/v30v31a /mnt/data1/69.55.228.218-col01441-DIR&lt;br /&gt;
mount_devfs devfs /mnt/data1/69.55.228.218-col01441-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.228.218-col01441-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.228.218-col01441-DIR nuie.solaris.mu 69.55.228.218 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: nuie.solaris.mu&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These are more than just informative when running quad/safes: the echo lines MUST be present for certain tools to work properly. So it&#039;s important that any update to the hostname also be made on the 2 echo lines. For example, if you try to startjail a jail whose hostname is on the jail line but not the echo lines, the command will return with host not found.&lt;br /&gt;
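To illustrate why the delimiters matter: a tool can pull out one jail&#039;s start block purely by matching the hostname on the begin/end echo lines, with no other parsing. A sketch using canned quad content (the real tools presumably work similarly, but their exact implementation is not shown here):&lt;br /&gt;

```shell
# Sketch: extract a single jail's start block from a quad file by
# matching the hostname on the '## begin ##' and '## end ##' lines.
# The quad content below is a sample inlined for illustration.
cat > /tmp/quad.example <<'EOF'
echo '## begin ##: nuie.solaris.mu'
fsck -y /dev/concat/v30v31a
mount /dev/concat/v30v31a /mnt/data1/69.55.228.218-col01441-DIR
jail /mnt/data1/69.55.228.218-col01441-DIR nuie.solaris.mu 69.55.228.218 /bin/sh /etc/rc
echo '## end ##: nuie.solaris.mu'
echo '## begin ##: other.example.org'
echo '## end ##: other.example.org'
EOF
host=nuie.solaris.mu
# print only the lines between this host's begin and end markers;
# if the echo lines do not carry the hostname, nothing is found
sed -n "/## begin ##: $host'/,/## end ##: $host'/p" /tmp/quad.example \
    > /tmp/block.example
```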
&lt;br /&gt;
=== FreeBSD 7.x+ notes ===&lt;br /&gt;
&lt;br /&gt;
Starting with the release of FreeBSD 7.x, we are doing jail startups in a slightly different way. First, there is only 1 file: &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/quad1&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are no other quads or corresponding safe files. The reason for this is twofold: 1. we can pass -C to fsck, which tells it to skip the fsck if the filesystem is clean (no more need for safe files); 2. we have a new startup script which can be launched multiple times, running in parallel to start jails, where quad1 is the master jail file. &lt;br /&gt;
Quad1 could still be run as a shell script, but it would take a very long time to run completely, so that&#039;s not advisable; if you must, break it down into smaller chunks (like quad1, quad2, quad3, etc.)&lt;br /&gt;
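The parallel starter can be sketched as identical workers consuming one master list, each claiming jails via an atomic lock so no jail is ever started twice no matter how many copies run. This is a hedged sketch (mkdir-based locks, stand-in jail names and a stand-in start action), not the actual startalljails script; note how clearing the lock directory up front mirrors what preboot does with the lockfiles.&lt;br /&gt;

```shell
# Sketch: 4 concurrent workers over one master list; mkdir is atomic,
# so exactly one worker wins the lock for each jail and starts it.
lockdir=/tmp/jaillocks; logdir=/tmp/jailstarts
rm -rf "$lockdir" "$logdir"      # preboot-style cleanup of stale locks
mkdir -p "$lockdir" "$logdir"
printf '%s\n' jailA jailB jailC jailD > /tmp/quad1.list

start_worker() {
    while read -r jail; do
        # losers of the mkdir race skip to the next jail in the list
        mkdir "$lockdir/$jail" 2>/dev/null || continue
        echo "started $jail" > "$logdir/$jail"   # stand-in for the real start
    done < /tmp/quad1.list
}
# run 4 workers in parallel, as in the 4 screen windows
start_worker & start_worker & start_worker & start_worker &
wait
```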
&lt;br /&gt;
Here is a snip of (a 7.x version) quad1 from jail2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: projects.tw.com&#039;&lt;br /&gt;
mdconfig -a -t vnode -f /mnt/data1/69.55.230.46-col01213 -u 50&lt;br /&gt;
fsck -Cy /dev/md50c&lt;br /&gt;
mount /dev/md50c /mnt/data1/69.55.230.46-col01213-DIR&lt;br /&gt;
mount -t devfs devfs /mnt/data1/69.55.230.46-col01213-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.230.46-col01213-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.230.46-col01213-DIR projects.tw.com 69.55.230.46 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: projects.tw.com&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cancelled jails are no longer commented out and kept in quad1; rather, they&#039;re moved to &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/deprecated&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
To start these jails, start the 4 ssh sessions as you would for a normal crash, then instead of running quad1-4, run startalljails in each window. IMPORTANT: before running startalljails, make sure you have run preboot once, since preboot clears out all the lockfiles and enables startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== Problems with the quad/safe files ==&lt;br /&gt;
&lt;br /&gt;
When you run the quad/safe files, two problems can occur: either a particular system will hang during initialization, or a system will spit out output to the screen, impeding your ability to do anything.  Or both.&lt;br /&gt;
&lt;br /&gt;
First off, when you start a jail, you see output like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;Skipping disk checks ...&lt;br /&gt;
adjkerntz[25285]: sysctl(put_wallclock): Operation not permitted&lt;br /&gt;
Doing initial network setup:.&lt;br /&gt;
ifconfig: ioctl (SIOCDIFADDR): permission denied&lt;br /&gt;
lo0: flags=8049&amp;lt;UP,LOOPBACK,RUNNING,MULTICAST&amp;gt; mtu 16384&lt;br /&gt;
Additional routing options: TCP keepalive=YESsysctl:&lt;br /&gt;
net.inet.tcp.always_keepalive: Operation not permitted.&lt;br /&gt;
Routing daemons:.&lt;br /&gt;
Additional daemons: syslogd.&lt;br /&gt;
Doing additional network setup:.&lt;br /&gt;
Starting final network daemons:.&lt;br /&gt;
ELF ldconfig path: /usr/lib /usr/lib/compat /usr/X11R6/lib /usr/local/lib&lt;br /&gt;
a.out ldconfig path: /usr/lib/aout /usr/lib/compat/aout /usr/X11R6/lib/aout&lt;br /&gt;
Starting standard daemons: inetd cron sshd sendmail sendmail-clientmqueue.&lt;br /&gt;
Initial rc.i386 initialization:.&lt;br /&gt;
Configuring syscons: blanktime.&lt;br /&gt;
Additional ABI support:.&lt;br /&gt;
Local package initialization:.&lt;br /&gt;
Additional TCP options:.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, let&#039;s look at this line, near the end:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization:.&lt;br /&gt;
&lt;br /&gt;
This is where the list of daemons that are set to start at boot time will show up.  You might see something like:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache sendmail sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
Or something like this:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: postgres postfix apache&lt;br /&gt;
&lt;br /&gt;
The problem is that many systems (about 4-5 per machine) will hang on that line.  Basically it will get partway through the list of daemons to be started:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache&lt;br /&gt;
&lt;br /&gt;
and will just sit there.  Forever.&lt;br /&gt;
&lt;br /&gt;
Fortunately, pressing ctrl-c will break out of it.  Not only will it break out of it, but it will also continue on that same line and start the other daemons:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache ^c sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
and then continue on to finish the startup, and then move to the next system to be started.&lt;br /&gt;
&lt;br /&gt;
So what does this mean?  It means that if a machine crashes, and you start four screen windows to run four quads or four safes, you need to periodically cycle between them and see if any systems are stuck at that point, causing their quad/safe file to hang.  A good rule of thumb: if you see a system at that point in the startup, give it another 100 seconds - if it is still at the exact same spot, hit ctrl-c. It&#039;s also a good idea to go back into the quad file (just before the first command in the jail startup block) and note that this jail tends to need a ctrl-c or more time, as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;### NOTE ### slow sendmail&#039;&lt;br /&gt;
echo &#039;### NOTE ###: ^C @ Starting sendmail.&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NEVER&#039;&#039;&#039; hit ctrl-c repeatedly if you don&#039;t get an immediate response - that will cause the following jail’s startup commands to be aborted.&lt;br /&gt;
&lt;br /&gt;
A second problem that can occur is that a jail - maybe the first one in that particular quad/safe, maybe the last one, or maybe one in the middle, will start spitting out status or error messages from one of its init scripts.  This is not a problem - basically, hit enter a few times and see if you get a prompt - if you do get a prompt, that means that the quad/safe script has already completed.  Therefore it is safe to log out (and log out of the user that you su&#039;d from) and then log back in (if necessary).&lt;br /&gt;
&lt;br /&gt;
The tricky case is when a system in the middle starts flooding with messages, and you hit enter a few times and don&#039;t get a prompt.  Are you not getting a prompt because some subsequent system is hanging at initialization, as discussed above?  Or because that quad file is currently running an fsck?  Usually you can tell by scrolling back in screen&#039;s history to see what it was doing before you started getting the messages.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t get clues from the history, you have to use your judgement - instead of giving it 100 seconds to respond, perhaps give it 2-3 mins ... if you still get no response (no prompt) when you hit enter, hit ctrl-c.  However, be aware that you might still be hitting ctrl-c in the middle of an fsck.  This means you will get an error like &amp;quot;filesystem still marked dirty&amp;quot;, the vnconfig for it will fail and so will the jail command, and the next system in the quad file will then begin starting up.&lt;br /&gt;
&lt;br /&gt;
If this happens, just wait until all the quad files have finished, and start that system manually.&lt;br /&gt;
&lt;br /&gt;
If things really get weird - a screen flooded with errors, no prompt, and ctrl-c does nothing - then eventually (give it ten mins or so) just kill that window with ctrl-p, then k, then log in again, manually check which systems are running and which aren&#039;t, and manually start up any that are not.&lt;br /&gt;
&lt;br /&gt;
Don&#039;t EVER risk running a particular quad/safe file a second time.&lt;br /&gt;
If the quad/safe script gets executed twice, reboot the machine immediately.&lt;br /&gt;
&lt;br /&gt;
So, for all the above reasons, anytime a machine crashes and you run all the quads or all the safes, &#039;&#039;&#039;always&#039;&#039;&#039; check every jail afterwards to make sure it is running - even if you have no hangs or complications at all.&lt;br /&gt;
Run this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailpsall|jailpsall]]&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Note: [[#postboot|postboot]] also populates ipfw counts, so it &#039;&#039;&#039;should not be run multiple times&#039;&#039;&#039;; use &amp;lt;tt&amp;gt;jailpsall&amp;lt;/tt&amp;gt; for subsequent extensive ps&#039;ing&lt;br /&gt;
&lt;br /&gt;
And make sure they all show as running.  If one does not show as running, check its /etc/rc.conf file to see if it is using a different hostname before starting it manually.&lt;br /&gt;
&lt;br /&gt;
One thing we have implemented to alleviate these startup hangs and noisy jails is to put jail start blocks that are slow or prone to hanging at the bottom of the safe/quad file. Further, for each bad jail we note in each quad/safe, just before the start block, something like:&lt;br /&gt;
&lt;br /&gt;
 echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
&lt;br /&gt;
That way we&#039;ll be prepared to ^C when we see that message appear during the quad/safe startup process. If you observe a new, undocumented hang, &#039;&#039;&#039;after&#039;&#039;&#039; the quad/safe has finished, place a line similar to the above in the quad file, move the jail start block to the end of the file, then run [[#buildsafe|buildsafe]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 4.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which partition, IP to put it on, find an unused vn, and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vnX hostname shorthost quad/safe# ipfw# email [gb disk, default 4]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail14# /tmp/jailmake 69.55.226.152 /mnt/data2/69.55.226.152-col00182 vn23 negev.cerastes.org negev 4 182 cerastes@cerastes.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 6.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which gvinum vols are free (and which mnt (disk) each gvinum goes with/to), IP to put it on and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vN[,vN] hostname shorthost quad/safe# ipfw# email&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail19# jailmake 69.55.236.17 /mnt/data1/69.55.236.17-col01574 v10 uncle-leo.commadev.com uncle-leo 1 1574 lee@commadev.com,lee@gmail.com&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Changing an ip for freebsd VPS ==&lt;br /&gt;
&lt;br /&gt;
*stopjail (hostname)&lt;br /&gt;
*on systems using vnfile or mdfile: rename vnfile/mdfile, using new ip&lt;br /&gt;
*rename dir using new ip&lt;br /&gt;
*Edit quad (make sure to update all lines: directory, vn/mdfile)&lt;br /&gt;
*buildsafe (not necessary on systems having only quad1)&lt;br /&gt;
*if you&#039;re adding an ip not already on the host machine: &amp;lt;tt&amp;gt;ipadd x.x.x.x&amp;lt;/tt&amp;gt;&lt;br /&gt;
*startjail (hostname)&lt;br /&gt;
if backups: &amp;lt;tt&amp;gt;mvbackups&amp;lt;/tt&amp;gt; ...&lt;br /&gt;
*Edit dir/ip in mgmt&lt;br /&gt;
*if any rules, update any firewall rules to use the new ip (be careful: make sure there aren&#039;t multiple rules per ipfw# - search by colo#)&lt;br /&gt;
*(if customer asks and has domains) update their domains on ns1c&lt;br /&gt;
*optional: update ipfw counters on the host server&lt;br /&gt;
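Since both the backing file and the -DIR mount point embed the IP, the rename steps above amount to two mv commands with the IP substituted. A sketch under /tmp with example IPs and customer number (on the real host these would be paths under /mnt/dataN):&lt;br /&gt;

```shell
# Sketch: rename a jail's backing file and directory for an IP change.
# Old/new IPs and the customer number are examples only.
old=69.55.236.132; new=69.55.237.40; cust=col00368
base=/tmp/ipchange
rm -rf "$base"; mkdir -p "$base"
touch "$base/$old-$cust"            # stand-in for the vnfile/mdfile
mkdir "$base/$old-$cust-DIR"        # stand-in for the mount point
# rename the vnfile/mdfile, then the directory, using the new ip
mv "$base/$old-$cust" "$base/$new-$cust"
mv "$base/$old-$cust-DIR" "$base/$new-$cust-DIR"
```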
&lt;br /&gt;
&lt;br /&gt;
== Rename a gconcat vol ==&lt;br /&gt;
&lt;br /&gt;
*stop jail, unmount everything&lt;br /&gt;
*gconcat stop vol&lt;br /&gt;
*gconcat clear /dev/gvinum/vN&lt;br /&gt;
*gconcat clear /dev/gvinum/vN+1&lt;br /&gt;
*gconcat label -v newname /dev/gvinum/vN /dev/gvinum/vN+1&lt;br /&gt;
*bsdlabel /dev/concat/newname&lt;br /&gt;
(make sure a partition is not fstype unused, if so change to: 4.2BSD)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Remaking a VPS (on same jail) ==&lt;br /&gt;
&lt;br /&gt;
# take him down (stopjail)&lt;br /&gt;
# make note of host, vn/gvin/md, ip&lt;br /&gt;
# remove from quad/safe&lt;br /&gt;
# move vnfile (rename)&lt;br /&gt;
# remove dir (jailmake will make a new one)&lt;br /&gt;
# remove ipfw counts&lt;br /&gt;
# jailmake&lt;br /&gt;
# update backup script&lt;br /&gt;
# update db with new dir/ip (use the “move” link, mark old as stopped today)&lt;br /&gt;
# update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (FreeBSD) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all jails back up as soon as possible. Note the time - you&#039;ll need it to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages output. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log. If there are no messages, the machine may just be really busy - wait a bit (5-10min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. Note: if you see messages about swap space exhausted, the server is obviously out of memory; however, it may recover briefly enough for you to get a jtop in to see who has launched a ton of procs (most likely) and then issue a quick jailkill to get it back under control.&lt;br /&gt;
&lt;br /&gt;
If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] - i.e. if you can&#039;t ssh into the DRAC card (as root, using the standard root pass) and issue &lt;br /&gt;
 racadm serveraction hardreset&lt;br /&gt;
then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console:&lt;br /&gt;
 tip jailX&lt;br /&gt;
immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts, look out for the RAID card output - does everything look healthy? The output may be scrambled; look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple of times during boot. The reason you want to quickly re-attach is twofold: 1. if you don&#039;t reattach quickly, you won&#039;t get any console output; 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you&#039;ll have seen no indication it started an fsck and the server will appear frozen during startup - no output, no response. &lt;br /&gt;
&lt;br /&gt;
IMPORTANT NOTE: on some older FreeBSD systems, there will be no output to the video (KVM) console as it boots up. The console output is redirected to the serial port ... so if a jail crashes, and you attach a kvm, the output during the bootup procedure will not be shown on the screen. However, when the bootup is done, you will get a login prompt on the screen and will be able to log in as normal.  &amp;lt;tt&amp;gt;/boot/loader.conf&amp;lt;/tt&amp;gt; is where serial console redirect output lives, so comment that if you want to catch output on kvm.&lt;br /&gt;
On newer systems it sends most output to both locations. &lt;br /&gt;
&lt;br /&gt;
=== Assess the health of the server ===&lt;br /&gt;
Once the server boots up fully, you should be able to ssh in. Look around- make sure all the mounts are there and reporting the correct size/usage (i.e. /mnt/data1 /mnt/data2 /mnt/data3 - look in /etc/fstab to determine which mount points should be there), check to see if RAID mirrors are healthy. See [[RAID_Cards#Common_CLI_commands_.28megacli.29|megacli]], [[#aaccheck|aaccheck]]&lt;br /&gt;
&lt;br /&gt;
Before you start the jails, you need to run [[#preboot|preboot]]. This will do some assurance checks to make sure things are prepped to start the jails. Any issues that come out of preboot need to be addressed before starting jails.&lt;br /&gt;
&lt;br /&gt;
=== Start jails ===&lt;br /&gt;
[[#Starting_jails:_Quad.2FSafe_Files|More on starting jails]]&lt;br /&gt;
Customer jails (the VPSs) do not start up automatically at boot time. When a FreeBSD machine boots up, it boots the base system and does nothing else. To start jails, we put the commands to start each jail into a shell script (or scripts) and run them. Jail startup needs to be actively monitored, which is why we don&#039;t just run the scripts automatically. &lt;br /&gt;
&lt;br /&gt;
In order to start jails, we run the quad files: quad1 quad2 quad3 and quad4 (on new systems there is only quad1). If the machine was cleanly rebooted (which wouldn&#039;t be the case if this was a crash), you may run the safe files (safe1 safe2 safe3 safe4) in lieu of quads. &lt;br /&gt;
&lt;br /&gt;
Open up 4 logins to the server (use the windows in [[Screen#Screen_Organization|a9]])&lt;br /&gt;
In each of the 4 windows you will:&lt;br /&gt;
&lt;br /&gt;
If there is a [[#startalljails|startalljails]] script (and only quad1), run that command in each of the 4 windows. It will parse through the quad1 file and start each jail. Follow the instructions [[#Problems_with_the_quad.2Fsafe_files|here]] for monitoring startup. Note that you can be a little more lenient with jails that take a while to start - startalljails will work around the slow jails and start the rest. As long as there aren&#039;t 4 jails which are &amp;quot;hung&amp;quot; during startup, the rest will get started eventually.&lt;br /&gt;
	-or-&lt;br /&gt;
If there is no startalljails script, there will be multiple quad files. In each of the 4 windows, start each of the quads. i.e. start quad1 in window1, quad2 in window2 and so on. DO NOT start any quad twice. It will crash the server. If you accidentally do this, just jailkill all the jails which are in the quad and run the quad again. Follow the instructions here for monitoring quad startup.&lt;br /&gt;
&lt;br /&gt;
Note the time the last jail boots- this is what you will enter in the crash log.&lt;br /&gt;
&lt;br /&gt;
Save the crash log.&lt;br /&gt;
&lt;br /&gt;
=== Check to make sure all jails have started ===&lt;br /&gt;
There&#039;s a simple script which will make sure all jails have started, and enter the ipfw counter rules: [[#postboot|postboot]] &lt;br /&gt;
Run postboot, which will do a jailps on each jail it finds (excluding commented out jails) in the quad file(s). We&#039;re looking for 2 things:&lt;br /&gt;
# systems spawning processes out of control, or running too many procs&lt;br /&gt;
# jails which haven&#039;t started&lt;br /&gt;
On 7.x and newer systems it will print out the problems (which jails haven&#039;t started) at the conclusion of postboot. &lt;br /&gt;
On older systems you will need to watch closely to see if/when there&#039;s a problem, namely:&lt;br /&gt;
 &lt;br /&gt;
 [hostname] doesnt exist on this server&lt;br /&gt;
&lt;br /&gt;
When you get this message, it means one of 2 things:&lt;br /&gt;
1. the jail really didn&#039;t start:&lt;br /&gt;
When a jail doesn&#039;t start it usually boils down to a problem in the quad file. Perhaps the path name is wrong (data1 vs data2) or the name of the vn/mdfile is wrong. Once this is corrected, you will need to run the commands from the quad file manually, or you may use &amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. the customer has changed their hostname (and not told us) so their jail &#039;&#039;is&#039;&#039; running, just under a different hostname:&lt;br /&gt;
On systems with jls, this is easy to rectify. First, get the customer info: &amp;lt;tt&amp;gt;g &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
Then look for the customer in jls: &amp;lt;tt&amp;gt;jls | grep &amp;lt;col0XXXX&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
From there you will see their new hostname- you should update that hostname in the quad file: don&#039;t forget to edit it on the &amp;lt;tt&amp;gt;## begin ##&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;## end ##&amp;lt;/tt&amp;gt; lines, and in mgmt. &lt;br /&gt;
On older systems without jls this will be harder; you will need to look further to find their hostname - perhaps it&#039;s in their /etc/rc.conf&lt;br /&gt;
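The jls lookup works because the jail path still embeds the customer number even after a hostname change. A sketch against a canned jls-style sample (the column layout is an assumption based on typical jls output; on a real host you would pipe jls itself):&lt;br /&gt;

```shell
# Sketch: find a renamed jail's current hostname by grepping for the
# customer number embedded in the jail's path. Sample output is canned.
cat > /tmp/jls.sample <<'EOF'
   JID  IP Address      Hostname               Path
     3  69.55.228.13    new-name.example.net   /mnt/data2/69.55.228.13-col01063-DIR
     7  69.55.228.7     mail1.phimail.com      /mnt/data2/69.55.228.7-col00820-DIR
EOF
# the third column of the matching line is the jail's current hostname
grep col01063 /tmp/jls.sample | awk '{print $3}'
```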
&lt;br /&gt;
&lt;br /&gt;
Once all jails are started, do some spot checks- try to ssh or browse to some customers, just to make sure things are really ok.&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 7.x/8.x jail ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (md) ==&lt;br /&gt;
&lt;br /&gt;
NOTE: this doesn&#039;t apply to mx2, which uses gvinum. Use the same procedure as 6.x&lt;br /&gt;
NOTE: if you unmount before mdconfig, re-mdconfig (attach) then unmount then mdconfig -u again &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
(parts to change/customize are &amp;lt;tt&amp;gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space available from &amp;lt;tt&amp;gt;js&amp;lt;/tt&amp;gt;. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new md on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. Unmount it (including their devfs) but leave the md config&#039;d (so if you used stopjail, you will have to re-mdconfig it)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mdfile and mount name, and to see the current md device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. When there&#039;s enough room to place new system on an alternate, or the same drive:&lt;br /&gt;
USE CAUTION not to overwrite (touch, mdconfig) an existing md!!&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s 10g -f /mnt/data3/69.55.234.66-col01334 -u 97&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md97&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new space is on a different drive, move the mount point directory AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
md-unconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
archive old mdfile. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/old-col00241-mdfile-noarchive-20091211&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (vq1) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
(run &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, &amp;lt;tt&amp;gt;du&amp;lt;/tt&amp;gt; the dump file to confirm it matches size, roughly, with usage)&lt;br /&gt;
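The rough size comparison can be scripted. A minimal sketch, assuming both numbers are in KB (e.g. from &amp;lt;tt&amp;gt;du -k&amp;lt;/tt&amp;gt; on the dump file and the Used column of &amp;lt;tt&amp;gt;df -k&amp;lt;/tt&amp;gt;); the function name and the 10% tolerance are illustrative assumptions, not policy:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: compare dump file size against the filesystem's used space,
# both in KB. The 10% tolerance is an illustrative assumption.
size_check() {
    dump_kb=$1
    used_kb=$2
    awk -v d="$dump_kb" -v u="$used_kb" 'BEGIN {
        diff = d - u; if (diff < 0) diff = -diff
        if (diff / u > 0.10) print "MISMATCH"; else print "OK"
    }'
}

size_check 4100000 4000000   # within ~2.5% -> OK
size_check 2000000 4000000   # dump is half the usage -> MISMATCH
```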
&lt;br /&gt;
&lt;br /&gt;
unconfigure and remove old mdfile&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
rm /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(there should now be enough space to recreate your bigger system. If not, run sync a couple times)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new system (ok to reuse old mdfile and md#):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;10&amp;lt;/span&amp;gt;g -f /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt; -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
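The create/mount sequence above lends itself to a dry-run generator that only prints the commands so you can eyeball the names before running anything. A sketch; the helper name, argument order, and example values are assumptions:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: print (never execute) the md create/mount sequence for a
# customer mdfile. Args: data partition number, mdfile name, size, md unit.
md_create_cmds() {
    data=$1; name=$2; size=$3; unit=$4
    file="/mnt/data${data}/${name}"
    echo "touch ${file}"
    echo "mdconfig -a -t vnode -s ${size} -f ${file} -u ${unit}"
    echo "newfs /dev/md${unit}"
    echo "mount /dev/md${unit} ${file}-DIR"
    echo "cd ${file}-DIR"
}

md_create_cmds 1 69.55.234.66-col01334 10g 1
```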
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
umount nfs:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbu&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If md# changed (or mount point), edit quad (&amp;lt;tt&amp;gt;vq1&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. Optional: archive old mdfile&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;br&amp;gt;&lt;br /&gt;
gzip -c old-col01588-mdfile-noarchive-20120329 &amp;gt; /deprecated/old-col01588-mdfile-noarchive-20120329.gz&amp;lt;br&amp;gt;&lt;br /&gt;
mbu&amp;lt;br&amp;gt;&lt;br /&gt;
rm  old-col01588-mdfile-noarchive-20120329&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 6.x jail (gvinum/gconcat) ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (gvinum/gconcat) ==&lt;br /&gt;
&lt;br /&gt;
(parts to change are &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;highlighted&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to put the customers new space on a different partition (and create the new md on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;[[#stopjail|stopjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;[[#g|g]] &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mount name and existing volume/device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5a. When there&#039;s enough room to place new system on an alternate, or the same drive (using only UNUSED - including if it&#039;s in use by the system in question - gvinum volumes):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure the new device:&amp;lt;br&amp;gt;&lt;br /&gt;
A. for a 2G system (single gvinum volume):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;bsdlabel -r -w /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
-or- &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
B. for a &amp;gt;2G system (create a gconcat volume):&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
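For a contiguous run of gvinum volumes, the &amp;lt;tt&amp;gt;gconcat label&amp;lt;/tt&amp;gt; command can be generated rather than typed by hand. A sketch that only builds and prints the command string (the helper name is an assumption):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: build (not run) a `gconcat label` command for a contiguous
# run of gvinum volumes, e.g. 82..84 -> "gconcat label v82-v84 ...".
gconcat_label_cmd() {
    first=$1; last=$2
    devs=""
    n=$first
    while [ "$n" -le "$last" ]; do
        devs="$devs /dev/gvinum/v$n"
        n=$((n + 1))
    done
    echo "gconcat label v${first}-v${last}${devs}"
}

gconcat_label_cmd 82 84
```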
&lt;br /&gt;
&lt;br /&gt;
Other valid gconcat examples:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label v82-v84v109v112 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v109 /dev/gvinum/v112&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat label v82v83 /dev/gvinum/v82 /dev/gvinum/v83&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Note, long names will truncate: v144v145v148-v115 will truncate to v144v145v148-v1 (so you will refer to it as v144v145v148-v1 thereafter)&lt;br /&gt;
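You can preview what a long name will end up as before labeling. A sketch; the 15-character limit is inferred from the truncation example above, not from documentation:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: preview the truncated form of a long gconcat name.
# The 15-char limit is inferred from the example in the text.
truncate_name() {
    printf '%.15s\n' "$1"
}

truncate_name v144v145v148-v115   # -> v144v145v148-v1
```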
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; OR &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
A. &amp;lt;tt&amp;gt;mount /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
-or-&amp;lt;br&amp;gt;&lt;br /&gt;
B. &amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;) , run &amp;lt;tt&amp;gt;buildsafe&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space (i.e. if you want/need to reuse the existing gvinum volumes and add on more):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
(run df to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;concat/v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, du the dump file to confirm it matches size, roughly, with usage)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure the old gconcat volume&amp;lt;br&amp;gt;&lt;br /&gt;
list member gvinum volumes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat list &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Output will resemble:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;Geom name: v106v107&lt;br /&gt;
State: UP&lt;br /&gt;
Status: Total=2, Online=2&lt;br /&gt;
Type: AUTOMATIC&lt;br /&gt;
ID: 3530663882&lt;br /&gt;
Providers:&lt;br /&gt;
1. Name: concat/v106v107&lt;br /&gt;
   Mediasize: 4294966272 (4.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e2&lt;br /&gt;
Consumers:&lt;br /&gt;
1. Name: gvinum/sd/v106.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 0&lt;br /&gt;
   End: 2147483136&lt;br /&gt;
2. Name: gvinum/sd/v107.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 2147483136&lt;br /&gt;
   End: 4294966272&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
stop volume and clear members&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat stop &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat clear &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;gvinum/sd/v106.p0.s0 gvinum/sd/v107.p0.s0&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new device (it&#039;s ok to reuse old/former members):&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v106 /dev/gvinum/v107&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;), run &amp;lt;tt&amp;gt;buildsafe&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
TODO: clean up/clear old gvinum/gconcat vol&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - steps to tack a new gvinum volume onto an existing gconcat - leads to a corrupted fs&lt;br /&gt;
bsdlabel -e /dev/concat/v82-v84&lt;br /&gt;
&lt;br /&gt;
To figure out new size of the c partition, multiply 4194304 by the # of 2G gvinum volumes and subtract the # of 2G volumes:&lt;br /&gt;
10G: 4194304 * 5 - 5 = 20971515&lt;br /&gt;
8G: 4194304 * 4 - 4 = 16777212&lt;br /&gt;
6G: 4194304 * 3 - 3 = 12582909&lt;br /&gt;
4G: 4194304 * 2 - 2 = 8388606&lt;br /&gt;
&lt;br /&gt;
To figure out the new size of the a partition, subtract 16 from the c partition:&lt;br /&gt;
10G: 20971515 - 16 = 20971499&lt;br /&gt;
8G: 16777212 - 16 = 16777196&lt;br /&gt;
6G: 12582909 - 16 = 12582893&lt;br /&gt;
4G: 8388606 - 16 = 8388590&lt;br /&gt;
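The arithmetic above can be checked with a one-liner rather than by hand. A sketch (the function name is an assumption):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: compute the c and a partition sizes (in sectors) for a gconcat
# of N 2G gvinum volumes, per the formulas above:
#   c = 4194304 * N - N,  a = c - 16
part_sizes() {
    n=$1
    c=$((4194304 * n - n))
    a=$((c - 16))
    echo "c=$c a=$a"
}

part_sizes 5   # 10G -> c=20971515 a=20971499
part_sizes 2   # 4G  -> c=8388606 a=8388590
```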
&lt;br /&gt;
Orig:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a:  8388590       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c:  8388606        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
New:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a: 12582893       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c: 12582909        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
sync; sync&lt;br /&gt;
&lt;br /&gt;
growfs /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
sync&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
(keep running fsck’s till NO errors)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 4.x jail ==&lt;br /&gt;
&lt;br /&gt;
NOTE: if you unmount before vnconfig, re-vnconfig, then unmount, then vnunconfig&lt;br /&gt;
 &lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them.&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to dump to a different partition and create the new vn on the current partition. If there&#039;s no space to make the dump, then move old vn’s to backup or, worse, dump to backup.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. make vnfile, newfs and mount. USE CAUTION not to overwrite existing vn. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;vnconfig -T -S 9g -s labels -c /dev/vn30 /mnt/data2/65.214.160.117-col00496&amp;lt;br&amp;gt;&lt;br /&gt;
disklabel -r -w vn30 auto&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/vn30c&amp;lt;br&amp;gt;&lt;br /&gt;
mkdir /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/vn30c /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;dump -0a -f - /dev/vn1 | restore -r -f -&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. &amp;lt;tt&amp;gt;rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. unmount and unvnconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;umount /dev/vnNNc &amp;lt;br&amp;gt;&lt;br /&gt;
vnconfig -u /dev/vnNN&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. move vnfile. Ex: &amp;lt;tt&amp;gt;mv /mnt/data1/69.55.237.26-col00241 /mnt/data1/old-col00241-vnfile-20110312-noarchive&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove old dir: &amp;lt;tt&amp;gt;rmdir /mnt/data1/69.55.237.26-col00241-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. edit quad/safe to point to new location, run &amp;lt;tt&amp;gt;[[#buildsafe|buildsafe]]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start jail: &amp;lt;tt&amp;gt;[[#startjail|startjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
13. mv backups if necessary&lt;br /&gt;
 &lt;br /&gt;
WARNING: if you restore to unmounted vn then you’re actually restoring to /mnt/data1&lt;br /&gt;
you can tar and do above to recover&lt;br /&gt;
 &lt;br /&gt;
NOTE: you can’t move a customer to a system that has a base sys that is diff from the one they came from (BSD 4.8 to 4.5 = won’t work)&lt;br /&gt;
 &lt;br /&gt;
NOTE: we don’t mount procfs&#039;s to anyone’s system by default.&lt;br /&gt;
&lt;br /&gt;
NOTE: mount_nulls: &lt;br /&gt;
also goofy mount_nulls seen in df -k on jail2&amp;lt;br&amp;gt;&lt;br /&gt;
these also need to be set up in the safe and quad&amp;lt;br&amp;gt;&lt;br /&gt;
if the customer wants another, use the safe/quad entry as an example and add it to the file. Also exec the mount command at the prompt to add the mount (no reboot, remount or re-jail necessary)&lt;br /&gt;
&lt;br /&gt;
== Moving customer to another jail machine ==&lt;br /&gt;
&lt;br /&gt;
Systems should only be moved between similarly-versioned jails. If you cannot get a match, you should go to freebsd.org and see what the differences were between the version you&#039;re moving from and the new version. Many of the updates will be to drivers and kernels - these don&#039;t affect the customer. Perhaps they updated traceroute or a library...you should copy the new/changed files into the VPS/jail on the new host. An alternate method for moving to a different version would be to buildworld to bring their full distribution up to date. Optional steps for this kind of upgrade are included below.&lt;br /&gt;
&lt;br /&gt;
1. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;[[#jailkill|jailkill]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. create new device on target system&amp;lt;br&amp;gt;&lt;br /&gt;
4.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig -T -S 4g -s labels -c /dev/vn1 /mnt/data1/69.55.22x.x-col00XXX&lt;br /&gt;
disklabel -r -w vn1 auto&lt;br /&gt;
newfs /dev/vn1c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;bsdlabel -r -w /dev/gvinum/v1&lt;br /&gt;
newfs /dev/gvinum/v1a&lt;br /&gt;
 or &lt;br /&gt;
gconcat label v1-v3 /dev/gvinum/v1 /dev/gvinum/v2 /dev/gvinum/v3&lt;br /&gt;
bsdlabel -r -w /dev/concat/v1-v3&lt;br /&gt;
newfs /dev/concat/v1-v3a&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7.x+:&amp;lt;br&amp;gt;&lt;br /&gt;
Run jailmakeempty then skip steps 4 &amp;amp; 7 below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. make and mount the dir on new system. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;pre&amp;gt;mkdir /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
chmod 755 /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
mount [device] /mnt/data2/69.55.230.3-col00123-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. [[#stopjail|stopjail]] &amp;lt;hostname&amp;gt; 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. dump fs to new system. ex: &amp;lt;tt&amp;gt;[[#dumpremoterestore|dumpremoterestore]] /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&amp;lt;/tt&amp;gt; (make sure you can ssh as root on remote machine) &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. OPTIONAL BUILDWORLD:&amp;lt;br&amp;gt;&lt;br /&gt;
 cd /usr/src&lt;br /&gt;
 make world DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
 cd etc&lt;br /&gt;
 make distribution DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
(you may have to rm an openssh file and re-make dist)&lt;br /&gt;
&lt;br /&gt;
 rm -rf /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/periodic/daily/400.status-disks&lt;br /&gt;
 vi /etc/periodic/security/100.chksetuid&lt;br /&gt;
replace: &amp;lt;tt&amp;gt;MP=`mount -t ufs | grep -v &amp;quot; nosuid&amp;quot; | awk &#039;{ print $3 }&#039; | sort`&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
with: &amp;lt;tt&amp;gt;MP=&#039;/&#039; (use single quotes)&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ask user if they want ports overwritten with current, if yes:&lt;br /&gt;
 cp -r /usr/ports /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/usr&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d .&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d . &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cat /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target:&lt;br /&gt;
 cat &amp;gt; /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. edit quad on source system; copy over entries for the jail to the target system. &lt;br /&gt;
Take care that the vn/gvinum/gconcat/md devices on the target system aren&#039;t in use and that the /mnt/dataN path doesn&#039;t need changing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. run [[#buildsafe|buildsafe]] (if this is &amp;lt;=6.x) on target system (copies changes made to quad into safe file)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove ip from source system. Ex: &amp;lt;tt&amp;gt;ipdel 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. add ip to target system. Ex: &amp;lt;tt&amp;gt;ipadd 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start new system, with [[#startjail|startjail]] or manually, pasting entries found from running &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; on the new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. run &amp;lt;tt&amp;gt;[[#canceljail|canceljail]] col0xxxx&amp;lt;/tt&amp;gt; on source system. This should prompt you to remove backups if any existed. Do not let it do this, rather:&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
13. if backups existed move them to the new host via the &amp;lt;tt&amp;gt;[[#mvbackups_.28freebsd.29|mvbackups]] script&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
14. edit quad on source system, edit comment to reflect a move rather than a cancel, ex: &amp;lt;tt&amp;gt;# moved to jail2 col00241&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
15. edit mgmt to reflect new host and dir for new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
16. optional add ipfw rules on new system. Ex:&lt;br /&gt;
&amp;lt;tt&amp;gt;ipfw add 01231 count ip from 69.55.230.2 to any&lt;br /&gt;
ipfw add 01232 count ip from any to 69.55.230.2&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Increasing inodes for a VPS ==&lt;br /&gt;
&lt;br /&gt;
when doing the newfs: &lt;br /&gt;
 newfs -i  4096 ...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Jail services unable to contact services within same jail ==&lt;br /&gt;
&lt;br /&gt;
This is due to a messed up routing table (only seen it on jail2 before). &lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.228.65/32&lt;br /&gt;
 route add -net 69.55.228.65/32 -iface fxp0 -nostatic -cloning&lt;br /&gt;
 route add 69.55.228.65 -iface lo0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== /dev/null permission resets == &lt;br /&gt;
&lt;br /&gt;
Applies to 4.x jails only&lt;br /&gt;
&lt;br /&gt;
For some reason that I do not understand at all, the /dev/null node in customer jails, after the system is restarted, often reverts permissions to 0600 ... which is bad, because non-root processes cannot redirect to /dev/null anymore.  In fact, a fair number of server daemons will complain and/or fail if this is the case.&lt;br /&gt;
&lt;br /&gt;
So, after a restart, after all systems have restarted, run postboot, which does something like:&lt;br /&gt;
 &lt;br /&gt;
 for f in `df -k | grep /dev/vn | awk &#039;{print $6}&#039;` ; do chmod 0666 $f/dev/null ; done&lt;br /&gt;
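The mount-point extraction that loop relies on can be demonstrated safely against canned &amp;lt;tt&amp;gt;df -k&amp;lt;/tt&amp;gt; output. A sketch; the sample filesystems and paths are invented, and the chmod itself is omitted:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: the mount-point extraction used by the postboot loop, run
# against canned `df -k` output (sample data; chmod omitted).
df_sample='Filesystem 1K-blocks Used Avail Capacity Mounted on
/dev/ad0s1a 2026030 514082 1349866 28% /
/dev/vn30c 4065350 812020 2928116 22% /mnt/data2/65.214.160.117-col00496-DIR
/dev/vn31c 2026030 101301 1762727 5% /mnt/data1/69.55.230.3-col00123-DIR'

# Field 6 of each /dev/vn line is the jail mount point.
mounts=$(echo "$df_sample" | grep /dev/vn | awk '{print $6}')
echo "$mounts"
# each mount point $m would then get: chmod 0666 $m/dev/null
```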
&lt;br /&gt;
Later, after the system is up, if anyone ever complains about their /dev/null permissions for any reason ... really if anyone mentions&lt;br /&gt;
/dev/null in any way, tell them to run:&lt;br /&gt;
&lt;br /&gt;
 chmod 0666 /dev/null&lt;br /&gt;
&lt;br /&gt;
We have ultimately fixed this by adding chmods to the quad/safe&#039;s&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== mknod in a jail == &lt;br /&gt;
&lt;br /&gt;
For a customer who wants to run a chroot name server (named) in a jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /mnt/data1/&amp;lt;user directory&amp;gt;/var/named&lt;br /&gt;
mknod -c 1 1 null&lt;br /&gt;
mknod -c 2 3 random&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Postfix Problems ==&lt;br /&gt;
&lt;br /&gt;
This hasn&#039;t happened in a long time, probably no longer valid with modern OS&#039;s&lt;br /&gt;
&lt;br /&gt;
Postfix is an alternate MTA - people replace sendmail with it in much the same way that many people also replace sendmail with qmail.  The problem is that if you install postfix inside of a jail, by default it will not work properly - the user will see error messages like this:&lt;br /&gt;
&lt;br /&gt;
incoming mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 mail_queue_enter: create file incoming/298266.55902: File too large&lt;br /&gt;
&lt;br /&gt;
and further, sending mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 postdrop: warning: mail_queue_enter: create file maildrop/627930.56676: File too large&lt;br /&gt;
&lt;br /&gt;
This is very easy to solve.  In fact, the FreeBSD welcome email that is sent out by `jailmake` now contains this block of text:&lt;br /&gt;
&lt;br /&gt;
- if you plan on installing postfix, email us first and ask for a necessary patch.&lt;br /&gt;
&lt;br /&gt;
So, anytime you see a support email that says anything about postfix not working, paste that line from the welcome email into your response to them, and right away, forward the postfix patch to them in an email.&lt;br /&gt;
&lt;br /&gt;
Here are the instructions:&lt;br /&gt;
&lt;br /&gt;
Simply copy the patch into /usr/ports/mail/postfix/files, rename the file to ‘patch-file_limit.c’ and then cd to /usr/ports/mail/postfix and run `make install`.&lt;br /&gt;
&lt;br /&gt;
The postfix patch is the very first email in the support email box, and we never delete or save it because we always want it there to forward to people.  The instructions in the email are very clear, and it solves their problem 100% of the time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems with `find` ==&lt;br /&gt;
&lt;br /&gt;
There was a bug in the FreeBSD vn-filesystem code.  The bug causes the system to crash or hang when someone runs the `find` command inside their vn-backed filesystem (and even though the bug may have been fixed, due to the i/o overhead, we still don&#039;t like find).&lt;br /&gt;
&lt;br /&gt;
Now, this does not always happen - however, any jail machine with more than 20 systems on it will crash every night if every system on it runs the daily periodic script out of their crontab, and updates their locate database with the find command.&lt;br /&gt;
&lt;br /&gt;
(the locate database is a small, FreeBSD specific database that is populated nightly from a cron job, and is then used to provide fast answers to the `locate` command)&lt;br /&gt;
&lt;br /&gt;
On normal systems (32-40 jails), all that has to be done is to make sure nobody runs the daily periodic job. This means that the file /etc/crontab, instead of looking like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# do daily/weekly/monthly maintenance&lt;br /&gt;
1      3       *       *       *       root    periodic daily&lt;br /&gt;
15     4       *       *       6       root    periodic weekly&lt;br /&gt;
30     5       1       *       *       root    periodic monthly&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Needs to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# DO NOT UNCOMMENT THESE - contact support@johncompanies.com for details&lt;br /&gt;
#1      3       *       *       *       root    periodic daily&lt;br /&gt;
#15     4       *       *       6       root    periodic weekly&lt;br /&gt;
#30     5       1       *       *       root    periodic monthly&lt;br /&gt;
## DO NOT UNCOMMENT THESE ^^^^^^^^&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the problem will generally not occur at all.  New filesystem images are always altered in this way, so all systems are set like this.&lt;br /&gt;
&lt;br /&gt;
However, if you have enough systems running, even casual uses of find, outside of the periodic scripts, can cause the system to crash.  We are seeing this on jail13, which crashes about once per week.&lt;br /&gt;
&lt;br /&gt;
There is not yet any resolution to this problem.&lt;br /&gt;
&lt;br /&gt;
So, if you have a freebsd system crash, see if anyone has uncommented their daily periodic line - you can see them for everyone by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;for f in `df -k | grep vn | awk &#039;{print $6}&#039;` ; do echo `cat $f/etc/crontab | grep &amp;quot;periodic daily&amp;quot;` $f ; done | more&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(all one line, one command)&lt;br /&gt;
&lt;br /&gt;
And then edit those /etc/crontabs and comment them out again.&lt;br /&gt;
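Re-commenting those crontab lines can itself be scripted. A minimal sketch (demonstrated on a demo copy so it can run anywhere; on a real system you would point it at each jail crontab flagged by the loop above):

```shell
# Hedged sketch: re-comment any uncommented "periodic" lines in a jail's
# /etc/crontab. Shown against a demo file built here for illustration.
crontab=/tmp/demo-crontab
printf '%s\n' \
  '1      3       *       *       *       root    periodic daily' \
  '#15     4       *       *       6       root    periodic weekly' \
  > "$crontab"
# prefix "#" onto any active periodic daily/weekly/monthly entry
awk '/^[0-9].*periodic (daily|weekly|monthly)/ { print "#" $0; next } { print }' \
  "$crontab" > "$crontab.tmp" && mv "$crontab.tmp" "$crontab"
```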
&lt;br /&gt;
&lt;br /&gt;
== Problems un-mounting - and with mount_null’s ==&lt;br /&gt;
&lt;br /&gt;
If you cannot unmount a filesystem because it says the filesystem is busy, it is usually for one of these reasons:&lt;br /&gt;
&lt;br /&gt;
a) the jail is still running&lt;br /&gt;
&lt;br /&gt;
b) you are actually in that directory, even though the jail is stopped&lt;br /&gt;
&lt;br /&gt;
c) there are still dev, null_mount or linprocfs mount points mounted inside that directory.&lt;br /&gt;
&lt;br /&gt;
d) when trying to umount null_mounts with very long paths, you may get an error like "No such file or directory" - this is an OS bug where the directory name is truncated. No known fix.&lt;br /&gt;
&lt;br /&gt;
e) there are still files open somewhere inside the dir. Use &amp;lt;tt&amp;gt;fstat | grep &amp;lt;cid&amp;gt;&amp;lt;/tt&amp;gt; to find the process that has files open&lt;br /&gt;
&lt;br /&gt;
f) Starting with 6.x, the jail mechanism does a poor job of keeping track of processes running in a jail, and if it thinks there are still procs running, it will refuse to umount the disk. If this is happening you should see a low number in the #REF column when you run jls. In this case you ''can'' safely &amp;lt;tt&amp;gt;umount -f&amp;lt;/tt&amp;gt; the mount.&lt;br /&gt;
&lt;br /&gt;
Please note - if you forcibly unmount a (4.x) filesystem that has null_mounts&lt;br /&gt;
still mounted in it, the system '''will crash''' within 10-15 minutes.&lt;br /&gt;
&lt;br /&gt;
== Misc jail Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space on jail2, jail8, jail9, a couple jails on jail17, jail4, jail12 and jail18.&lt;br /&gt;
Even though the vn file shows 4G size, it doesn't actually occupy that amount of space on the disk. So be careful not to fill up drives where we're overselling - use oversellcheck to confirm you're not oversold by more than 10G.&lt;br /&gt;
There are other truncated jails; they are generally noted in the file /root/truncated on the root system.&lt;br /&gt;
&lt;br /&gt;
The act of moving a truncated vn to another system undoes the truncation - the truncated vn is filled with 0's and comes to occupy the full physical disk space for which it's configured. So you should use dumpremote to preserve the truncation.&lt;br /&gt;
&lt;br /&gt;
* if you are getting disk full messages for a BSD customer, it&#039;s fairly safe to clear out their /usr/ports/distfiles dir&lt;br /&gt;
* 4.x: ps and top can only be run by root in these jails.  This is done on purpose:&lt;br /&gt;
As for `ps` and `top` - non-root users can run them, just not successfully, because we have locked the permissions on /dev/mem and /dev/kmem to be root-readable only.  That is why non-root users cannot successfully run `ps` and `top`.&lt;br /&gt;
* user quotas do not work on freebsd jails - you cannot set up quotas at all, and that&#039;s that.&lt;br /&gt;
* You cannot inject a process into a 4.x jail, only HUP running processes. From 6.x onward you can, with jexec&lt;br /&gt;
* jails see the base machine's uptime/load when running top/w&lt;br /&gt;
* if someone is unable to get in - can't ping, etc. - see if they were blocked by castle (DoS), and see if their IP is on the system (post-reboot it may have been lost because it wasn't in rc.conf). preboot should catch that.&lt;br /&gt;
* in FreeBSD you can't su to root unless you belong to the wheel group - so if you remove your account and set up a new one, we have to add it to the wheel group (add to /etc/group)&lt;br /&gt;
* dmesg output from the underlying system is seen in the customer's dmesg inside a jail&lt;br /&gt;
* A popper process going crazy at 40% CPU for 10 minutes usually means someone who leaves mail on the server&lt;br /&gt;
* Don't force umounts on 4.x jails - it crashes the machine; this is generally OK on newer machines (running md)&lt;br /&gt;
* Good book for admin http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?userid=t824VyRAYz&amp;amp;isbn=0596005164&amp;amp;itm=2&lt;br /&gt;
* Self-sign ssl cert http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#selfcert&lt;br /&gt;
* conversation with Glenn about semaphores and pgsql&lt;br /&gt;
&amp;lt;pre&amp;gt;SDBoody: hey, these are valid amounts/figures/increments right:&lt;br /&gt;
 kern.ipc.semmni=1280&lt;br /&gt;
 kern.ipc.semmns=1280&lt;br /&gt;
 &lt;br /&gt;
gr8feen: probably... I always forget exactly what those are, so I usually have to look them up&lt;br /&gt;
SDBoody: semaphores&lt;br /&gt;
SDBoody: i took the current 1024 and added 256 to them&lt;br /&gt;
SDBoody: need more for pgsql&lt;br /&gt;
gr8feen: I meant the mni and mns parts...&lt;br /&gt;
gr8feen: some of those are not ones you just want to add to...   hang on a sec and I&#039;ll look them up..&lt;br /&gt;
gr8feen: what&#039;s semmsl set to?&lt;br /&gt;
SDBoody: kern.ipc.semmsl: 1024&lt;br /&gt;
 &lt;br /&gt;
SDBoody: kern.ipc.msgseg: 2048 &lt;br /&gt;
 kern.ipc.msgssz: 8 &lt;br /&gt;
 kern.ipc.msgtql: 40 &lt;br /&gt;
 kern.ipc.msgmnb: 2048 &lt;br /&gt;
 kern.ipc.msgmni: 40 &lt;br /&gt;
 kern.ipc.msgmax: 16384 &lt;br /&gt;
 kern.ipc.semaem: 16384 &lt;br /&gt;
 kern.ipc.semvmx: 65534 &lt;br /&gt;
 kern.ipc.semusz: 152 &lt;br /&gt;
 kern.ipc.semume: 10 &lt;br /&gt;
 kern.ipc.semopm: 100 &lt;br /&gt;
 kern.ipc.semmsl: 1024 &lt;br /&gt;
 kern.ipc.semmnu: 512 &lt;br /&gt;
 kern.ipc.semmns: 1024 &lt;br /&gt;
 kern.ipc.semmni: 1024 &lt;br /&gt;
 kern.ipc.semmap: 768 &lt;br /&gt;
 kern.ipc.shm_allow_removed: 0 &lt;br /&gt;
 kern.ipc.shm_use_phys: 1 &lt;br /&gt;
 kern.ipc.shmall: 262144 &lt;br /&gt;
 kern.ipc.shmseg: 256 &lt;br /&gt;
 kern.ipc.shmmni: 784 &lt;br /&gt;
 kern.ipc.shmmin: 1 &lt;br /&gt;
 kern.ipc.shmmax: 536870912 &lt;br /&gt;
 kern.ipc.maxsockets: 25600 &lt;br /&gt;
 &lt;br /&gt;
gr8feen: ok...msl is max per id, mni is max ids, mns is max number of semaphores... so you probably want something like mns = mni * msl&lt;br /&gt;
gr8feen: which one did you run out of?&lt;br /&gt;
SDBoody: not sure how to tell- ipcs shows the sems in use add up to 1024&lt;br /&gt;
SDBoody: there are 59 entries&lt;br /&gt;
gr8feen: I&#039;m assuming you tried to start postgres and it failed?&lt;br /&gt;
SDBoody: yes&lt;br /&gt;
gr8feen: it should have logged why, somewhere..&lt;br /&gt;
gr8feen: if I recall, it&#039;ll tell you which one it ran out of&lt;br /&gt;
SDBoody: &amp;gt; DETAIL:  Failed system call was semget(1, 17, 03600).&lt;br /&gt;
 &lt;br /&gt;
gr8feen: so it wanted an id with 17 semaphores...I&#039;d start by making mns = 17*mni and leave mni and mnl set to what they are now and see what it does&lt;br /&gt;
SDBoody: i think mni is plenty high&lt;br /&gt;
SDBoody: ok, more reasonable 17408&lt;br /&gt;
gr8feen: yeah...just change that one and see how it goes..&lt;br /&gt;
SDBoody: and leave mni alone at 1024?&lt;br /&gt;
gr8feen: yeah...mni id the max number of ids...but if it&#039;s trying to get something like 17 per id, your going to hit mns before you hit anything else&lt;br /&gt;
SDBoody: right, but doesn&#039;t hurt to have it that high (assuming)&lt;br /&gt;
&lt;br /&gt;
gr8feen: not really...  I think those get allocated out of ram that you cant page out, but it&#039;s still such a small amount that it really doesn&#039;t matter&lt;br /&gt;
SDBoody: looks like that worked, thx!&lt;br /&gt;
gr8feen: cool&lt;br /&gt;
&lt;br /&gt;
see semaphores: ipcs -a -s&amp;lt;/pre&amp;gt;&lt;br /&gt;
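The sizing rule from that conversation - total semaphores (semmns) should cover the maximum number of sets (semmni) times the largest set a consumer requests - works out as below. The 17-per-set figure comes from the semget call in the postgres error quoted above:

```shell
# semget(1, 17, 03600) wanted a set of 17 semaphores, so size
# kern.ipc.semmns to cover 17 per set across all semmni sets.
semmni=1024
per_set=17
semmns=$((semmni * per_set))
echo "kern.ipc.semmns=$semmns"
# On the base machine this would then be applied with:
#   sysctl kern.ipc.semmns=17408
```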
&lt;br /&gt;
= FreeBSD VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
These files are located in /usr/local/jail/rc.d and /usr/local/jail/bin&lt;br /&gt;
&lt;br /&gt;
== jailmake ==&lt;br /&gt;
&lt;br /&gt;
Applies to 7.x+ &lt;br /&gt;
On older systems the syntax differs; run jailmake with no arguments to see it.&lt;br /&gt;
&lt;br /&gt;
Note: this procedure differs on mx2 which is 7.x but still uses gvinum&lt;br /&gt;
&lt;br /&gt;
#	run js to figure out which md's are in use, which disk has enough space, and which IP to put it on&lt;br /&gt;
#	use col00xxx for both hostnames if they don't give you a hostname&lt;br /&gt;
#	copy over the dir, IP and password to the pending customer screen&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;Usage: jailmake IP[,IP] CID disk[1|2|3] md# hostname shorthost ipfw# email [size in GB]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex: &lt;br /&gt;
&lt;br /&gt;
 Jail2# jailmake 69.55.234.66 col01334 3 97 vps.bsd.it vps 1334 fb@bsd.it&lt;br /&gt;
&lt;br /&gt;
== jailps ==&lt;br /&gt;
 jailps [hostname]&lt;br /&gt;
DEPRECATED FOR jps: displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname of the jail you wish to query. If you don’t &lt;br /&gt;
supply an argument, all processes on the machine are listed and grouped by jail. &lt;br /&gt;
&lt;br /&gt;
== jps ==&lt;br /&gt;
 jps [hostname]&lt;br /&gt;
displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname or ID of the jail you wish to query. &lt;br /&gt;
&lt;br /&gt;
== jailkill ==&lt;br /&gt;
 jailkill &amp;lt;hostname&amp;gt;&lt;br /&gt;
stops all process running in a jail.&lt;br /&gt;
&lt;br /&gt;
You can also run:&lt;br /&gt;
 jailkill &amp;lt;JID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== problems ===&lt;br /&gt;
Occasionally you will hit an issue where jail will not kill off:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailkill www.domain.com&lt;br /&gt;
www.domain.com .. killed: none&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
That is because no processes are running under that hostname.  You cannot use jailps.pl either:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailps www.domain.com&lt;br /&gt;
www.domain.com doesn’t exist on this server&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reasons for this are usually:&lt;br /&gt;
* the jail is no longer running&lt;br /&gt;
&lt;br /&gt;
* the jail&#039;s hostname has changed&lt;br /&gt;
In this case, &lt;br /&gt;
&lt;br /&gt;
&amp;gt;=6.x: run a &amp;lt;tt&amp;gt;jls|grep &amp;lt;jail&#039;s IP&amp;gt;&amp;lt;/tt&amp;gt; to find the correct hostname, then update the quad file, then kill the jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;6.x: the first step is to cat their /etc/rc.conf file to see if you can tell what they set the new hostname to.  This very often works.  For example:&lt;br /&gt;
&lt;br /&gt;
 cat /mnt/data2/198.78.65.136-col00261-DIR/etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
But maybe they set the hostname with the hostname command, and the original hostname is still in /etc/rc.conf.&lt;br /&gt;
&lt;br /&gt;
The welcome email clearly states that they should tell us if they change their hostname, so there is no problem in just emailing them and asking them what they set the new hostname to.&lt;br /&gt;
&lt;br /&gt;
Once you know the new hostname OR if a customer simply emails to inform you that they have set the hostname to something different, you need to edit the quad and safe files that their system is in to input the new hostname.&lt;br /&gt;
&lt;br /&gt;
However, if push comes to shove and you cannot find out the hostname from them or from their system, then you need to start doing some detective work.&lt;br /&gt;
&lt;br /&gt;
The easiest thing to do is run jailps looking for a hostname similar to their original hostname. Or you could get into the /bin/sh shell by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then looking at every hostname of every process:&lt;br /&gt;
&lt;br /&gt;
 for f in `ls /proc` ; do cat /proc/$f/status ; done&lt;br /&gt;
&lt;br /&gt;
and scanning for a hostname that is either similar to their original hostname, or that you don&#039;t see in any of the quad safe files.&lt;br /&gt;
&lt;br /&gt;
This is very brute force though, and it is possible that catting every file in /proc is dangerous - I don&#039;t recommend it.  A better thing would be to identify any processes that you know belong to this system – perhaps the reason you are trying to find this system is because they are running something bad - and just catting the status from only that PID.&lt;br /&gt;
&lt;br /&gt;
Somewhere there's a jail where there may be 2 systems named www. Look at each /etc/rc.conf and make sure they're both really www. If they are, jailkill www, then jailps www to make sure it is no longer running. Then immediately restart the other one, using its fqdn (as found from a reverse nslookup).&lt;br /&gt;
&lt;br /&gt;
* on &amp;gt;=6.x the hostname may not yet be hashed:&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9 /# jls&lt;br /&gt;
 JID Hostname                    Path                                  IP Address(es)&lt;br /&gt;
   1 bitnet.dgate.org            /mnt/data1/69.55.232.50-col02094-DIR  69.55.232.50&lt;br /&gt;
   2 ns3.hctc.net                /mnt/data1/69.55.234.52-col01925-DIR  69.55.234.52&lt;br /&gt;
   3 bsd1                        /mnt/data1/69.55.232.44-col00155-DIR  69.55.232.44&lt;br /&gt;
   4 let2.bbag.org               /mnt/data1/69.55.230.92-col00202-DIR  69.55.230.92&lt;br /&gt;
   5 post.org                    /mnt/data2/69.55.232.51-col02095-DIR  69.55.232.51 ...&lt;br /&gt;
   6 ns2                         /mnt/data1/69.55.232.47-col01506-DIR  69.55.232.47 ...&lt;br /&gt;
   7 arlen.server.net            /mnt/data1/69.55.232.52-col01171-DIR  69.55.232.52&lt;br /&gt;
   8 deskfood.com                /mnt/data1/69.55.232.71-col00419-DIR  69.55.232.71&lt;br /&gt;
   9 mirage.confluentforms.com   /mnt/data1/69.55.232.54-col02105-DIR  69.55.232.54 ...&lt;br /&gt;
  10 beachmember.com             /mnt/data1/69.55.232.59-col02107-DIR  69.55.232.59&lt;br /&gt;
  11 www.agottem.com             /mnt/data1/69.55.232.60-col02109-DIR  69.55.232.60&lt;br /&gt;
  12 sdhobbit.myglance.org       /mnt/data1/69.55.236.82-col01708-DIR  69.55.236.82&lt;br /&gt;
  13 ns1.jnielsen.net            /mnt/data1/69.55.234.48-col00204-DIR  69.55.234.48 ...&lt;br /&gt;
  14 ymt.rollingegg.net          /mnt/data2/69.55.236.71-col01678-DIR  69.55.236.71&lt;br /&gt;
  15 verse.unixlore.net          /mnt/data1/69.55.232.58-col02131-DIR  69.55.232.58&lt;br /&gt;
  16 smcc-mail.org               /mnt/data2/69.55.232.68-col02144-DIR  69.55.232.68&lt;br /&gt;
  17 kasoutsuki.w4jdh.net        /mnt/data2/69.55.232.46-col02147-DIR  69.55.232.46&lt;br /&gt;
  18 dili.thium.net              /mnt/data2/69.55.232.80-col01901-DIR  69.55.232.80&lt;br /&gt;
  20 www.tekmarsis.com           /mnt/data2/69.55.232.66-col02155-DIR  69.55.232.66&lt;br /&gt;
  21 vps.yoxel.net               /mnt/data2/69.55.236.67-col01673-DIR  69.55.236.67&lt;br /&gt;
  22 smitty.twitalertz.com       /mnt/data2/69.55.232.84-col02153-DIR  69.55.232.84&lt;br /&gt;
  23 deliver4.klatha.com         /mnt/data2/69.55.232.67-col02160-DIR  69.55.232.67&lt;br /&gt;
  24 nideffer.com                /mnt/data2/69.55.232.65-col00412-DIR  69.55.232.65&lt;br /&gt;
  25 usa.hanyuan.com             /mnt/data2/69.55.232.57-col02163-DIR  69.55.232.57&lt;br /&gt;
  26 daifuku.ppbh.com            /mnt/data2/69.55.236.91-col01720-DIR  69.55.236.91&lt;br /&gt;
  27 collins.greencape.net       /mnt/data2/69.55.232.83-col01294-DIR  69.55.232.83&lt;br /&gt;
  28 ragebox.com                 /mnt/data2/69.55.230.104-col01278-DIR 69.55.230.104&lt;br /&gt;
  29 outside.mt.net              /mnt/data2/69.55.232.72-col02166-DIR  69.55.232.72&lt;br /&gt;
  30 vps.payneful.ca             /mnt/data2/69.55.234.98-col01999-DIR  69.55.234.98&lt;br /&gt;
  31 higgins                     /mnt/data2/69.55.232.87-col02165-DIR  69.55.232.87 ...&lt;br /&gt;
  32 ozymandius                  /mnt/data2/69.55.228.96-col01233-DIR  69.55.228.96&lt;br /&gt;
  33 trusted.realtors.org        /mnt/data2/69.55.238.72-col02170-DIR  69.55.238.72&lt;br /&gt;
  34 jc1.flanderous.com          /mnt/data2/69.55.239.22-col01504-DIR  69.55.239.22&lt;br /&gt;
  36 guppylog.com                /mnt/data2/69.55.238.73-col00036-DIR  69.55.238.73&lt;br /&gt;
  40 haliohost.com               /mnt/data2/69.55.234.41-col01916-DIR  69.55.234.41 ...&lt;br /&gt;
  41 satyr.jorge.cc              /mnt/data1/69.55.232.70-col01963-DIR  69.55.232.70&lt;br /&gt;
jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
ERROR: jail_: jail &amp;quot;satyr,jorge,cc&amp;quot; not found&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note how it&#039;s saying &amp;lt;tt&amp;gt;satyr,jorge,cc&amp;lt;/tt&amp;gt; is not found, and not &amp;lt;tt&amp;gt;satyr.jorge.cc&amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The jail subsystem tracks jails using comma-delimited hostnames. The mapping is rebuilt every few hours from cron:&lt;br /&gt;
&lt;br /&gt;
 jail9 /# crontab -l&lt;br /&gt;
 0 0,6,12,18 * * * /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
So if we run this manually:&lt;br /&gt;
 jail9 /# /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
Then kill the jail:&lt;br /&gt;
 jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
 successfully killed: satyr,jorge,cc&lt;br /&gt;
&lt;br /&gt;
It worked.&lt;br /&gt;
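The dot-to-comma mapping is a straight character substitution, so when you need to predict the internal jail name yourself, the conversion is just:

```shell
# The jail subsystem's internal name replaces dots with commas.
hostname="satyr.jorge.cc"
jailname=$(printf '%s' "$hostname" | tr '.' ',')
echo "$jailname"
```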
&lt;br /&gt;
&lt;br /&gt;
If you ever see this when trying to kill a jail:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# jailkill e-scribe.com&lt;br /&gt;
killing JID: 6 hostname: e-scribe.com&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailkill|jailkill]]&amp;lt;/tt&amp;gt; probably got lost trying to kill off the jail. Just ctrl-c the jailkill process, then run a jailps on the hostname, and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; any process which is still running. Keep running jailps and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; till all processes are gone.&lt;br /&gt;
&lt;br /&gt;
== jailpsall ==&lt;br /&gt;
 jailpsall&lt;br /&gt;
will run a jailps on all jails configured in the quad files (this is different from&lt;br /&gt;
jailps with no arguments as it won’t help you find a “hidden” system)&lt;br /&gt;
&lt;br /&gt;
== jailpsw ==&lt;br /&gt;
 jailpsw&lt;br /&gt;
will run a jailps with an extra -w to provide wider output&lt;br /&gt;
&lt;br /&gt;
== jt (&amp;gt;=7.x) ==&lt;br /&gt;
 jt&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;gt;=7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
a wrapper for top displaying processes on the server and which jail owns them. Constantly updates, like top. &lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;lt;7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== stopjail ==&lt;br /&gt;
 stopjail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will jailkill, umount and vnconfig -u a jail. If passed an optional 2nd&lt;br /&gt;
argument, it will not exit before umounting and un-vnconfig'ing in the event&lt;br /&gt;
jailkill returns no processes killed. This is useful if you just want to umount&lt;br /&gt;
and vnconfig -u a jail you've already killed. It is intelligent in that it won't&lt;br /&gt;
try to umount or vnconfig -u if it's not necessary.&lt;br /&gt;
&lt;br /&gt;
== startjail ==&lt;br /&gt;
 startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
this will start vnconfig, mount (including linprocfs and null-mounts), and start a jail.&lt;br /&gt;
Essentially, it reads the jail’s relevant block from the right quad file and executes it.&lt;br /&gt;
It is intelligent in that it won’t try to mount or vnconfig if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== jpid ==&lt;br /&gt;
 jpid &amp;lt;pid&amp;gt;&lt;br /&gt;
displays information about a process – including which jail owns it.&lt;br /&gt;
It’s the equivalent of running cat /proc/&amp;lt;pid&amp;gt;/status&lt;br /&gt;
&lt;br /&gt;
== canceljail ==&lt;br /&gt;
 canceljail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will stop a jail (the equivalent of stopjail), check for backups (offer to remove them &lt;br /&gt;
from the backup server and the backup.config), rename the vnfile, remove the dir, and &lt;br /&gt;
edit quad/safe. If passed an optional 2nd argument, it will not exit upon failing to kill&lt;br /&gt;
and processes owned by the jail. This is useful if you just want to cancel a jail which &lt;br /&gt;
is already stopped.&lt;br /&gt;
&lt;br /&gt;
== jls ==&lt;br /&gt;
 jls [-v]&lt;br /&gt;
Lists all jails running:&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF IP Address      Hostname                     Path&lt;br /&gt;
 101  135 69.55.224.148   mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#REF is the number of references or procs(?) running&lt;br /&gt;
&lt;br /&gt;
Running with -v will give you all IPs assigned to each jail (7.2 up)&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF Hostname                     Path                                  IP Address(es)&lt;br /&gt;
 101  139 mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR 69.55.224.148 69.55.234.85&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== startalljails ==&lt;br /&gt;
 startalljails&lt;br /&gt;
7.2+ only. This will parse through quad1 and start all jails. It utilizes lockfiles so it won’t try to start a jail more than once- therefore multiple instances can be running in parallel without fear of starting a jail twice. If a jail startup gets stuck, you can ^C without fear of killing the script. IMPORTANT- before running startalljails you should make sure you ran preboot once as it will clear out all the lockfiles and enable startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== aaccheck.sh ==&lt;br /&gt;
 aaccheck.sh&lt;br /&gt;
displays the output of container list and task list from aaccli&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update jail scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== buildsafe ==&lt;br /&gt;
 buildsafe&lt;br /&gt;
creates safe files based on quads (automatically removing the fsck’s). This will destructively overwrite safe files&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be setup as a cronjob to watch processes on a jail when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== checkprio.pl ==&lt;br /&gt;
 checkprio.pl&lt;br /&gt;
will look for any process (other than the current shell’s csh, sh, sshd procs) with a non-normal priority and normalize it&lt;br /&gt;
&lt;br /&gt;
== diskusagemon == &lt;br /&gt;
 diskusagemon &amp;lt;mount point&amp;gt; &amp;lt;1k blocks&amp;gt;&lt;br /&gt;
watches a mount point’s disk use, when it reaches the level specified in the 2nd argument,&lt;br /&gt;
it exits. This is useful when doing a restore and you want to be paged as it’s nearing completion.&lt;br /&gt;
Best used as: &amp;lt;tt&amp;gt;diskusagemon /asd/asd 1234; pagexxx&amp;lt;/tt&amp;gt;&lt;br /&gt;
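The behavior described above amounts to a poll loop over df. A hedged sketch follows - the real diskusagemon's implementation and polling interval are not documented here, so the function name and the 60-second default are assumptions:

```shell
# Poll a mount point's used 1k-blocks until they reach a target, then
# return - so that a chained command (e.g. "; pagexxx") fires.
watch_usage() {  # watch_usage <mount point> <1k blocks> [interval seconds]
    while :; do
        # -P gives POSIX one-line-per-fs output; column 3 is used blocks
        used=$(df -kP "$1" | awk 'NR==2 { print $3 }')
        [ "$used" -ge "$2" ] && break
        sleep "${3:-60}"
    done
}
```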
&lt;br /&gt;
== dumprestore ==&lt;br /&gt;
 dumprestore &amp;lt;dumpfile&amp;gt;&lt;br /&gt;
this is a perl expect script which automatically enters ‘1’ and ‘y’. It seems to cause restore to fail&lt;br /&gt;
to set owner permissions on large restores.&lt;br /&gt;
&lt;br /&gt;
== g ==&lt;br /&gt;
 g &amp;lt;search&amp;gt;&lt;br /&gt;
greps the quad/safe files for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gather.pl ==&lt;br /&gt;
 gather.pl&lt;br /&gt;
gathers up data about jails configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== ipfwbackup ==&lt;br /&gt;
 ipfwbackup&lt;br /&gt;
writes ipfw traffic count data to a logfile&lt;br /&gt;
&lt;br /&gt;
== ipfwreset ==&lt;br /&gt;
 ipfwreset&lt;br /&gt;
writes ipfw traffic count data to a logfile and resets counters to 0&lt;br /&gt;
&lt;br /&gt;
== js ==&lt;br /&gt;
 js&lt;br /&gt;
output varies by OS version, but generally provides information about the base jail:&lt;br /&gt;
- which vn’s are in use&lt;br /&gt;
- disk usage&lt;br /&gt;
- info about the contents of quads&lt;br /&gt;
- the # of inodes represented by the jails contained in the group (133.2 in the example below), and how many jails per data mount, as well as subtotals&lt;br /&gt;
- ips bound to the base machine but not in use by a jail&lt;br /&gt;
- free gvinum volumes, or unused vn’s or used md’s&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/local/jail/rc.d/quad1:&lt;br /&gt;
        /mnt/data1 133.2 (1)&lt;br /&gt;
        /mnt/data2 1040.5 (7)&lt;br /&gt;
        total 1173.7 (8)&lt;br /&gt;
/usr/local/jail/rc.d/quad2:&lt;br /&gt;
        /mnt/data1 983.4 (6)&lt;br /&gt;
        total 983.4 (6)&lt;br /&gt;
/usr/local/jail/rc.d/quad3:&lt;br /&gt;
        /mnt/data1 693.4 (4)&lt;br /&gt;
        /mnt/data2 371.6 (3)&lt;br /&gt;
        total 1065 (7)&lt;br /&gt;
/usr/local/jail/rc.d/quad4:&lt;br /&gt;
        /mnt/data1 466.6 (3)&lt;br /&gt;
        /mnt/data2 882.2 (5)&lt;br /&gt;
        total 1348.8 (8)&lt;br /&gt;
/mnt/data1: 2276.6 (14)&lt;br /&gt;
/mnt/data2: 2294.3 (15)&lt;br /&gt;
&lt;br /&gt;
Available IPs:&lt;br /&gt;
69.55.230.11 69.55.230.13 69.55.228.200&lt;br /&gt;
&lt;br /&gt;
Available volumes:&lt;br /&gt;
v78 /mnt/data2 2G&lt;br /&gt;
v79 /mnt/data2 2G&lt;br /&gt;
v80 /mnt/data2 2G&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg  - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== makevirginjail ==&lt;br /&gt;
 makevirginjail&lt;br /&gt;
Only on some systems, makes an empty jail (doesn&#039;t do restore step)&lt;br /&gt;
&lt;br /&gt;
== mb == &lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== notify.sh ==&lt;br /&gt;
 notify.sh&lt;br /&gt;
emails reboot@johncompanies.com – intended to be called at boot time to alert us to a machine which panics and reboots and isn’t caught by bb or castle.&lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatch ==&lt;br /&gt;
 orphanedbackupwatch&lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to delete them&lt;br /&gt;
&lt;br /&gt;
== postboot ==&lt;br /&gt;
 postboot&lt;br /&gt;
to be run after a machine reboot and quad/safe’s are done executing. It will:&lt;br /&gt;
* do chmod 666 on each jail’s /dev/null&lt;br /&gt;
* add ipfw counts&lt;br /&gt;
* run jailpsall (so you can see if a configured jail isn’t running)&lt;br /&gt;
&lt;br /&gt;
== preboot ==&lt;br /&gt;
 preboot&lt;br /&gt;
to be run before running quad/safe – checks for misconfigurations: &lt;br /&gt;
* a jail configured in a quad but not a safe&lt;br /&gt;
* a jail is listed more than once in a quad&lt;br /&gt;
* the ip assigned to a jail isn’t configured on the machine&lt;br /&gt;
* alias numbering skips in the rc.conf (resulting in the above)&lt;br /&gt;
* orphaned vnfile&#039;s that aren&#039;t mentioned in a quad/safe&lt;br /&gt;
* ip mismatches between dir/vnfile name and the jail’s ip&lt;br /&gt;
* dir/vnfiles&#039;s in quad/safe that don’t exist &lt;br /&gt;
&lt;br /&gt;
== quadanalyze.pl ==&lt;br /&gt;
 quadanalyze.pl&lt;br /&gt;
called by js, produces the info (seen above with js explanation) about the contents of quad (inode count, # of jails, etc.)&lt;br /&gt;
&lt;br /&gt;
== rsync.backup ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== taskdone ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was executed as the subject&lt;br /&gt;
&lt;br /&gt;
== topten ==&lt;br /&gt;
 topten&lt;br /&gt;
summarizes the top 10 traffic users (called by ipfwreset)&lt;br /&gt;
&lt;br /&gt;
== trafficgather.pl ==&lt;br /&gt;
 trafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by jail to support@johncompanies.com and payments@johncompanies.com. Optional arguments are year and month (must be in the past). If not passed, it assumes last month. Relies on the traffic logs created by ipfwreset and ipfwbackup&lt;br /&gt;
&lt;br /&gt;
== trafficwatch.pl ==&lt;br /&gt;
 trafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a jail reaches the warning level (35G) and the limit (40G). We really aren't using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== trafstats ==&lt;br /&gt;
 trafstats&lt;br /&gt;
writes ipfw traffic usage info by jail to a file called jc_traffic_dump in each jail’s / dir&lt;br /&gt;
&lt;br /&gt;
== truncate_jailmake ==&lt;br /&gt;
 truncate_jailmake&lt;br /&gt;
a version of jailmake which creates truncated vnfiles.&lt;br /&gt;
&lt;br /&gt;
== vb ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/bin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs (freebsd) ==&lt;br /&gt;
 vs&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/safe&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vq (freebsd) ==&lt;br /&gt;
 vq&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/quad&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dumpremote ==&lt;br /&gt;
 dumpremote &amp;lt;user@machine&amp;gt; &amp;lt;/remote/location/file-dump&amp;gt; &amp;lt;vnX&amp;gt;&lt;br /&gt;
ex: dumpremote user@10.1.4.117 /mnt/data3/remote.echoditto.com-dump 7&lt;br /&gt;
this will dump a vn filesystem to a remote machine and location&lt;br /&gt;
&lt;br /&gt;
== oversellcheck ==&lt;br /&gt;
 oversellcheck&lt;br /&gt;
displays how much a disk is oversold or undersold taking into account truncated vn files. Only for use on 4.x systems&lt;br /&gt;
&lt;br /&gt;
== mvbackups (freebsd) ==&lt;br /&gt;
 mvbackups &amp;lt;dir&amp;gt; (1.1.1.1-col00001-DIR) &amp;lt;target_machine&amp;gt; (jail1) &amp;lt;target_dir&amp;gt; (data1)&lt;br /&gt;
moves backups from one location to another on the backup server, and provides you with option to remove entries from current backup.config, and simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== jailnice ==&lt;br /&gt;
 jailnice &amp;lt;hostname&amp;gt;&lt;br /&gt;
applies &amp;lt;tt&amp;gt;renice 19 [PID]&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;rtprio 31 -[PID]&amp;lt;/tt&amp;gt; to each process in the given jail&lt;br /&gt;
&lt;br /&gt;
== dumpremoterestore ==&lt;br /&gt;
 dumpremoterestore &amp;lt;device&amp;gt; &amp;lt;ip of target machine&amp;gt; &amp;lt;dir on target machine&amp;gt;&lt;br /&gt;
ex: dumpremoterestore /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&lt;br /&gt;
dumps a device and restores it to a directory on a remote machine. Requires that you enable root ssh on the &lt;br /&gt;
remote machine.&lt;br /&gt;
&lt;br /&gt;
== psj ==&lt;br /&gt;
 psj&lt;br /&gt;
shows just the procs running on the base system – a ps auxw but without jail’d procs present&lt;br /&gt;
&lt;br /&gt;
== perc5iraidchk ==&lt;br /&gt;
 perc5iraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2950 systems with Perc5/6 controllers&lt;br /&gt;
&lt;br /&gt;
== perc4eraidchk ==&lt;br /&gt;
 perc4eraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2850 systems with Perc4e/Di controllers&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS =&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vm) ==&lt;br /&gt;
&lt;br /&gt;
This applies only to new virts &amp;gt;= 4.x&lt;br /&gt;
&lt;br /&gt;
grab an ip from ipmap (if opened from the pending cust screen it should take you to the right block). You can also run vzlist -a to see what block is in use, generally. Try to find an IP that&#039;s in the same class-C block as IPs already on the box.&lt;br /&gt;
&lt;br /&gt;
1. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
  also, ping the IP address.&lt;br /&gt;
&lt;br /&gt;
2. put CT on whichever partition has more space&lt;br /&gt;
  df -h&lt;br /&gt;
&lt;br /&gt;
3.  vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; ) &lt;br /&gt;
 vm col00009 69.55.230.238 centos.testdave.com /vz1 dsmith@johncompanies.com centos-6-x86_64 LM&lt;br /&gt;
&lt;br /&gt;
4. copy veid, dir, ip and password to pending customer screen. activate customer&lt;br /&gt;
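&lt;br /&gt;
Step 2 above (putting the CT on whichever partition has more space) can be scripted rather than eyeballed from df -h. A minimal sketch - pick_vz and the sample df output are made up for illustration; on a real virt you would feed it live df -k output:&lt;br /&gt;
&lt;br /&gt;
```shell
# Hypothetical helper: of /vz1 and /vz2, print the one with more free space.
pick_vz() {
  # reads df -k style lines on stdin: fs blocks used avail use% mountpoint
  awk '$6 == "/vz1" || $6 == "/vz2" { if ($4 > max) { max = $4; best = $6 } }
       END { print best }'
}

df_sample='/dev/sda3 476940288 301122332 175817956 64% /vz1
/dev/sdb1 476940288 112233445 364706843 24% /vz2'

printf '%s\n' "$df_sample" | pick_vz   # prints the emptier partition
```
On a box with only /vz, the same idea applies with that mount point substituted.&lt;br /&gt;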
&lt;br /&gt;
== Making new customer VE (vemakexxx) ==&lt;br /&gt;
&lt;br /&gt;
This applies to older virts with old templates. This should probably not be used at all anymore.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. look thru hist for ip&lt;br /&gt;
&lt;br /&gt;
2. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
&lt;br /&gt;
3. put ve on whichever partition has more space&lt;br /&gt;
 vemakerh9 &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; &amp;lt;hostname&amp;gt; &amp;lt;mount&amp;gt; &amp;lt;email&amp;gt; [gb disk]; &amp;lt;256|384|512&amp;gt; &amp;lt;veid&amp;gt;&lt;br /&gt;
 vemakerh9 866 69.55.226.109 ngentu.com /vz1 ayo@ngantu.com,asd@asd.com 5; 256 866&lt;br /&gt;
&lt;br /&gt;
4. copy (veid), dir, and ip to pending customer screen (pass set to p455agfa)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: We use VEID (Virtual Environment ID) and CTID (Container ID) interchangeably. Similarly, VE and CT. They mean the same thing.&lt;br /&gt;
VZPP = VirtuoZzo Power Panel (the control panel for each CT)&lt;br /&gt;
&lt;br /&gt;
All linux systems exist in /vz, /vz1 or /vz2 - since each linux machine holds roughly 60-90 customers, there will be roughly 30-45 in each partition.&lt;br /&gt;
&lt;br /&gt;
The actual filesystem of the system in question is in:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/private/(VEID)&lt;br /&gt;
&lt;br /&gt;
Where VEID is the identifier for that system - an all-numeric string larger than 100.&lt;br /&gt;
&lt;br /&gt;
The actual mounted and running systems are in the corresponding:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/root/(VEID)&lt;br /&gt;
&lt;br /&gt;
But we rarely interact with any system from this mount point.&lt;br /&gt;
&lt;br /&gt;
You should never need to touch the root portion of their system – however you can traverse their filesystem by going to &amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/root&amp;lt;/tt&amp;gt; (&amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/fs/root&amp;lt;/tt&amp;gt; on 4.x systems). The root of their filesystem is in that directory, and their entire system is underneath it.&lt;br /&gt;
&lt;br /&gt;
Every VE has a startup script in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts&amp;lt;/tt&amp;gt;  (which is symlinked as &amp;lt;tt&amp;gt;/vzconf&amp;lt;/tt&amp;gt; on all systems) - the VE startup script is simply named &amp;lt;tt&amp;gt;(VEID).conf&amp;lt;/tt&amp;gt; - it contains all the system parameters for that VE:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Configuration file generated by vzsplit for 60 VE&lt;br /&gt;
# on HN with total amount of physical mem 2011 Mb&lt;br /&gt;
&lt;br /&gt;
VERSION=&amp;quot;2&amp;quot;&lt;br /&gt;
CLASSID=&amp;quot;2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ONBOOT=&amp;quot;yes&amp;quot;&lt;br /&gt;
&lt;br /&gt;
KMEMSIZE=&amp;quot;8100000:8200000&amp;quot;&lt;br /&gt;
LOCKEDPAGES=&amp;quot;322:322&amp;quot;&lt;br /&gt;
PRIVVMPAGES=&amp;quot;610000:615000&amp;quot;&lt;br /&gt;
SHMPAGES=&amp;quot;33000:34500&amp;quot;&lt;br /&gt;
NUMPROC=&amp;quot;410:415&amp;quot;&lt;br /&gt;
PHYSPAGES=&amp;quot;0:2147483647&amp;quot;&lt;br /&gt;
VMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
OOMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
NUMTCPSOCK=&amp;quot;1210:1215&amp;quot;&lt;br /&gt;
NUMFLOCK=&amp;quot;107:117&amp;quot;&lt;br /&gt;
NUMPTY=&amp;quot;19:19&amp;quot;&lt;br /&gt;
NUMSIGINFO=&amp;quot;274:274&amp;quot;&lt;br /&gt;
TCPSNDBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
TCPRCVBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
OTHERSOCKBUF=&amp;quot;900000:950000&amp;quot;&lt;br /&gt;
DGRAMRCVBUF=&amp;quot;200000:200000&amp;quot;&lt;br /&gt;
NUMOTHERSOCK=&amp;quot;650:660&amp;quot;&lt;br /&gt;
DCACHE=&amp;quot;786432:818029&amp;quot;&lt;br /&gt;
NUMFILE=&amp;quot;7500:7600&amp;quot;&lt;br /&gt;
AVNUMPROC=&amp;quot;51:51&amp;quot;&lt;br /&gt;
IPTENTRIES=&amp;quot;155:155&amp;quot;&lt;br /&gt;
DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
DISKINODES=&amp;quot;400000:420000&amp;quot;&lt;br /&gt;
CPUUNITS=&amp;quot;1412&amp;quot;&lt;br /&gt;
QUOTAUGIDLIMIT=&amp;quot;2000&amp;quot;&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/636&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/636&amp;quot;&lt;br /&gt;
NAMESERVER=&amp;quot;69.55.225.225 69.55.230.3&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;vzredhat-7.3/20030305&amp;quot;&lt;br /&gt;
VE_TYPE=&amp;quot;regular&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.225.229&amp;quot;&lt;br /&gt;
HOSTNAME=&amp;quot;textengine.net&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the hostname is set here, the disk space is set here, the number of inodes, the number of files that can be open, the number of tcp sockets, etc. - all are set here.&lt;br /&gt;
&lt;br /&gt;
In fact, everything that can be set on this customer system is set in this conf file.&lt;br /&gt;
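&lt;br /&gt;
Since the conf file is plain KEY=&amp;quot;value&amp;quot; lines, pulling a single setting out is easy to script. A hedged sketch - conf_get is a hypothetical helper, and the sample conf is abbreviated from the listing above:&lt;br /&gt;
&lt;br /&gt;
```shell
# Hypothetical helper: print the value of one KEY="value" parameter from a
# VE conf file supplied on stdin.
conf_get() {
  # $1 = parameter name, e.g. DISKSPACE
  sed -n "s/^$1=\"\(.*\)\"/\1/p"
}

sample_conf='ONBOOT="yes"
DISKSPACE="4194304:4613734"
HOSTNAME="textengine.net"'

printf '%s\n' "$sample_conf" | conf_get DISKSPACE   # prints 4194304:4613734
```
In practice you would run something like conf_get DISKSPACE &amp;lt; /vzconf/999.conf.&lt;br /&gt;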
&lt;br /&gt;
&lt;br /&gt;
All interaction with the customer system is done with the VEID.  You start the system by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl start 999&lt;br /&gt;
&lt;br /&gt;
You stop it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl stop 999&lt;br /&gt;
&lt;br /&gt;
You execute commands in it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl exec 999 df -k&lt;br /&gt;
&lt;br /&gt;
You enter into it, via a root-shell backdoor with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and you set parameters for the system, while it is still running, with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; is the most commonly used command - we have aliased &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; since we use it so often. We’ll continue to use &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; in our examples, but feel free to use just &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say the user wants more diskspace.  You can cat their conf file and see:&lt;br /&gt;
&lt;br /&gt;
 DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So right now they have 4gigs of space.  You can then change it to 6 with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
IMPORTANT:  all issuances of the vzctl set command need to end with &amp;lt;tt&amp;gt;--save&amp;lt;/tt&amp;gt; - if they don&#039;t, the setting will be set, but it will not be saved to the conf file, and they will not have those settings next time they boot.&lt;br /&gt;
&lt;br /&gt;
All of the tunables in the conf file can be set with the vzctl set command.  Note that in the conf file, and on the vzctl set command line, we always issue two numbers separated by a colon - that is because we are setting the soft and hard limits.  Always set the hard limit slightly above the soft limit, as you see it is in the conf file for all those settings.&lt;br /&gt;
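&lt;br /&gt;
For reference, DISKSPACE is counted in 1 KB blocks, so the 4194304 soft limit above is exactly 4 * 1024 * 1024 blocks, i.e. 4 gigs, and its hard limit sits 10% above that. A sketch - diskspace_pair is a made-up helper, and the 10% soft-to-hard gap is just the pattern from the example conf, not a rule:&lt;br /&gt;
&lt;br /&gt;
```shell
# Sketch: turn a size in gigs into a DISKSPACE soft:hard pair (1 KB blocks).
# The 10% soft-to-hard gap mirrors the example conf; it is not a hard rule.
diskspace_pair() {
  gb=$1
  soft=$((gb * 1024 * 1024))   # 1 KB blocks per gig
  hard=$((soft + soft / 10))   # hard limit 10% above the soft limit
  echo "${soft}:${hard}"
}

diskspace_pair 4   # prints 4194304:4613734, matching the conf above
```
You could then do, e.g., vzctl set 999 --diskspace $(diskspace_pair 6) --save.&lt;br /&gt;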
&lt;br /&gt;
There are also things you can set with `&amp;lt;tt&amp;gt;vzctl set&amp;lt;/tt&amp;gt;` that are not in the conf file as settings, per se.  For instance, you can add IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --save&lt;br /&gt;
&lt;br /&gt;
or multiple IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --ipadd 10.10.20.30 --save&lt;br /&gt;
&lt;br /&gt;
or change the hostname:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --hostname www.example.com --save&lt;br /&gt;
&lt;br /&gt;
You can even set the nameservers:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --nameserver 198.78.66.4 --nameserver 198.78.70.180 --save&lt;br /&gt;
&lt;br /&gt;
Although you probably will never do that.&lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --disabled yes --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (&amp;lt;=3.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --onboot=no --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from using his control panel:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --offline_management=no --save &lt;br /&gt;
&lt;br /&gt;
You can suspend a VPS, so it can be resumed in the same state it was in when it was stopped (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl suspend 999&lt;br /&gt;
&lt;br /&gt;
and to resume it:&lt;br /&gt;
&lt;br /&gt;
 vzctl resume 999&lt;br /&gt;
&lt;br /&gt;
to see who owns process:&lt;br /&gt;
 vzpid &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to mount up an unmounted ve:&lt;br /&gt;
 vzctl mount 827&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see network stats for CT&#039;s:&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One thing that sometimes comes up on older systems that we created with smaller defaults is that the system would run out of inodes.  The user will email and say they cannot create any more files or grow any files larger, but they will also say that they are not out of diskspace ... they are running:&lt;br /&gt;
&lt;br /&gt;
 df -k&lt;br /&gt;
&lt;br /&gt;
and seeing how much space is free - and they are not out of space.  They are most likely out of inodes - which they would see by running:&lt;br /&gt;
&lt;br /&gt;
 df -i&lt;br /&gt;
&lt;br /&gt;
So, the first thing you should do is enter their system with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and run:  &amp;lt;tt&amp;gt;df -i&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to confirm your theory.  Then exit their system.  Then simply cat their conf file and see what their inodes are set to (probably 200000:200000, since that was the old default on the older systems) and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskinodes 400000:400000 --save&lt;br /&gt;
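&lt;br /&gt;
The df -i check can also be reduced to a one-liner instead of reading the table by hand. A sketch - inode_hogs is a hypothetical helper, and the sample output stands in for what a real df -i inside the VE would return:&lt;br /&gt;
&lt;br /&gt;
```shell
# Sketch: flag filesystems whose inode usage (IUse%) is at/above a threshold.
inode_hogs() {
  # reads "df -i" style lines: filesystem inodes iused ifree iuse% mount
  awk -v limit="$1" 'NR > 1 { pct = $5; sub(/%/, "", pct)
                              if (pct + 0 >= limit) print $6, $5 }'
}

sample='Filesystem Inodes IUsed IFree IUse% Mounted
/dev/vzfs 200000 199988 12 100% /'

printf '%s\n' "$sample" | inode_hogs 95   # prints: / 100%
```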
&lt;br /&gt;
If they are not out of inodes, then a good possibility is that they have maxed out their numfile configuration variable, which controls how many files they can have in their system.  The current default is 7500 (which nobody has ever hit), but the old default was as low as 2000, so you would run something like:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --numfile 7500:7500 --save&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You cannot start or stop a VE if your pwd is its private (/vz/private/999) or root (/vz/root/999) directories, or anywhere below them.&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (linux) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all ve’s back up as soon as possible. Note the time; you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages output. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log – these will also likely need to be sent to virtuozzo for interpretation. If the messages are spewing too fast, hit ^O + H to start a screen log dump which you can observe after the machine is rebooted. Additionally, if the machine is responsive, you can get a trace to send to virtuozzo by hooking up a kvm and entering these 3 sequences:&lt;br /&gt;
&amp;lt;pre&amp;gt;alt+print screen+m&lt;br /&gt;
alt+print screen+p&lt;br /&gt;
alt+print screen+t&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If there are no messages, the machine may just be really busy- wait a bit (5-10min) to see if it comes back. If it&#039;s still pinging, odds are its very busy. If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. if you can’t ssh into the DRAC card and issue racadm serveraction hardreset), then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console (&amp;lt;tt&amp;gt;tip virtxx&amp;lt;/tt&amp;gt;) immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts- look out for the RAID card output- does everything look healthy? The output may be scrambled, look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple times during the boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly then you won’t get any console output, 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
=== Start containers/VE&#039;s/VPSs ===&lt;br /&gt;
When the machine begins to start VE’s, it’s safe to leave the console and login via ssh. All virts should be set to auto start all the VEs after a crash. Further, most (newer) virts are set to “fastboot” their VE’s (to find out, do:&lt;br /&gt;
 grep -i fast /etc/sysconfig/vz &lt;br /&gt;
and look for &amp;lt;tt&amp;gt;VZFASTBOOT=yes&amp;lt;/tt&amp;gt;). If this was set prior to the machine’s crash (setting it after the machine boots will not have any effect until the vz service is restarted), it will start each ve as fast as possible, in serial, then go thru each VE (serially), shutting it down, running a vzquota (disk usage) check, and bringing it back up. The benefit is that all VE’s are brought up quickly (within 15min or so depending on the #); the downside is that a customer watching closely will notice 2 outages – 1st the machine crash, 2nd their quota check (which will be a much shorter downtime - on the order of a few minutes). &lt;br /&gt;
&lt;br /&gt;
Where “fastboot” is not set to yes (i.e. on quar1), vz will start them consecutively, checking the quotas one at a time, and the 60th VE may not start until an hour or two later - this is not acceptable.&lt;br /&gt;
&lt;br /&gt;
The good news is, if you run vzctl start for a VE that is already started, you will simply get an error: &amp;lt;tt&amp;gt;VE is already started&amp;lt;/tt&amp;gt;.  Further, if you attempt to vzctl start a VE that is in the process of being started, you will simply get an error: &amp;lt;tt&amp;gt;unable to lock VE&amp;lt;/tt&amp;gt;.  So, there is no danger in simply running scripts to start smaller sets of VEs.  If the system is not autostarting, then there is no issue, and even if it is, when they conflict, one process (yours or the autostart) will lose and just move on to the next one.&lt;br /&gt;
&lt;br /&gt;
A script has been written to assist with ve starts: [[#startvirt.pl|startvirt.pl]], which will start 6 ve’s at once until there are none left.  If startvirt.pl is used on a system where “fastboot” was on, it will circumvent the fastboot for ve’s started by startvirt.pl – they will go through the complete quota check before starting - therefore this is not advisable when a system has crashed. When a system is booted cleanly, and there&#039;s no need for vzquota checks, then startvirt.pl is safe and advisable to run.&lt;br /&gt;
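&lt;br /&gt;
The idea behind startvirt.pl can be approximated with stock tools. This is not the real script - just a sketch of the same fan-out, echoing the vzctl commands it would run so it is safe to try anywhere:&lt;br /&gt;
&lt;br /&gt;
```shell
# Start up to 6 containers in parallel; "echo" stands in for vzctl here so
# the sketch only prints the commands. Drop the echo to actually start VEs.
printf '%s\n' 432 434 442 450 | xargs -P 6 -n 1 echo vzctl start
```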
&lt;br /&gt;
=== Make sure all containers are running ===&lt;br /&gt;
You can quickly get a feel for how many ve’s are started by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt4 log]# vs&lt;br /&gt;
VEID 16066 exist mounted running&lt;br /&gt;
VEID 16067 exist mounted running&lt;br /&gt;
VEID 4102 exist mounted running&lt;br /&gt;
VEID 4112 exist mounted running&lt;br /&gt;
VEID 4116 exist mounted running&lt;br /&gt;
VEID 4122 exist mounted running&lt;br /&gt;
VEID 4123 exist mounted running&lt;br /&gt;
VEID 4124 exist mounted running&lt;br /&gt;
VEID 4132 exist mounted running&lt;br /&gt;
VEID 4148 exist mounted running&lt;br /&gt;
VEID 4151 exist mounted running&lt;br /&gt;
VEID 4155 exist mounted running&lt;br /&gt;
VEID 42 exist mounted running&lt;br /&gt;
VEID 432 exist mounted running&lt;br /&gt;
VEID 434 exist mounted running&lt;br /&gt;
VEID 442 exist mounted running&lt;br /&gt;
VEID 450 exist mounted running&lt;br /&gt;
VEID 452 exist mounted running&lt;br /&gt;
VEID 453 exist mounted running&lt;br /&gt;
VEID 454 exist mounted running&lt;br /&gt;
VEID 462 exist mounted running&lt;br /&gt;
VEID 463 exist mounted running&lt;br /&gt;
VEID 464 exist mounted running&lt;br /&gt;
VEID 465 exist mounted running&lt;br /&gt;
VEID 477 exist mounted running&lt;br /&gt;
VEID 484 exist mounted running&lt;br /&gt;
VEID 486 exist mounted running&lt;br /&gt;
VEID 490 exist mounted running&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So to see how many ve’s have started:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep running | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And to see how many haven’t:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep down | wc -l&lt;br /&gt;
     0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And how many we should have running:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another tool you can use to see which ve’s have started, among other things, is [[#vzstat|vzstat]]. It will give you CPU, memory, and other stats on each ve and the overall system. It’s a good thing to watch as ve’s are starting (note the VENum parameter, it will tell you how many have started):&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;4:37pm, up 3 days,  5:31,  1 user, load average: 1.57, 1.68, 1.79&lt;br /&gt;
VENum 40, procs 1705: running 2, sleeping 1694, unint 0, zombie 9, stopped 0&lt;br /&gt;
CPU [ OK ]: VEs  57%, VE0   0%, user   8%, sys   7%, idle  85%, lat(ms) 412/2&lt;br /&gt;
Mem [ OK ]: total 6057MB, free 9MB/54MB (low/high), lat(ms) 0/0&lt;br /&gt;
Swap [ OK ]: tot 6142MB, free 4953MB, in 0.000MB/s, out 0.000MB/s&lt;br /&gt;
Net [ OK ]: tot: in  0.043MB/s  402pkt/s, out  0.382MB/s 4116pkt/s&lt;br /&gt;
Disks [ OK ]: in 0.002MB/s, out 0.000MB/s&lt;br /&gt;
&lt;br /&gt;
  VEID ST    %VM     %KM         PROC    CPU     SOCK FCNT MLAT IP&lt;br /&gt;
     1 OK 1.0/17  0.0/0.4    0/32/256 0.0/0.5 39/1256    0    9 69.55.227.152&lt;br /&gt;
    21 OK 1.3/39  0.1/0.2    0/46/410 0.2/2.8 23/1860    0    6 69.55.239.60&lt;br /&gt;
   133 OK 3.1/39  0.1/0.3    1/34/410 6.3/2.8 98/1860    0    0 69.55.227.147&lt;br /&gt;
   263 OK 2.3/39  0.1/0.2    0/56/410 0.3/2.8 34/1860    0    1 69.55.237.74&lt;br /&gt;
   456 OK  17/39  0.1/0.2   0/100/410 0.1/2.8 48/1860    0   11 69.55.236.65&lt;br /&gt;
   476 OK 0.6/39  0.0/0.2    0/33/410 0.1/2.8 96/1860    0   10 69.55.227.151&lt;br /&gt;
   524 OK 1.8/39  0.1/0.2    0/33/410 0.0/2.8 28/1860    0    0 69.55.227.153&lt;br /&gt;
   594 OK 3.1/39  0.1/0.2    0/45/410 0.0/2.8 87/1860    0    1 69.55.239.40&lt;br /&gt;
   670 OK 7.7/39  0.2/0.3    0/98/410 0.0/2.8 64/1860    0  216 69.55.225.136&lt;br /&gt;
   691 OK 2.0/39  0.1/0.2    0/31/410 0.0/0.7 25/1860    0    1 69.55.234.96&lt;br /&gt;
   744 OK 0.1/17  0.0/0.5    0/10/410 0.0/0.7  7/1860    0    6 69.55.224.253&lt;br /&gt;
   755 OK 1.1/39  0.0/0.2    0/27/410 0.0/2.8 33/1860    0    0 192.168.1.4&lt;br /&gt;
   835 OK 1.1/39  0.0/0.2    0/19/410 0.0/2.8  5/1860    0    0 69.55.227.134&lt;br /&gt;
   856 OK 0.3/39  0.0/0.2    0/13/410 0.0/2.8 16/1860    0    0 69.55.227.137&lt;br /&gt;
   936 OK 3.2/52  0.2/0.4    0/75/410 0.2/0.7 69/1910    0    8 69.55.224.181&lt;br /&gt;
  1020 OK 3.9/39  0.1/0.2    0/60/410 0.1/0.7 55/1860    0    8 69.55.227.52&lt;br /&gt;
  1027 OK 0.3/39  0.0/0.2    0/14/410 0.0/2.8 17/1860    0    0 69.55.227.83&lt;br /&gt;
  1029 OK 1.9/39  0.1/0.2    0/48/410 0.2/2.8 25/1860    0    5 69.55.227.85&lt;br /&gt;
  1032 OK  12/39  0.1/0.4    0/80/410 0.0/2.8 41/1860    0    8 69.55.227.90&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you are all done, you will want to make sure that all the VEs really did get started, run vs one more time.&lt;br /&gt;
&lt;br /&gt;
Note the time all ve’s are back up and enter that into and save the crash log entry.&lt;br /&gt;
&lt;br /&gt;
Occasionally, a ve will not start automatically. The most common reason for a ve not to come up normally is that the ve was at its disk limit before the crash, and will not start since it’s over the limit. To overcome this, set the disk space to the current usage level (the system will give this to you when it fails to start), start the ve, then re-set the disk space back to the prior level. Lastly, contact the customer to let them know they’re out of disk (or allocate more disk if they&#039;re entitled to more).&lt;br /&gt;
&lt;br /&gt;
== Hitting performance barriers and fixing them ==&lt;br /&gt;
&lt;br /&gt;
There are multiple modes virtuozzo offers to allocate resources to a ve. We utilize 2: SLM and UBC parameters.&lt;br /&gt;
On our 4.x systems, we use all SLM – it’s simpler to manage and understand. There are a few systems on virt19/18 that may also use SLM. Everything else uses UBC. &lt;br /&gt;
You can tell an SLM ve by:&lt;br /&gt;
&lt;br /&gt;
 SLMMODE=&amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in their conf file. &lt;br /&gt;
&lt;br /&gt;
TODO: detail SLM modes and parameters.&lt;br /&gt;
&lt;br /&gt;
If someone is in SLM mode and they hit memory resource limits, they simply need to upgrade to more memory.&lt;br /&gt;
&lt;br /&gt;
The following applies to everyone else (UBC).&lt;br /&gt;
&lt;br /&gt;
Customers will often email and say that they are getting out of memory errors - a common one is &amp;quot;cannot fork&amp;quot; ... basically, anytime you see something odd like this, it means they are hitting one of their limits that is in place in their conf file.&lt;br /&gt;
&lt;br /&gt;
The conf file, however, simply shows their limits - how do we know what they are currently at?&lt;br /&gt;
&lt;br /&gt;
The answer is a file called /proc/user_beancounters - this file contains the current status (and peaks) of their performance settings, and also counts how many times they have hit the barrier.  The output of the file looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;764: kmemsize         384113     898185    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          0        322        322          0&lt;br /&gt;
     privvmpages        1292       7108     610000     615000          0&lt;br /&gt;
     shmpages            270        528      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc               8         23        410        415          0&lt;br /&gt;
     physpages            48       5624          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages        641       6389      13019 2147483647          0&lt;br /&gt;
     numtcpsock            3         21       1210       1215          0&lt;br /&gt;
     numflock              1          3        107        117          0&lt;br /&gt;
     numpty                0          2         19         19          0&lt;br /&gt;
     numsiginfo            0          4        274        274          0&lt;br /&gt;
     tcpsndbuf             0      80928    1800000    1900000          0 &lt;br /&gt;
     tcprcvbuf             0     108976    1800000    1900000          0&lt;br /&gt;
     othersockbuf       2224      37568     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4272     200000     200000          0&lt;br /&gt;
     numothersock          3          9        650        660          0&lt;br /&gt;
     dcachesize        53922     100320     786432     818029          0&lt;br /&gt;
     numfile             161        382       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first column is the name of the counter in question - the same names we saw in the system’s conf file.  The second column is the _current_ value of that counter, the third column is the max that counter has ever risen to, the fourth column is the soft limit, and the fifth column is the hard limit (which is the same as the numbers in that system’s conf file).&lt;br /&gt;
&lt;br /&gt;
The sixth number is the failcount - how many times the current usage has risen to hit the barrier.  It will increase as soon as the current usage hits the soft limit.&lt;br /&gt;
&lt;br /&gt;
The problem with /proc/user_beancounters is that it actually contains that set of data for every running VE - so you can&#039;t just cat /proc/user_beancounters - it is too long and you get info for every other running system.&lt;br /&gt;
&lt;br /&gt;
You can vzctl enter the system and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 9999&lt;br /&gt;
 cat /proc/user_beancounters&lt;br /&gt;
&lt;br /&gt;
inside their system, and you will just see the stats for their particular system, but entering their system every time you want to see it is cumbersome.&lt;br /&gt;
&lt;br /&gt;
So, I wrote a simple script called &amp;quot;vzs&amp;quot; which simply greps for the VEID, and spits out the next 20 or so lines (however many lines there are in the output, I forget) after it.  For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzs 765:&lt;br /&gt;
765: kmemsize        2007936    2562780    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          8        322        322          0&lt;br /&gt;
     privvmpages       26925      71126     610000     615000          0&lt;br /&gt;
     shmpages          16654      16750      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc              41         57        410        415          0&lt;br /&gt;
     physpages          1794      49160          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages       4780      51270      13019 2147483647          0&lt;br /&gt;
     numtcpsock           23         37       1210       1215          0&lt;br /&gt;
     numflock             17         39        107        117          0&lt;br /&gt;
     numpty                1          3         19         19          0&lt;br /&gt;
     numsiginfo            0          6        274        274          0&lt;br /&gt;
     tcpsndbuf         22240     333600    1800000    1900000          0&lt;br /&gt;
     tcprcvbuf             0     222656    1800000    1900000          0&lt;br /&gt;
     othersockbuf     104528     414944     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4448     200000     200000          0&lt;br /&gt;
     numothersock         73        105        650        660          0&lt;br /&gt;
     dcachesize       247038     309111     786432     818029          0&lt;br /&gt;
     numfile             904       1231       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That showed us just the portion of /proc/user_beancounters for system 765.&lt;br /&gt;
&lt;br /&gt;
When you run the vzs command, always add a : after the VEID.&lt;br /&gt;
&lt;br /&gt;
So, if a customer complains about some out of memory errors, or no more files, or no more ptys, or just has an unspecific complaint about processes dying, etc., the very first thing you need to do is check their beancounters with vzs.  Usually you will spot an item that has a high failcount and needs to be upped.&lt;br /&gt;
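&lt;br /&gt;
Scanning for failing counters can be scripted too. A sketch - failed_counters is a made-up helper that reads vzs-style output (like the listing above) and prints any counter whose failcnt (last column) is nonzero:&lt;br /&gt;
&lt;br /&gt;
```shell
# Print the name and failcnt of any beancounter line with a nonzero failcnt.
# Handles both the 7-field first line (with the leading "VEID:") and the
# 6-field continuation lines: the counter name is always 5 fields before $NF.
failed_counters() {
  awk 'NF >= 6 && $NF + 0 > 0 { print $(NF - 5), $NF }'
}

sample='765: kmemsize 2007936 2562780 8100000 8200000 0
     numtcpsock 1215 1215 1210 1215 44
     numfile 904 1231 7500 7600 0'

printf '%s\n' "$sample" | failed_counters   # prints: numtcpsock 44
```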
&lt;br /&gt;
At that point you could simply up the counter with `vzctl set`.  Generally pick a number 10-20% higher than the old one, and make the hard limit slightly larger than the soft limit. However, our systems now come in several levels and those levels have more/different memory allocations. If someone is complaining about something other than a memory limit (pty, numiptent, numflock), it’s generally safe to increase it, at least to the same level as what’s in the /vzconf/4unlimited file on the newest virt. If someone is hitting a memory limit, first make sure they are given what they deserve:&lt;br /&gt;
&lt;br /&gt;
(refer to mgmt -&amp;gt; payments -&amp;gt; packages)&lt;br /&gt;
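The 10-20% bump can be sketched with shell arithmetic; the VEID, resource, and numbers here are hypothetical, and the actual vzctl invocation is left commented out:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: raise a soft limit (barrier) by ~15% and keep the hard limit
# slightly above it. The VEID, resource, and values are hypothetical.
veid=765
old_barrier=1800000
new_barrier=$(( old_barrier * 115 / 100 ))       # 10-20% higher than old
new_limit=$(( new_barrier + new_barrier / 20 ))  # hard limit ~5% above soft

echo "tcpsndbuf -> ${new_barrier}:${new_limit}"
# On the virt you would then run something like:
#   vzctl set $veid --tcpsndbuf ${new_barrier}:${new_limit} --save
```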
&lt;br /&gt;
To set those levels, you use the [[#setmem|setmem]] command. &lt;br /&gt;
&lt;br /&gt;
The alternate (DEPRECATED) method would be to use one of 4 commands:&lt;br /&gt;
256 &amp;lt;veid&amp;gt;&lt;br /&gt;
300 &amp;lt;veid&amp;gt;&lt;br /&gt;
384 &amp;lt;veid&amp;gt;&lt;br /&gt;
512 &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the levels were not right (you’d run vzs &amp;lt;veid&amp;gt; before and after to see the effect), tell the customer they’ve been adjusted and be done with it. If the levels were right, tell the customer they must upgrade to a higher package, tell them how to see their level (control panel), and that they can reboot their system to escape this lockup condition.&lt;br /&gt;
&lt;br /&gt;
Customers can also complain that their site is totally unreachable, or complain that it is down ... if the underlying machine is up, and all seems well, you may notice in the beancounters that network-specific counters are failing - such as numtcpsock, tcpsndbuf or tcprcvbuf.  This will keep them from talking on the network and make it seem like their system is down.  Again, just up the limits and things should be fine.&lt;br /&gt;
&lt;br /&gt;
On virts 1-4, you should first look at the default settings for that item on a later virt, such as virt 8 - we have increased the defaults a lot since the early machines.  So, if you are going to up a counter on virt2, don’t up it by 10-20%; instead, up it to the new default that you see on virt8.&lt;br /&gt;
&lt;br /&gt;
== Moving a VE to another virt (migrate/migrateonline) ==&lt;br /&gt;
&lt;br /&gt;
This will take a while to complete - and it is best to do this at night when the load is light on both machines.&lt;br /&gt;
&lt;br /&gt;
There are different methods for this, depending on which version of virtuozzo is installed on the src and dst virt.&lt;br /&gt;
To check which version is running: &lt;br /&gt;
 [root@virt12 private]# cat /etc/virtuozzo-release&lt;br /&gt;
 Virtuozzo release 2.6.0&lt;br /&gt;
&lt;br /&gt;
Ok, let&#039;s say that the VE is 1212, and vital stats are:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1212&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/1212&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/1212&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;fedora-core-2/20040903&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.229.84&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;devel-fc2/20040903 php-fc2/20040813 mysql-fc2/20040812 postgresql-fc2/20040813 mod_perl-fc2/20040812 mod_ssl-fc2/20040811 jre-fc2/20040823 jdk-fc2/20040823 mailman-fc2/20040823 analog-fc2/20040824 proftpd-fc2/20040818 tomcat-fc2/20040823 usermin-fc2/20040909 webmin-fc2/20040909 uw-imap-fc2/20040830 phpBB-fc2/20040831 spamassassin-fc2/20040910 PostNuke-fc2/20040824 sl-webalizer-fc2/20040&lt;br /&gt;
818&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# vzctl exec 1212 df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  405M  3.7G  10% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From this you can see that he’s using (and will minimally need free on the dst server) ~400MB, and he’s running on a Fedora 2 template, version 20040903. He’s also got a bunch of other templates installed. It is &#039;&#039;&#039;vital&#039;&#039;&#039; that &#039;&#039;&#039;all&#039;&#039;&#039; these templates exist on the dst system. To confirm that, on the dst system run:&lt;br /&gt;
&lt;br /&gt;
For &amp;lt; 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt14 private]# vzpkgls | grep fc2&lt;br /&gt;
devel-fc2 20040903&lt;br /&gt;
PostNuke-fc2 20040824&lt;br /&gt;
analog-fc2 20040824&lt;br /&gt;
awstats-fc2 20040824&lt;br /&gt;
bbClone-fc2 20040824&lt;br /&gt;
jdk-fc2 20040823&lt;br /&gt;
jre-fc2 20040823&lt;br /&gt;
mailman-fc2 20040823&lt;br /&gt;
mod_frontpage-fc2 20040816&lt;br /&gt;
mod_perl-fc2 20040812&lt;br /&gt;
mod_ssl-fc2 20040811&lt;br /&gt;
mysql-fc2 20040812&lt;br /&gt;
openwebmail-fc2 20040817&lt;br /&gt;
php-fc2 20040813&lt;br /&gt;
phpBB-fc2 20040831&lt;br /&gt;
postgresql-fc2 20040813&lt;br /&gt;
proftpd-fc2 20040818&lt;br /&gt;
sl-webalizer-fc2 20040818&lt;br /&gt;
spamassassin-fc2 20040910&lt;br /&gt;
tomcat-fc2 20040823&lt;br /&gt;
usermin-fc2 20040909&lt;br /&gt;
uw-imap-fc2 20040830&lt;br /&gt;
webmin-fc2 20040909&lt;br /&gt;
[root@virt14 private]# vzpkgls | grep fedora&lt;br /&gt;
fedora-core-1 20040121 20040818&lt;br /&gt;
fedora-core-devel-1 20040121 20040818&lt;br /&gt;
fedora-core-2 20040903&lt;br /&gt;
[root@virt14 private]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For these older systems, you can simply match up the date on the template. &lt;br /&gt;
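Matching dates by eye is error-prone when there are dozens of templates; a quicker way is to diff the sorted vzpkgls output from both hosts. The lists below are trimmed stand-ins for output captured on each host:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: templates (name + date) present on the src but not the dst.
# These variables stand in for `vzpkgls` output captured on each host.
src_list='devel-fc2 20040903
mysql-fc2 20040812
php-fc2 20040813'
dst_list='devel-fc2 20040903
mysql-fc2 20040812'

printf '%s\n' "$src_list" | sort > /tmp/src.$$
printf '%s\n' "$dst_list" | sort > /tmp/dst.$$
missing=$(comm -23 /tmp/src.$$ /tmp/dst.$$)   # lines unique to the src list
echo "missing on dst: $missing"
rm -f /tmp/src.$$ /tmp/dst.$$
```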
&lt;br /&gt;
For &amp;gt;= 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt19 /vz2/private]# vzpkg list&lt;br /&gt;
centos-5-x86                    2008-01-07 22:05:57&lt;br /&gt;
centos-5-x86    devel&lt;br /&gt;
centos-5-x86    jre&lt;br /&gt;
centos-5-x86    jsdk&lt;br /&gt;
centos-5-x86    mod_perl&lt;br /&gt;
centos-5-x86    mod_ssl&lt;br /&gt;
centos-5-x86    mysql&lt;br /&gt;
centos-5-x86    php&lt;br /&gt;
centos-5-x86    plesk9&lt;br /&gt;
centos-5-x86    plesk9-antivirus&lt;br /&gt;
centos-5-x86    plesk9-api&lt;br /&gt;
centos-5-x86    plesk9-atmail&lt;br /&gt;
centos-5-x86    plesk9-backup&lt;br /&gt;
centos-5-x86    plesk9-horde&lt;br /&gt;
centos-5-x86    plesk9-mailman&lt;br /&gt;
centos-5-x86    plesk9-mod-bw&lt;br /&gt;
centos-5-x86    plesk9-postfix&lt;br /&gt;
centos-5-x86    plesk9-ppwse&lt;br /&gt;
centos-5-x86    plesk9-psa-firewall&lt;br /&gt;
centos-5-x86    plesk9-psa-vpn&lt;br /&gt;
centos-5-x86    plesk9-psa-fileserver&lt;br /&gt;
centos-5-x86    plesk9-qmail&lt;br /&gt;
centos-5-x86    plesk9-sb-publish&lt;br /&gt;
centos-5-x86    plesk9-vault&lt;br /&gt;
centos-5-x86    plesk9-vault-most-popular&lt;br /&gt;
centos-5-x86    plesk9-watchdog&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On these newer systems, it&#039;s difficult to tell whether the template on the dst exactly matches the src. Just because centos-5-x86 is listed on both servers doesn&#039;t mean all the same packages are present on the dst. To know for sure, you must perform a sample rsync:&lt;br /&gt;
&lt;br /&gt;
 rsync -avn /vz/template/centos/5/x86/ root@10.1.4.61:/vz/template/centos/5/x86/&lt;br /&gt;
&lt;br /&gt;
If you see a ton of output from the dry run command, then clearly there are some differences. You may opt to let the rsync complete (without the dry run option); the only downside is that you&#039;ve now used up more space on the dst, and the centos template will be a mess of old and new data - it will be difficult if not impossible to undo (if someday we wanted to reclaim the space).&lt;br /&gt;
&lt;br /&gt;
If you choose to merge templates, you should closely inspect the dry run output. You should also take care to exclude anything in the /config directory. For example:&lt;br /&gt;
&lt;br /&gt;
 rsync -av -e ssh --stats --exclude=x86/config  /vz/template/ubuntu/10.04/ root@10.1.4.62:/vz/template/ubuntu/10.04/&lt;br /&gt;
&lt;br /&gt;
Which will avoid this directory and contents:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 /vz2/private]# ls /vz/template/ubuntu/10.04/x86/config*&lt;br /&gt;
app  os&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is important to avoid since the config may differ on the destination and we are really only interested in making sure the packages are there, not overwriting a newer config with an older one.&lt;br /&gt;
&lt;br /&gt;
If the dst system was missing a template, you have 2 choices: &lt;br /&gt;
# put the missing template on the dst system. 2 choices here: &lt;br /&gt;
## Install the template from rpm (found under backup2: /mnt/data4/vzrpms/distro/) or &lt;br /&gt;
## rsync over the template (found under /vz/template) - see above&lt;br /&gt;
# put the ve on a system which has all the proper templates&lt;br /&gt;
&lt;br /&gt;
=== pre-seeding a migration ===&lt;br /&gt;
&lt;br /&gt;
When migrating a customer (or when migrating many), depending on how much data you have to transfer, it can take some time. Further, it can be difficult to gauge when a migration will complete or how long it will take. To help speed up the process and get a better idea of how long it will take, you can pre-transfer a customer&#039;s data to the destination server. If done correctly, vzmigrate will see the pre-transferred data and pick up where you left off, having much less to transfer (just changed/new files). &lt;br /&gt;
&lt;br /&gt;
We believe vzmigrate uses rsync to do its transfer. Therefore not only can you use rsync to do a pre-seed, you can also run rsync to see what is causing a repeatedly-failing vzmigrate to fail. &lt;br /&gt;
&lt;br /&gt;
There&#039;s no magic to a pre-seed; you just need to make sure it&#039;s named correctly.&lt;br /&gt;
&lt;br /&gt;
Given:&lt;br /&gt;
&lt;br /&gt;
source: /vz1/private/1234&lt;br /&gt;
&lt;br /&gt;
and you want to migrate to /vz2 on the target system, your rsync would look like:&lt;br /&gt;
&lt;br /&gt;
 rsync -av /vz1/private/1234/ root@x.x.x.x:/vz2/private/1234.migrated/&lt;br /&gt;
&lt;br /&gt;
After running that successful rsync, the ensuing migrateonline (or migrate) will take much less time to complete- depending on the # of files to be analyzed and the # of changed files. In any case, it&#039;ll be much much faster than had you just started the migration from scratch.&lt;br /&gt;
&lt;br /&gt;
Further, as we discuss elsewhere in this topic, a failed migration can be moved from &amp;lt;tt&amp;gt;/vz/private/1234&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;/vz/private/1234.migrated&amp;lt;/tt&amp;gt; on the destination if you want to restart a failed migration. This should &#039;&#039;&#039;only&#039;&#039;&#039; be done if the migration failed and the CT is not running on the destination HN.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline intructions: src &amp;gt;=3.x -&amp;gt; dst&amp;gt;=3.x ===&lt;br /&gt;
&lt;br /&gt;
A script called [[#migrateonline|migrateonline]] was written to handle this kind of move. It is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; – vzmigrate is a util to seamlessly move a ve from one host to another, with no reboot of the ve necessary. This wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from arp/route tables, causing problems when the ve was started up on the dst system. [[#migrate|migrate]] mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrateonline emails ve owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as it was on the src virt. Note: &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; is equivalent to &amp;lt;tt&amp;gt;migrateonline&amp;lt;/tt&amp;gt;, but will &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; a ve AND restart it in the process.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrateonline&lt;br /&gt;
usage: /usr/local/sbin/migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrateonline 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine.&lt;br /&gt;
&lt;br /&gt;
If they had backups, use the mvbackups command to move their backups to the new server:&lt;br /&gt;
&lt;br /&gt;
 mvbackups 1212 virt14 vz&lt;br /&gt;
&lt;br /&gt;
Rename the ve&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/migrated-1212&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/old-1212-migrated-20120404-noarchive&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems in mgmt to reflect the new path and server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If migrateonline does not work, you can try again using plain migrate - this will result in a brief reboot for the ve.&lt;br /&gt;
Before you try again, make sure of a few things:&lt;br /&gt;
&lt;br /&gt;
Depending on where the migration died, there may be partial data on the dst system in 1 of 2 places:&lt;br /&gt;
(given the example above)&lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
Before you run migrate again, you&#039;ll want to rename the directory so that all data is in 1212.migrated:&lt;br /&gt;
&lt;br /&gt;
 mv /vz/private/1212 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
This way, it will pick up where it left off and transfer only new files.&lt;br /&gt;
&lt;br /&gt;
Likewise, if you want to speed up a migration, you can pre-seed the dst as follows:&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# rsync -avSH /vz/private/1212/ root@10.1.4.64:/vz/private/1212.migrated/&lt;br /&gt;
&lt;br /&gt;
Then when you run migrate or migrateonline, it will only need to move the changed files - the migration will complete quickly.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline/migrate failures (migrate manually) ===&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say for whatever reason the migration fails. If it fails with [[#migrateonline|migrateonline]], you should try [[#migrate|migrate]] (which will reboot the customer, so notify them ahead of time).&lt;br /&gt;
&lt;br /&gt;
You may want to run a [[#pre-seeding_a_migration|pre-seed]] rsync to see if you can find the problem. On older virts, we&#039;ve seen this problem due to a large logfile (which you can find and encourage the customer to remove/compress):&lt;br /&gt;
 for f in `find / -size +1048576k`; do ls -lh $f; done&lt;br /&gt;
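You can try the same hunt on throwaway data; the demo below uses sparse files and a 1 MB threshold instead of the 1 GB (+1048576k) used above:&lt;br /&gt;

```shell
#!/bin/sh
# Demo of hunting oversized files with find -size, on sparse throwaway
# files so nothing large actually hits the disk.
work=$(mktemp -d)
truncate -s 2M "$work/huge.log"     # apparent size 2 MB (sparse)
truncate -s 10k "$work/small.log"

# same shape as the loop above, but with a 1 MB threshold
for f in $(find "$work" -size +1024k); do ls -lh "$f"; done

found=$(find "$work" -size +1024k -exec basename {} \;)
echo "$found"
rm -rf "$work"
```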
&lt;br /&gt;
You may also see migration failing due to quota issues.&lt;br /&gt;
&lt;br /&gt;
You can try to resolve by copying any quota file into the file you need:&lt;br /&gt;
&lt;br /&gt;
 cp /var/vzquota/quota.1 /var/vzquota/quota.xxx&lt;br /&gt;
&lt;br /&gt;
If it complains about quota running you should then be able to stop it&lt;br /&gt;
&lt;br /&gt;
 vzquota off xxxx&lt;br /&gt;
&lt;br /&gt;
If all else fails, migrate to a new VEID&lt;br /&gt;
i.e. 1234 becomes 12341&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the rsync or [[#migrate|migrate]] fails, you can always move someone manually:&lt;br /&gt;
&lt;br /&gt;
1. stop ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v stop 1234&lt;br /&gt;
&lt;br /&gt;
2. copy over data&amp;lt;br&amp;gt;&lt;br /&gt;
 rsync -avSH /vz/private/1234/ root@1.1.1.1:/vzX/private/1234/&lt;br /&gt;
&lt;br /&gt;
NOTE: if you&#039;ve previously seeded the data (run rsync while the VE was up/running), and this is a subsequent rsync, make sure the last rsync you do (while the VE is not running) has the --delete option&lt;br /&gt;
&lt;br /&gt;
3. copy over conf&amp;lt;br&amp;gt;&lt;br /&gt;
 scp /vzconf/1234.conf root@1.1.1.1:/vzconf&lt;br /&gt;
&lt;br /&gt;
4. on dst, edit the conf to reflect the right vzX dir&amp;lt;br&amp;gt;&lt;br /&gt;
 vi /vzconf/1234.conf&lt;br /&gt;
&lt;br /&gt;
5. on src remove the IPs&amp;lt;br&amp;gt;&lt;br /&gt;
 ipdel 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
6. on dst add IPs &amp;lt;br&amp;gt;&lt;br /&gt;
 ipadd 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
7. on dst, start ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v start 1234&lt;br /&gt;
&lt;br /&gt;
8. cancel, then archive the ve on the src per the instructions above.&lt;br /&gt;
&lt;br /&gt;
=== migrate src=2.6.0 -&amp;gt; dst&amp;gt;=2.6.0, or mass-migration with customer notify ===&lt;br /&gt;
&lt;br /&gt;
A script called &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; was written to handle this kind of move. It is basically a wrapper for vzmigrate – vzmigrate is a util to seamlessly move a ve from one host to another. This wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from arp/route tables, causing problems when the ve was started up on the dst system. migrate mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrate emails ve owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as it was on the src virt. Note: migrateonline is equivalent to migrate, but will migrate a ve from one 2.6 &#039;&#039;&#039;kernel&#039;&#039;&#039; machine to another 2.6 kernel machine without restarting the ve.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrate&lt;br /&gt;
usage: /usr/local/sbin/migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrate 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which migrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
&lt;br /&gt;
On 2.6.1 you’ll also have to move the private area:&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, [[#cancelve|cancelve]] would offer to remove them. You want to say &#039;&#039;&#039;no&#039;&#039;&#039; to this option – doing so would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems as in the example above will create actual files and consume lots more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, update the system, template (should be pre-selected as the same), and the shut down date.&lt;br /&gt;
&lt;br /&gt;
=== vzmigrate: src=2.6.1 -&amp;gt; dst&amp;gt;=2.6.0 ===&lt;br /&gt;
&lt;br /&gt;
This version of vzmigrate works properly with regard to handling ips. It will not notify ve owners of moves as in the above example. Other than that it’s essentially the same.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]#  vzmigrate 10.1.4.64 -r no 1212:1212:/vz/private/1212:/vz/root/1212&lt;br /&gt;
migrating on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which vzmigrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
[root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, &amp;lt;tt&amp;gt;cancelve&amp;lt;/tt&amp;gt; would offer to remove them. You want to say no to this option – doing so would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems as in the example above will create actual files and consume lots more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, update the system, template (should be pre-selected as the same), and the shut down date.&lt;br /&gt;
&lt;br /&gt;
=== src=2.5.x ===&lt;br /&gt;
&lt;br /&gt;
First, go to the private dir:&lt;br /&gt;
&lt;br /&gt;
 cd /vz1/private/&lt;br /&gt;
&lt;br /&gt;
Stop the VE - make sure it stops totally cleanly.&lt;br /&gt;
 &lt;br /&gt;
 vzctl stop 1212&lt;br /&gt;
&lt;br /&gt;
Then you’d use vemove - a script written to copy over the config, create tarballs of the ve’s data on the destination virt, and cancel the ve on the source system (in this example we’re going to put a ve that was in /vz1/private on the src virt, in /vz/private on the dst virt):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vemove&lt;br /&gt;
ERROR: Usage: vemove veid target_ip target_path_dir&lt;br /&gt;
[root@virt12 sbin]# vemove 1212 10.1.4.64 /vz/private/1212&lt;br /&gt;
tar cfpP - 1212 --ignore-failed-read | (ssh -2 -c arcfour 10.1.4.64 &amp;quot;split - -b 1024m /vz/private/1212.tar&amp;quot; )&lt;br /&gt;
scp /vzconf/1212.conf 10.1.4.64:/vzconf&lt;br /&gt;
cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, cancelve would offer to remove them. You want to say no to this option – doing so would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems as in the example above will create actual files and consume lots more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
When you are done, go to /vz/private on the dst virt; you will have files like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;1212.taraa&lt;br /&gt;
1212.tarab&lt;br /&gt;
1212.tarac&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each one 1024m (or less, for the last one) in size.&lt;br /&gt;
&lt;br /&gt;
Go to that directory on the dst server and run:&lt;br /&gt;
&lt;br /&gt;
 cat 1212.tar?? | tar xpPBf -&lt;br /&gt;
&lt;br /&gt;
and after 20 mins or so it will be totally untarred.  Now since the conf&lt;br /&gt;
file is already there, you can go ahead and start the system.&lt;br /&gt;
&lt;br /&gt;
 vzctl start 1212&lt;br /&gt;
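The split/cat/tar plumbing can be exercised safely on throwaway data with a stock tar; this illustrates only the mechanics - per the note below, real VE private areas must be packed and unpacked with the Virtuozzo tar:&lt;br /&gt;

```shell
#!/bin/sh
# Illustration of the split-tarball round trip vemove relies on, using
# stock tar on throwaway data. Real VE private areas must be packed and
# unpacked with the Virtuozzo tar (tar xpPBf -), not a stock tar.
work=$(mktemp -d)
mkdir -p "$work/1212/etc"
echo "hello" > "$work/1212/etc/motd"

# pack and split into 1k chunks (vemove splits at 1024m)
( cd "$work" && tar cf - 1212 | split -b 1k - 1212.tar )

# reassemble the pieces and unpack on the "destination"
mkdir "$work/dst"
( cd "$work/dst" && cat ../1212.tar?? | tar xf - )

restored=$(cat "$work/dst/1212/etc/motd")
echo "$restored"
rm -rf "$work"
```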
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, update the system, template (should be pre-selected as the same), and the shut down date.&lt;br /&gt;
&lt;br /&gt;
NOTE: you MUST tar the system up using the virtuozzo version of tar that&lt;br /&gt;
is on all the virt systems, and further you MUST untar the tarball with&lt;br /&gt;
the virtuozzo tar, using these options:  `&amp;lt;tt&amp;gt;tar xpPBf -&amp;lt;/tt&amp;gt;`&lt;br /&gt;
&lt;br /&gt;
If you tar up an entire VE and move it to a non-virtuozzo machine, that is&lt;br /&gt;
ok, and you can untar it there with normal tar commands, but do not untar&lt;br /&gt;
it and then repack it with a normal tar and expect it to work - you need&lt;br /&gt;
to use virtuozzo tar commands on virtuozzo tarballs to make it work.&lt;br /&gt;
&lt;br /&gt;
The backups are sort of an exception, since we are just (usually)&lt;br /&gt;
restoring user data that was created after we gave them the system, and&lt;br /&gt;
therefore has nothing to do with magic symlinks or vz-rpms, etc.&lt;br /&gt;
&lt;br /&gt;
== Moving a VE on the same virt ==&lt;br /&gt;
&lt;br /&gt;
Easy way:&amp;lt;br&amp;gt;&lt;br /&gt;
Scenario 1: ve 123 is to be renamed 1231 and moved from vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:1231:/vz/private/1231:/vz/root/1231&lt;br /&gt;
&lt;br /&gt;
Scenario 2: ve 123 is to be moved vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:123:/vz/private/123:/vz/root/123&lt;br /&gt;
&lt;br /&gt;
vzmlocal will reboot the ve at the end of the move&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; don&#039;t forget to fix /usr/local/sbin/backup.config for new disk location&lt;br /&gt;
&lt;br /&gt;
Manual/old way:&lt;br /&gt;
&lt;br /&gt;
1) &amp;lt;tt&amp;gt;vzctl stop 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2) &amp;lt;tt&amp;gt;mv /vz1/private/123 /vz/private/.&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(or cp -a if you want to copy)&lt;br /&gt;
3) in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts/123.conf&amp;lt;/tt&amp;gt; change value&amp;lt;br&amp;gt;&lt;br /&gt;
of &#039;&amp;lt;tt&amp;gt;VE_PRIVATE&amp;lt;/tt&amp;gt;&#039; variable to point to a new private area location&lt;br /&gt;
4) &amp;lt;tt&amp;gt;vzctl start 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
5) update backups if needed: &amp;lt;tt&amp;gt;mvbackups 123 virtX virt1 vz&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
6) update management screens&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notes: a) absolute path to private area is stored in quota file &amp;lt;tt&amp;gt;/var/vzquota/quota.123&amp;lt;/tt&amp;gt; - so during first startup quota will be recalculated.&amp;lt;br&amp;gt;&lt;br /&gt;
b) if you&#039;re going to write a script to do the job, you MUST be sure that $VEID won&#039;t be expanded to &#039;&#039; in the ve config file - i.e. you need to escape &#039;$&#039;. Otherwise you might end up with:&lt;br /&gt;
&lt;br /&gt;
 VE_PRIVATE=&amp;quot;/vz/private/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in config, and &#039;vzctl destroy&#039; for this VE ID &#039;&#039;&#039;will remove everything under /vz/private/ directory&#039;&#039;&#039;.&lt;br /&gt;
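A tiny sketch of the pitfall (a hypothetical script fragment - the here-doc and file names are illustrative): an unescaped $VEID in an interpolating here-doc expands to nothing, while an escaped one survives into the config for vzctl to expand at runtime:&lt;br /&gt;

```shell
# Demo of the $VEID expansion pitfall when a script writes a ve config.
# VEID is deliberately unset here to mimic a script that forgot to set it.
unset VEID
conf=$(mktemp)

# WRONG: the shell expands $VEID to '' - this is the config that makes
# 'vzctl destroy' remove everything under /vz/private/
cat > "$conf" <<EOF
VE_PRIVATE="/vz/private/$VEID"
EOF
grep VE_PRIVATE "$conf"    # prints: VE_PRIVATE="/vz/private/"

# RIGHT: escape the dollar sign so the literal $VEID reaches the config
cat > "$conf" <<EOF
VE_PRIVATE="/vz/private/\$VEID"
EOF
grep VE_PRIVATE "$conf"    # prints: VE_PRIVATE="/vz/private/$VEID"
```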
&lt;br /&gt;
== Adding a veth device to a VE ==&lt;br /&gt;
&lt;br /&gt;
Not totally sure what this is, but a customer asked for it and here&#039;s what we did (as instructed by vz support):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;v set 99 --netif_add eth99  --save&lt;br /&gt;
ipdel 99 69.55.230.58&lt;br /&gt;
v set 99 --ifname eth99 --ipadd 69.55.230.58 --save&lt;br /&gt;
v set 99 --ifname eth99 --gateway 69.55.230.1 --save&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active      eth0              veth77.77,veth99.99&lt;br /&gt;
veth_net          active&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
br99             bridge     net99&lt;br /&gt;
veth99.99        veth       net99&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        net99            69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br0 eth0&lt;br /&gt;
&lt;br /&gt;
(will remove 99 from orig net and move to veth_net)&lt;br /&gt;
vznetcfg net addif veth_net veth99.99&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
(delete the old crap)&lt;br /&gt;
vznetcfg net del net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then, to add another device in&lt;br /&gt;
&lt;br /&gt;
v set 77 --netif_add eth77  --save&lt;br /&gt;
ipdel 77 69.55.230.78&lt;br /&gt;
v set 77 --ifname eth77 --ipadd 69.55.230.78 --save&lt;br /&gt;
v set 77 --ifname eth77 --gateway 69.55.230.1 --save&lt;br /&gt;
v set 77 --save --ifname eth77 --network veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif veth_net veth77.77&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth       veth_net&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
another example&lt;br /&gt;
&lt;br /&gt;
v set 1182 --netif_add eth1182  --save&lt;br /&gt;
ipdel 1182 69.55.236.217&lt;br /&gt;
v set 1182 --ifname eth1182 --ipadd 69.55.236.217 --save&lt;br /&gt;
v set 1182 --ifname eth1182 --gateway 69.55.236.1 --save&lt;br /&gt;
vznetcfg net addif veth_net veth1182.1182&lt;br /&gt;
v set 1182 --save --ifname eth1182 --network veth_net&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unused/not working commands:&lt;br /&gt;
ifconfig veth99.0 0&lt;br /&gt;
vznetcfg net list&lt;br /&gt;
vznetcfg br new br99 net99&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg br show&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
&lt;br /&gt;
vznetcfg br new br1182 net1182&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif eth0 net1182&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&lt;br /&gt;
--- 8&amp;lt; ---&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new net&lt;br /&gt;
# vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vzctl set 99 --save --netif_add eth0 (at this stage veth99.0 interface have to appear&lt;br /&gt;
on node)&lt;br /&gt;
# vzctl set 99 --save --ifname eth0 --ipadd 69.55.230.58 (and probably few more arguments&lt;br /&gt;
here - see &#039;man vzctl&#039;)&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Assigning/removing an IP from a VE ==&lt;br /&gt;
&lt;br /&gt;
1. Add or remove ips:&lt;br /&gt;
 ipdel 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
 ipadd 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
&lt;br /&gt;
2. update Mgmt screens&lt;br /&gt;
&lt;br /&gt;
3. offer to update any DNS we do for them&lt;br /&gt;
&lt;br /&gt;
4. check to see if we had rules for the old IP in the firewall&lt;br /&gt;
&lt;br /&gt;
== Enabling tun device for a ve ==&lt;br /&gt;
Note, there’s a command for this: [[#addtun|addtun]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
Make sure the tun.o module is already loaded before Virtuozzo is started: &lt;br /&gt;
 lsmod &lt;br /&gt;
Allow the VPS to use the TUN/TAP device: &lt;br /&gt;
 vzctl set 101 --devices c:10:200:rw --save &lt;br /&gt;
Create the corresponding device inside the VPS and set the proper permissions: &lt;br /&gt;
 vzctl exec 101 mkdir -p /dev/net &lt;br /&gt;
 vzctl exec 101 mknod /dev/net/tun c 10 200 &lt;br /&gt;
 vzctl exec 101 chmod 600 /dev/net/tun&lt;br /&gt;
&lt;br /&gt;
== Remaking a system (on same virt) ==&lt;br /&gt;
&lt;br /&gt;
1. [[#cancelve|cancelve]] (or v destroy x - ONLY if you&#039;re POSITIVE no data needs to be saved)&lt;br /&gt;
&lt;br /&gt;
2. [[#vemake|vemake]] using same veid&lt;br /&gt;
&lt;br /&gt;
3. [[#mvbackups|mvbackups]] or [[#vb|vb]] (if new mount point)&lt;br /&gt;
&lt;br /&gt;
4. update mgmt with new dir/ip &lt;br /&gt;
&lt;br /&gt;
5. update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Re-initialize quota for a VE ==&lt;br /&gt;
&lt;br /&gt;
There’s a command for this now: [[#clearquota|clearquota]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
&lt;br /&gt;
vzctl stop 1&lt;br /&gt;
vzquota drop 1&lt;br /&gt;
vzctl start 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Traffic accounting on linux ==&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - all tracking is done via bwdb now. This is how we used to track traffic.&lt;br /&gt;
&lt;br /&gt;
TODO: update for diff versions of vz&lt;br /&gt;
&lt;br /&gt;
Unlike FreeBSD, where we have to add firewall count rules to the system to count the traffic, virtuozzo counts the traffic for us.  You can see the current traffic stats by running `vznetstat`:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see the VEID is on a line with the in and out bytes.  So, we simply run a cron job:&lt;br /&gt;
&lt;br /&gt;
 4,9,14,19,24,29,34,39,44,49,55,59 * * * * /root/vztrafdump.sh&lt;br /&gt;
&lt;br /&gt;
Just like we do on FreeBSD - this one goes through all the VEs in /vz/private and greps the line from vznetstat that matches them and dumps it in /jc_traffic_dump on their system.  Then it does it again for all the VEs in /vz1/private.  It is important to note that vznetstat runs only once, and the grepping is done from a temporary file that contains that output - we do this because running vznetstat once for each VE that we read out of /vz/private and /vz1/private would take way too long and be too intensive.&lt;br /&gt;
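The core of that flow might look like the following hypothetical sketch (the VE list and the parsing are illustrative, and a canned sample stands in for live &amp;lt;tt&amp;gt;vznetstat&amp;lt;/tt&amp;gt; output so the sketch runs anywhere):&lt;br /&gt;

```shell
# Hypothetical sketch of the core of vztrafdump.sh: capture the
# vznetstat output once to a temp file, then grep each VE's line
# out of it instead of re-running vznetstat per VE.
snap=$(mktemp)
# canned sample in the vznetstat column layout shown above
cat > "$snap" <<'EOF'
24218     1            484M             39M
2451      1           2224M            265M
418       1           1560M             34M
EOF
# in production the list would come from /vz/private and
# /vz1/private; hardcoded here for the demo
for veid in 2451 418; do
    awk -v ve="$veid" '$1 == ve' "$snap"   # one pass, no extra vznetstat runs
done
```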
&lt;br /&gt;
You do not need to do anything to facilitate this other than make sure that that cron job is running - the vznetstat counters are always running, and any new VEs that are added to the system will be accounted for automatically.&lt;br /&gt;
&lt;br /&gt;
Traffic resetting no longer works with vz 2.6, so we disable the vztrafdump.sh on those virts.&lt;br /&gt;
&lt;br /&gt;
== Watchdog script ==&lt;br /&gt;
&lt;br /&gt;
On some of the older virts, we have a watchdog running that kills procs that are deemed bad per the following:&lt;br /&gt;
&lt;br /&gt;
/root/watchdog from quar1&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [5-9]...&lt;br /&gt;
  then&lt;br /&gt;
# 50-90%&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;...:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;....m&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 1000min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
  if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [1-9]...&lt;br /&gt;
  then&lt;br /&gt;
# running for 10-90 percent&lt;br /&gt;
    if echo $line | awk &#039;{print $NF}&#039; | egrep &#039;cfusion|counter|vchkpw&#039;&lt;br /&gt;
    then&lt;br /&gt;
&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[2-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# between 2-9min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      elif echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[0-9][0-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# up to 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
    fi&lt;br /&gt;
  fi&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Misc Linux Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space ... when you configure a linux system with a certain amount of disk space (the default is 4gigs) you do not actually use up 4gigs of space on the system.  The diskspace setting for a user is simply a cap, and they only use up as much space on the actual disk drive as they are actually using.&lt;br /&gt;
&lt;br /&gt;
When you create a new linux system, even though there are some 300 RPMs or so installed, if you run `df -k` you will see that the entire 4gig partition is empty - no space is being used.  This is because the files in their system are &amp;quot;magic symlinks&amp;quot; to the template for their OS that is in /vz/template - however, any changes to any of those files will &amp;quot;disconnect&amp;quot; them and they will immediately begin using space in their system.  Further, any new files uploaded (even if those new files overwrite existing files) will take up space on the partition.&lt;br /&gt;
&lt;br /&gt;
=== Cant change file /etc/sysconfig/network ===&lt;br /&gt;
if you see this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt8 root]# vzctl stop 160 ; vzctl start 160&lt;br /&gt;
VE is not running&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is unmounted&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
bash ERROR: Can&#039;t change file /etc/sysconfig/network&lt;br /&gt;
Deleting IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
VE is unmounted&lt;br /&gt;
[root@virt8 root]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
it probably means they no longer have /bin/bash - copy one in for them&lt;br /&gt;
 &lt;br /&gt;
ALSO: another possibility is that they have removed the `ed` RPM from their system - it needs to be reinstalled into their system.  But since their system is down, this is tricky ...&lt;br /&gt;
&lt;br /&gt;
VE startup scripts used by &#039;vzctl&#039; want the &#039;ed&#039; package to be available inside the VE. So if the &#039;ed&#039; package is enabled in the OS template config and in the OS template that VE #827 is based on, this error should be fixed.&lt;br /&gt;
&lt;br /&gt;
Yes, it is possible to add an RPM to a VE while it is not running.&lt;br /&gt;
Try the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# cd /vz/template/&amp;lt;OS_template_with_ed_package&amp;gt;/&lt;br /&gt;
# vzctl mount 827&lt;br /&gt;
# rpm -Uvh --root /vz/root/827 --veid 827 ed-0.2-25.i386.vz.rpm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usually there&#039;s an error, but it&#039;s OK&lt;br /&gt;
&lt;br /&gt;
Note: replace &#039;ed-0.2-25.i386.vz.rpm&#039; in last command with actual&lt;br /&gt;
version of &#039;ed&#039; package you have.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== What template is user using ===&lt;br /&gt;
So how do I know what template the user has?  cat their conf file and it is listed in there.  For example, if the conf file has:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1103&lt;br /&gt;
…snip…&lt;br /&gt;
OSTEMPLATE=&amp;quot;debian-3.0/20030822&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;mod_perl-deb30/20030707 mod_ssl-deb30/20030703 mysql-deb30/20030707 proftpd-deb30/20030703 webmin-deb30/20030823 &amp;quot;&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then they are on debian 3.0, all of their system RPMs are in /vz/template/debian-3.0, and they are using version 20030822 of that debian 3.0 template. They’ve also got additional packages installed (mod_perl, mod_ssl, etc).  Those are also found under /vz/template&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Edits needed to run java ===&lt;br /&gt;
&lt;br /&gt;
Edits needed to run java:&lt;br /&gt;
&lt;br /&gt;
When we first created the VEs, the default setting for privvmpages was 93000:94000 ... which was high enough that most people never had problems ... however, you can&#039;t run java or jdk or tomcat or anything java related with that setting.  We have found that with privvmpages set to 610000:615000, java runs just fine.  That is now the default setting. It is exceedingly rare that anyone needs it higher than that, although we have seen it once or twice.&lt;br /&gt;
&lt;br /&gt;
Any problems with java at all - the first thing you need to do is see if the failcnt has raised for privvmpages.&lt;br /&gt;
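A hedged sketch of that first check (the beancounter excerpt below is canned sample data in the standard /proc/user_beancounters layout; on a live virt you&#039;d read the real file for the ve in question):&lt;br /&gt;

```shell
# Sketch of the first java troubleshooting step: see whether the
# privvmpages failcnt (last column) is non-zero. A canned
# /proc/user_beancounters excerpt stands in for the live file.
ubc=$(mktemp)
cat > "$ubc" <<'EOF'
            privvmpages        102341    145000    610000    615000        27
            kmemsize          2040000   2090000   2100000   2200000         0
EOF
failcnt=$(awk '$1 == "privvmpages" {print $NF}' "$ubc")
echo "privvmpages failcnt: $failcnt"
# a non-zero failcnt means the VE hit its limit - raise it, e.g.:
#   vzctl set <veid> --privvmpages 610000:615000 --save
```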
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzctl start 160&lt;br /&gt;
Starting VE ...&lt;br /&gt;
vzquota : (error) Quota on syscall for 160: Device or resource busy&lt;br /&gt;
Running vzquota on failed for VE 160 [3]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is because my pwd is _in_ their private directory - you can&#039;t start it until you move out&lt;br /&gt;
&lt;br /&gt;
People seem to have trouble with php if they are clueless newbies.  Here are two common problems/solutions:&lt;br /&gt;
&lt;br /&gt;
no... but i figured it out myself. problem was the php.ini file that came&lt;br /&gt;
vanilla with the account was not configured to work with apache (the&lt;br /&gt;
ENGINE directive was set to off).&lt;br /&gt;
&lt;br /&gt;
everything else seems fine now.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
the problem was in the php.ini file.  I noticed that it wasn&#039;t showing&lt;br /&gt;
the code when it was in an html file so I looked at the php.ini file&lt;br /&gt;
and had to change it so it recognized &amp;lt;? tags as well as &amp;lt;?php tags.&lt;br /&gt;
&lt;br /&gt;
Also, make sure this is added to httpd.conf:&lt;br /&gt;
    AddType application/x-httpd-php .php&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Set time zone ===&lt;br /&gt;
&lt;br /&gt;
You can change the time zone by doing this:&lt;br /&gt;
&lt;br /&gt;
 ln -sf /usr/share/zoneinfo/&amp;lt;zone&amp;gt; /etc/localtime&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;zone&amp;gt; is the zone you want in the /usr/share/zoneinfo/ directory.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Failing shm_open calls ===&lt;br /&gt;
&lt;br /&gt;
Failing shm_open calls:&lt;br /&gt;
&lt;br /&gt;
first, please check if /dev/shm is mounted inside VE.&lt;br /&gt;
&#039;cat /proc/mounts&#039; command should show something like this:&lt;br /&gt;
 tmpfs /dev/shm tmpfs rw 0 0&lt;br /&gt;
&lt;br /&gt;
If /dev/shm is not mounted you have two ways to solve the issue:&lt;br /&gt;
1. execute the following command inside the VE (doesn&#039;t require a VE reboot):&lt;br /&gt;
 mount -t tmpfs none /dev/shm&lt;br /&gt;
2. add the following line to /etc/fstab inside the VE and reboot it:&lt;br /&gt;
 tmpfs         /dev/shm        tmpfs           defaults        0 0&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You can have a mounted but not running ve&lt;br /&gt;
Just:&lt;br /&gt;
 vzctl mount &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Debian can&#039;t get on the network ===&lt;br /&gt;
When a debian sys can’t get on the network, and you try:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set 1046 --ipadd 69.55.227.117&lt;br /&gt;
Adding IP address(es): 69.55.227.117&lt;br /&gt;
Failed to bring up lo.&lt;br /&gt;
Failed to bring up venet0.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They probably removed iproute package, which must be the one from swsoft. To restore:&lt;br /&gt;
&amp;lt;pre&amp;gt;# dpkg -i --veid=1046 --admindir=/vz1/private/1046/root/var/lib/dpkg --instdir=/vz1/private/1046/root/ /vz/template/debian-3.0/iproute_20010824-8_i386.vz.deb&lt;br /&gt;
(Reading database ... 16007 files and directories currently installed.)&lt;br /&gt;
Preparing to replace iproute 20010824-8 (using .../iproute_20010824-8_i386.vz.deb) ...&lt;br /&gt;
Unpacking replacement iproute ...&lt;br /&gt;
Setting up iproute (20010824-8) ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then restart their ve&lt;br /&gt;
&lt;br /&gt;
=== CentOS can&#039;t get on the network ===&lt;br /&gt;
&lt;br /&gt;
When a CentOS system is up but can&#039;t get on the network, they probably updated iproute via yum.&lt;br /&gt;
&lt;br /&gt;
Copy an older version of the iproute rpm to their VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cp -p /root/iproute/iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm /vz2/private/&amp;lt;1527&amp;gt;/fs/root&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enter their ve, install rpm, restart network, and prevent yum from updating iproute.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ve &amp;lt;1527&amp;gt;&lt;br /&gt;
rpm -i --force iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm&lt;br /&gt;
rpm -q iproute&lt;br /&gt;
rpm -e &amp;lt;new iproute rpm&amp;gt;&lt;br /&gt;
service network restart&lt;br /&gt;
echo &amp;quot;exclude=iproute&amp;quot; &amp;gt;&amp;gt;/etc/yum/ip-exclude&lt;br /&gt;
echo &amp;quot;include=/etc/yum/ip-exclude&amp;quot; &amp;gt;&amp;gt;/etc/yum.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
in a ve i do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /&lt;br /&gt;
du -h .&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and get: 483M    .&lt;br /&gt;
&lt;br /&gt;
i do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;bash-2.05a# df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  2.3G  1.7G  56% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
how can this be?&lt;br /&gt;
&lt;br /&gt;
Is it possible that quota file was corrupted somehow? Please try to:   &lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl stop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota drop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota init &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzctl start &amp;lt;VEID&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
How to stop vz from starting after reboot:&lt;br /&gt;
&lt;br /&gt;
 VIRTUOZZO=no &lt;br /&gt;
in &lt;br /&gt;
 /etc/sysconfig/vz&lt;br /&gt;
&lt;br /&gt;
To start: &lt;br /&gt;
 service vz start&lt;br /&gt;
(after setting VIRTUOZZO=yes in /etc/sysconfig/vz)&lt;br /&gt;
&lt;br /&gt;
service vz restart will do some kind of &#039;soft reboot&#039; -- restart all&lt;br /&gt;
VPSes and reload modules without rebooting the node&lt;br /&gt;
&lt;br /&gt;
if you need to shut down all VPSes really really fast, run killall -9 init&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Postfix tip:&lt;br /&gt;
&lt;br /&gt;
You may want to tweak settings: default_process_limit=10&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* the uptime/load that vz users see isn’t what we see - it&#039;s their own load&lt;br /&gt;
* Vz overwrites resolv.conf on every boot to what’s in the ve config file. It also overwrites the hostname&lt;br /&gt;
* Quotas in vz: set Gid_limit in the ve config file and restart the ve. The errors leading up to this will look like running out of files, with no inode or space issues. Set it to 2000 in the conf and restart vz&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
== vm ==&lt;br /&gt;
&lt;br /&gt;
To create a new Linux VPS&lt;br /&gt;
&lt;br /&gt;
* get an IP address from mgmt screen&lt;br /&gt;
* run df -h to figure out which disk has enough space&lt;br /&gt;
* run vm with no arguments to get syntax and exact template name&lt;br /&gt;
* use col0xxxx for hostname if they don’t give you a hostname&lt;br /&gt;
* copy over dir, veid, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
 vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; )&lt;br /&gt;
&lt;br /&gt;
== disable ==&lt;br /&gt;
To disable a customer&#039;s Virtuozzo VPS server.&lt;br /&gt;
&lt;br /&gt;
* stop the server&lt;br /&gt;
  vzctl stop &amp;lt;veid&amp;gt;&lt;br /&gt;
* prevent restart on server reboot and user restart via Control Panel&lt;br /&gt;
  vzctl set &amp;lt;veid&amp;gt; --disabled=yes --offline_management=no --save&lt;br /&gt;
&lt;br /&gt;
== cancelve ==&lt;br /&gt;
When a customer cancels all service.&lt;br /&gt;
&lt;br /&gt;
 cancelve &amp;lt;veid&amp;gt;&lt;br /&gt;
this will:&lt;br /&gt;
* stop a ve&lt;br /&gt;
* check for backups (offer to remove them from the backup server &lt;br /&gt;
and the backup.config)&lt;br /&gt;
* rename the private dir&lt;br /&gt;
* check for PTR, provide the commands to reset to default&lt;br /&gt;
* rename the ve’s config&lt;br /&gt;
* remind you to remove firewall rules&lt;br /&gt;
* remind you to remove DNS entries&lt;br /&gt;
&lt;br /&gt;
== ipadd ==&lt;br /&gt;
 ipadd  &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
adds IP(s) to a ve&lt;br /&gt;
&lt;br /&gt;
== ipdel ==&lt;br /&gt;
 ipdel &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
removes IP(s) from a ve&lt;br /&gt;
&lt;br /&gt;
== vc ==&lt;br /&gt;
 vc &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
display the contents of the vz config file for the &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;cat /vzconf/&amp;lt;veid&amp;gt;.conf&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vl ==&lt;br /&gt;
 vl&lt;br /&gt;
&lt;br /&gt;
List all virtual environments running on this machine.&lt;br /&gt;
&lt;br /&gt;
displays a list of ve #’s, one per line (ostensibly for use in a for loop)&lt;br /&gt;
&lt;br /&gt;
== vp ==&lt;br /&gt;
 vp &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
show processes running under this &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzps auxww -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vpe ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vpe &amp;lt;veid&amp;gt; &lt;br /&gt;
this will allow you to do a vp when a ve is running out of control; the equivalent of (deprecated since vp operates outside the VPS): &lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set &amp;lt;veid&amp;gt; --kmemsize 2100000:2200000&lt;br /&gt;
vzctl exec &amp;lt;veid&amp;gt; ps auxw&lt;br /&gt;
vzctl set &amp;lt;veid&amp;gt; --kmemsize (ve’s orig lvalue):(ve’s orig hvalue)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vt ==&lt;br /&gt;
 vt &amp;lt;veid&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vztop -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vr ==&lt;br /&gt;
 vr &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart a &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl stop &amp;lt;veid&amp;gt;; vzctl start &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
You can run this even if the ve is down - the stop command will just fail&lt;br /&gt;
&lt;br /&gt;
== vs ==&lt;br /&gt;
 vs [veid]&lt;br /&gt;
displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve configured on the system (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;)&lt;br /&gt;
If passed an argument, gives the status for just that ve. &lt;br /&gt;
A running system looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 16066 exist mounted running&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running (but does exist) looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 9990 exist unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running and doesn’t exist looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 421 deleted unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs2 ==&lt;br /&gt;
 vs2 [veid]&lt;br /&gt;
this is similar to vs in that it displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve,&lt;br /&gt;
but the difference is its list comes from doing an ls on the data dirs. This was meant to catch&lt;br /&gt;
the rare case where a ve exists on disk but is not configured.&lt;br /&gt;
&lt;br /&gt;
== vw ==&lt;br /&gt;
 vw [veid]&lt;br /&gt;
displays the output of ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;’ (the equivalent of &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; w&amp;lt;/tt&amp;gt;) for each configured ve (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;). Useful for determining which ve is contributing to a heavily-loaded system.&lt;br /&gt;
If passed an argument, gives ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;‘ output for just that ve. &lt;br /&gt;
Ex:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt2 etc]# vw&lt;br /&gt;
134&lt;br /&gt;
 10:52pm  up 79 days,  6:14,  0 users,  load average: 0.02, 0.02, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16027&lt;br /&gt;
  2:52pm  up 7 days, 19:54,  0 users,  load average: 0.00, 0.00, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16055&lt;br /&gt;
  2:52pm  up 79 days,  6:38,  0 users,  load average: 0.00, 0.04, 0.07&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vwe ==&lt;br /&gt;
 vwe [constraint]&lt;br /&gt;
just like &amp;lt;tt&amp;gt;vw&amp;lt;/tt&amp;gt;, but takes a constraint as an argument and only shows ve’s with loads &amp;gt;= the constraint provided. If no constraint is provided, 1 is used by default&lt;br /&gt;
&lt;br /&gt;
== vzs ==&lt;br /&gt;
 vzs [veid]&lt;br /&gt;
displays the beancounter status for all ve’s, or a particular ve if an argument is passed&lt;br /&gt;
&lt;br /&gt;
== ve ==&lt;br /&gt;
 ve &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
enter a &amp;lt;veid&amp;gt; as root.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl enter &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vx ==&lt;br /&gt;
 vx &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&lt;br /&gt;
&lt;br /&gt;
execute a single command in the &amp;lt;veid&amp;gt; environment.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;/usr/sbin/vzctl exec &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs [count]&lt;br /&gt;
a script which outputs a continuous report (or a certain number of reports if an option is passed) of processes stuck in the D state and which VPS’s those procs belong to.&lt;br /&gt;
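The idea behind it can be sketched as follows (canned &amp;lt;tt&amp;gt;ps&amp;lt;/tt&amp;gt; output stands in for a live system; the vzpid step is shown as a comment since it only exists on the virts):&lt;br /&gt;

```shell
# Sketch of what dprocs reports: processes stuck in the D
# (uninterruptible sleep) state. Canned `ps -eo pid,stat,comm`
# output stands in for a live system here.
ps_out=$(mktemp)
cat > "$ps_out" <<'EOF'
  412 Ss   init
 9981 D    nfsd
12044 R    httpd
13310 D+   mysqld
EOF
# keep rows whose STAT field begins with D
awk '$2 ~ /^D/ {print $1, $3}' "$ps_out"
# on a virt, each pid would then be mapped to its VE:
#   /usr/sbin/vzpid <pid>
```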
&lt;br /&gt;
== setmem ==&lt;br /&gt;
 setmem VEID &amp;lt;256|512|768|1024|1280|1536|2048&amp;gt; [burst GB]&lt;br /&gt;
adjusts the memory resources for the VE. If this is on VZ ver &amp;gt;4, you will also be prompted to provide a burst ram amount. Sample usage:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# setmem 21671 1024 4&lt;br /&gt;
/usr/sbin/vzctl set 21671 --ram 1024m --swap 3072m --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== afacheck.sh ==&lt;br /&gt;
 afacheck.sh&lt;br /&gt;
displays the health/status of containers and mirrors on an adaptec card (currently quar1, tempvirt1-2, virt9, virt10) - all others are LSI&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update virt scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be setup as a cronjob to watch processes on a virt when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== findbackuppigs.pl ==&lt;br /&gt;
 findbackuppigs.pl&lt;br /&gt;
looks for files larger than 50MB which customers have asked us to backup. Emails matches&lt;br /&gt;
to linux@johncompanies.com&lt;br /&gt;
&lt;br /&gt;
== gatherlinux.pl ==&lt;br /&gt;
 gatherlinux.pl&lt;br /&gt;
gathers up data about ve’s configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficgather.pl ==&lt;br /&gt;
 linuxtrafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by ve to support@johncompanies.com and payments@johncompanies.com.&lt;br /&gt;
Optional arguments are year and month (must be in the past). If not passed, assumes last month. Relies on &lt;br /&gt;
traffic logs created by netstatreset and netstatbackup&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch.pl ==&lt;br /&gt;
 linuxtrafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo versions &amp;lt;= 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch2.pl ==&lt;br /&gt;
 linuxtrafficwatch2.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo version 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to the load mrtg - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== mb (linux) ==&lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== migrate ==&lt;br /&gt;
 migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt;, a util to seamlessly move a ve from one host to another. This wrapper was written because virtuozzo version 2.6 had a bug where the ve’s ip(s) on the src system were not properly removed from the arp/route tables; this script mitigates that. Since it makes multiple ssh connections to the target host, it’s a good idea to put the pub key for the src system in the authorized_keys file on the target host. In addition, it emails ve owners when their migration starts and stops (if they place email addresses in a file on their system: /migrate_notify). To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
== migrateonline ==&lt;br /&gt;
 migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is the same as migrate but will migrate a ve in &amp;lt;tt&amp;gt;--online&amp;lt;/tt&amp;gt; mode, which means it won’t be shut down at the end of the migration. This only works when migrating ve’s between 2 machines running a 2.6 kernel (currently tempvirt1-2, virt16-19, virt12). If you get an error that the machine you’re trying to migrate to has a different CPU or features, etc., then you have to edit the file and add the -f switch to the vzmigrate line - you can basically ignore this kind of warning (but never ignore a warning about missing templates on the destination node). NOTE: this edit (if made to migrateonline) will be overwritten by the base script during each night’s backup.&lt;br /&gt;
&lt;br /&gt;
== netstatbackup ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup &lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== netstatbackup2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== netstatreset ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset&lt;br /&gt;
writes traffic count data to a logfile and resets counters to 0. Works on virtuozzo versions 2.5.x &lt;br /&gt;
&lt;br /&gt;
== netstatreset2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatchlinux ==&lt;br /&gt;
 orphanedbackupwatchlinux &lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to &lt;br /&gt;
delete them&lt;br /&gt;
&lt;br /&gt;
== rsync.backup (linux) ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== startvirt.pl ==&lt;br /&gt;
 startvirt.pl&lt;br /&gt;
forks off start ve commands, keeping 6 running at a time. This is not to be used on systems where fastboot is enabled, as it circumvents the benefit of the fastboot. The script will occasionally not exit gracefully and will continue to use up CPU, so it should be watched. Also, don’t exit from the script till you’re sure all ve’s are started - if you do, you need to start them manually and may have to free up locks. On some systems, the startvirt script doesn’t exit cleanly and you have to ^C out of it. Be careful though - doing so can leave some VE’s in an odd bootup state and you may need to ‘vr’ them manually. You should check to see which ve’s aren’t running and/or confirm all have started when ^C’ing out of startvirt.&lt;br /&gt;
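&lt;br /&gt;
The "keep 6 running at a time" behavior is ordinary job throttling. A minimal sketch of the idea, with echo standing in for the real per-VE start command and made-up VEIDs (this is not the actual startvirt.pl code):&lt;br /&gt;

```shell
#!/bin/sh
# Throttled fan-out: run at most 6 "starts" at once; echo is a stand-in
# for the real per-VE start command.
veids="101 102 103 104 105 106 107 108"
out=$(printf '%s\n' $veids | xargs -P6 -I{} sh -c 'echo started {}' | sort)
echo "$out"
count=$(echo "$out" | wc -l)
```

xargs -P here only illustrates the throttling pattern; the real script manages its own forks and, as noted above, must be watched.&lt;br /&gt;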
&lt;br /&gt;
== taskdone (linux) ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was &lt;br /&gt;
executed as the subject&lt;br /&gt;
&lt;br /&gt;
== vb (linux) ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/sbin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vemakeXX ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemakerh9 &lt;br /&gt;
ve create script for RH9 (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian30 &lt;br /&gt;
ve create script for debian 3.0 (Woody) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian31 &lt;br /&gt;
ve create script for debian 3.1 (Sarge) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian40 &lt;br /&gt;
ve create script for debian 4.0 (Etch) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakefedora, vemakefedora2, vemakefedora4, vemakefedora5, vemakefedora6, vemakefedora7&lt;br /&gt;
ve create script for fedora core 1, 2, 4, 5, 6, 7 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakecentos3, vemakecentos4&lt;br /&gt;
ve create script for centos 3, 4 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakesuse, vemakesuse93, vemakesuse100&lt;br /&gt;
ve create script for suse 9.2, 9.3, 10.0 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakeubuntu5, vemakeubuntu606, vemakeubuntu610, vemakeubuntu704&lt;br /&gt;
ve create script for ubuntu 5.10, 6.06, 6.10, 7.04 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
== vemove ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemove &amp;lt;veid&amp;gt; &amp;lt;target_ip&amp;gt; &amp;lt;/vz/private/123&amp;gt;&lt;br /&gt;
this script simplifies the old way of moving ve’s from one system to another - in short moving a ve to or from a virt running virtuozzo &amp;lt; 2.6.x&lt;br /&gt;
It’s the equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;tar cfpP - &amp;lt;veid&amp;gt; --ignore-failed-read | (ssh -2 -c arcfour &amp;lt;target_ip&amp;gt; &amp;quot;split - -b 1024m &amp;lt;/vz/private/123&amp;gt;.tar&amp;quot; )&amp;lt;/tt&amp;gt;&lt;br /&gt;
This should only be used if migrate/vzmigrate can’t be used.&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog ==&lt;br /&gt;
 vim.watchdog &lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu. Works on virtuozzo versions 2.5.x&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog2 ==&lt;br /&gt;
 vim.watchdog2&lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu.&lt;br /&gt;
Works on virtuozzo versions 2.6.x.&lt;br /&gt;
&lt;br /&gt;
== vzmigrate ==&lt;br /&gt;
 vzmigrate &amp;lt;target_ip&amp;gt; -r no &amp;lt;veid&amp;gt;:[dst veid]:[dst /vzX/private/veid]:[dst /vzX/root/veid]&lt;br /&gt;
(this is the raw command “wrapped” by migrate/migrateonline) this will seamlessly move a ve from one host to another. The ve will run for the duration of the migration till the very end when it’s shut down, ip moved and started up on the target system. The filesystem on the src will remain. This should be watched – occasionally the move will timeout and leave the system shut down. If target private and root aren’t specified it just puts it in /vz. Only works when both systems are running virtuozzo 2.6.x&lt;br /&gt;
&lt;br /&gt;
== vztrafdump.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions &amp;lt;= 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== vztrafdump2.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump2.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== addtun ==&lt;br /&gt;
 addtun &amp;lt;veid&amp;gt;&lt;br /&gt;
Adds a tun device to a ve.&lt;br /&gt;
&lt;br /&gt;
== bwcap ==&lt;br /&gt;
 bwcap &amp;lt;veid&amp;gt; &amp;lt;kbps&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;bwcap 1234 512&amp;lt;/tt&amp;gt;&lt;br /&gt;
Caps a VE’s bandwidth to the amount given&lt;br /&gt;
&lt;br /&gt;
== setdisk ==&lt;br /&gt;
 setdisk &amp;lt;veid&amp;gt; &amp;lt;diskspace in GB&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;setdisk 1234 5&amp;lt;/tt&amp;gt;&lt;br /&gt;
Gives a VE a given amount of disk space&lt;br /&gt;
&lt;br /&gt;
== vdf ==&lt;br /&gt;
 vdf &amp;lt;veid&amp;gt; &lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; df -h&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vdff ==&lt;br /&gt;
 vdff&lt;br /&gt;
runs a (condensed) vdf for all ve’s in your pwd (must be run from /vz/privateN)&lt;br /&gt;
&lt;br /&gt;
== mvbackups ==&lt;br /&gt;
 mvbackups &amp;lt;veid&amp;gt; &amp;lt;target_machine&amp;gt; (virt1) &amp;lt;target_dir&amp;gt; (vz1)&lt;br /&gt;
moves backups from one location to another on the backup server, offers to remove the entries from the current backup.config, and prints a simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== checkquota ==&lt;br /&gt;
 checkquota&lt;br /&gt;
for all the ve’s in the cwd (run from /vz/private, /vz1/private, etc) reports what vz quota says they’re using and what the actual usage is (as reported by du)&lt;br /&gt;
&lt;br /&gt;
== clearquota ==&lt;br /&gt;
 clearquota &amp;lt;veid&amp;gt;&lt;br /&gt;
Recalculates a ve’s quota, prints out the usage before and after. The equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;vdf &amp;lt;veid&amp;gt;; v stop &amp;lt;veid&amp;gt;; vzquota drop &amp;lt;veid&amp;gt;; v start &amp;lt;veid&amp;gt;; vdf &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs&lt;br /&gt;
Sometimes a large number of processes on a server get stuck in the D state - this script shows (every 3 secs) which VE’s have D procs, which procs&lt;br /&gt;
are stuck and a running average of the top “offenders”&lt;br /&gt;
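&lt;br /&gt;
The underlying check is just a filter on process state. A sketch of that filter over a synthetic ps-style listing (on a live host you would feed it the output of ps with state, pid and command columns; mapping PIDs back to VE’s, as dprocs does, is omitted):&lt;br /&gt;

```shell
#!/bin/sh
# Filter a ps-style listing (STATE PID COMMAND) down to D-state procs.
# The sample input is synthetic; pipe in real ps output on a host.
sample='D 4012 nfsd
S 4100 sshd
D 4077 dd'
dlist=$(printf '%s\n' "$sample" | awk '$1 ~ /^D/ {print $2, $3}')
echo "$dlist"
```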
&lt;br /&gt;
== vzstat ==&lt;br /&gt;
 vzstat&lt;br /&gt;
sort of like top for VZ. Sort VEs by CPU usage by pressing the &#039;o&#039; and then &#039;c&#039; keys&lt;br /&gt;
&lt;br /&gt;
== stopvirt ==&lt;br /&gt;
 stopvirt&lt;br /&gt;
will stop VEs as fast as it can, 6 at a time. May not exit when complete so you should watch [[#vzstat|vzstat]] in another window.&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2227</id>
		<title>VPS Management</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2227"/>
		<updated>2020-04-30T20:23:19Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Ganeti Install a VPS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Common Problems =&lt;br /&gt;
== Login to any machine without a password ==&lt;br /&gt;
&lt;br /&gt;
This is possible via the use of ssh keys. The process is thus:&lt;br /&gt;
&lt;br /&gt;
1. place the public key for your user (root@mail) in the /root/.ssh/authorized_keys file on the server you wish to login to&lt;br /&gt;
 cat /root/.ssh/id_dsa.pub&lt;br /&gt;
(paste that into authorized_keys on the target server). If the file doesn&#039;t exist, create it.&lt;br /&gt;
&lt;br /&gt;
2. enable root login (usually only applies to FreeBSD). Edit the /etc/ssh/sshd_config on the target server and change:&lt;br /&gt;
&amp;lt;tt&amp;gt;#PermitRootLogin no&amp;lt;/tt&amp;gt;&lt;br /&gt;
to&lt;br /&gt;
&amp;lt;tt&amp;gt;PermitRootLogin yes&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Restart the sshd on the target machine. First, find the sshd process: &lt;br /&gt;
 jailps &amp;lt;hostname&amp;gt; | grep sshd &lt;br /&gt;
or &lt;br /&gt;
 vp &amp;lt;VEID&amp;gt; | grep sshd&lt;br /&gt;
&lt;br /&gt;
Look for the process resembling:&lt;br /&gt;
 root     17296  0.0  0.0  5280 1036 ?        Ss    2011   4:27 /usr/sbin/sshd &lt;br /&gt;
(this is the sshd)&lt;br /&gt;
&lt;br /&gt;
Not:&lt;br /&gt;
 root      6270  0.5  0.0  6808 2536 ?        Ss   14:33   0:00 sshd: root [priv]&lt;br /&gt;
(this is an sshd child- someone already ssh&#039;d in as root)&lt;br /&gt;
&lt;br /&gt;
Restart the sshd: &lt;br /&gt;
 kill -1 &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex:&lt;br /&gt;
 kill -1 17296&lt;br /&gt;
&lt;br /&gt;
You may now ssh in.&lt;br /&gt;
&lt;br /&gt;
Once you&#039;re done, IF you enabled root login, you should repeat steps 2 and 3 to disable root logins.&lt;br /&gt;
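&lt;br /&gt;
Step 1 is just appending a line to a file, plus the permissions sshd insists on. A sandboxed sketch (dummy key and a temp dir in place of the real /root/.ssh):&lt;br /&gt;

```shell
#!/bin/sh
# Append a public key to authorized_keys with sshd-acceptable perms.
# Uses a temp dir and a dummy key so nothing real is touched.
home=$(mktemp -d)
mkdir -p "$home/.ssh"
chmod 700 "$home/.ssh"
echo "ssh-dss AAAAB3dummykey root@mail" >> "$home/.ssh/authorized_keys"
chmod 600 "$home/.ssh/authorized_keys"
ls -l "$home/.ssh/authorized_keys"
```

The chmods matter: with StrictModes (the sshd default), group/world-writable .ssh or authorized_keys are silently ignored.&lt;br /&gt;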
&lt;br /&gt;
== Letting someone in who has locked themselves out (killed sshd, lost pwd) ==&lt;br /&gt;
&lt;br /&gt;
There are two ways people frequently lock themselves out - either they forget a password, or they kill off sshd somehow.&lt;br /&gt;
&lt;br /&gt;
These are actually both fairly easy to solve.  First, let&#039;s say someone kills off their sshd, or somehow mangles /etc/ssh/sshd_config such that it no longer lets them in.&lt;br /&gt;
&lt;br /&gt;
Their email may be very short, or it may have all sorts of details about how you should fix sshd_config to let them in ... just ignore all of this. They can fix their own mangled sshd.  Fixing this is very simple.  First, edit the /etc/inetd.conf on their system and uncomment the telnet line:&lt;br /&gt;
&lt;br /&gt;
 telnet stream  tcp     nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
 #telnet stream  tcp6    nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
&lt;br /&gt;
(just leave the tcp6 version of telnet commented)&lt;br /&gt;
&lt;br /&gt;
Then, use jailps to list the processes on their system, and find their inetd process.  Then simply:&lt;br /&gt;
&lt;br /&gt;
 kill -HUP (pid)&lt;br /&gt;
&lt;br /&gt;
where (pid) is the PID of their inetd process.  Now they have telnet running on their system and they can log in and do whatever they need to do.&lt;br /&gt;
&lt;br /&gt;
The only complications that could occur are:&lt;br /&gt;
&lt;br /&gt;
a) their firewall config on our firewall has port 23 blocked, in which case you will need to open that - will be covered in a different lesson.&lt;br /&gt;
&lt;br /&gt;
b) they are not running inetd, so you can&#039;t HUP it.  If this happens, edit their /etc/rc.conf, add the inetd_enable=&amp;quot;YES&amp;quot; line, and then kill&lt;br /&gt;
their jail with /tmp/jailkill.pl - then restart their jail with the jail line from their quad/safe file.  Easy.&lt;br /&gt;
&lt;br /&gt;
If they have forgotten a password,&lt;br /&gt;
&lt;br /&gt;
On 6.x+ you can reset their password with:&lt;br /&gt;
 jexec &amp;lt;jailID from jls&amp;gt; passwd root&lt;br /&gt;
&lt;br /&gt;
Note: the default password for 6.x jails is 8ico2987, for 4.x it is p455agfa&lt;br /&gt;
&lt;br /&gt;
On 4.x, you need to cd to their etc directory&lt;br /&gt;
... for instance:&lt;br /&gt;
&lt;br /&gt;
 cd /mnt/data2/198.78.65.136-col00261-DIR/etc&lt;br /&gt;
&lt;br /&gt;
and run:&lt;br /&gt;
&lt;br /&gt;
 vipw -d .&lt;br /&gt;
&lt;br /&gt;
Then paste in these two lines (there&#039;s a paste with these):&lt;br /&gt;
&lt;br /&gt;
 root:$1$krszPxhk$xkCepSnz3mIikT3vCtJCt0:0:0::0:0:Charlie &amp;amp;:/root:/bin/csh&lt;br /&gt;
 user:$1$Mx9p5Npk$QdMU6c8YQqp2FW2M3irEh/:1001:1001::0:0:User &amp;amp;:/home/user:/bin/sh&lt;br /&gt;
&lt;br /&gt;
overwriting the lines they already have for &amp;quot;user&amp;quot; and &amp;quot;root&amp;quot; - then just tell them that both user and root have been reset to the default password of p455agfa.&lt;br /&gt;
&lt;br /&gt;
For linux, just passwd inside shell or &lt;br /&gt;
 vzctl set &amp;lt;veid&amp;gt; --userpasswd root:p455agfa --save&lt;br /&gt;
&lt;br /&gt;
Starting in 2009 we began giving out randomized passwords as the default password for FreeBSD and Linux. The password is stored with each system in Mgmt. In the event of a reset, look up that stored password, reset the account to it, and refer the customer to their original welcome email - this way we don’t have to send the password again via email (in clear text).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== sendmail can’t be contacted from ext ip (only locally) ==&lt;br /&gt;
&lt;br /&gt;
By default redhat puts this line in sendmail.mc:&lt;br /&gt;
&lt;br /&gt;
 DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
which makes it only answer on localhost.  Comment it out like:&lt;br /&gt;
&lt;br /&gt;
 dnl DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
and then rebuild sendmail.cf with:&lt;br /&gt;
&lt;br /&gt;
 m4 /etc/mail/sendmail.mc &amp;gt; /etc/sendmail.cf&lt;br /&gt;
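&lt;br /&gt;
The comment-out step can be done non-interactively; a sketch against a scratch copy (the sed pattern is an assumption - eyeball the result before rebuilding sendmail.cf):&lt;br /&gt;

```shell
#!/bin/sh
# Prefix the loopback-only DAEMON_OPTIONS line with "dnl ", m4's
# comment marker, working on a scratch copy rather than /etc/mail.
mc=$(mktemp)
printf '%s\n' "DAEMON_OPTIONS(\`Port=smtp,Addr=127.0.0.1, Name=MTA')" > "$mc"
sed 's/^DAEMON_OPTIONS(/dnl DAEMON_OPTIONS(/' "$mc" > "$mc.new"
cat "$mc.new"
```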
&lt;br /&gt;
== virt doesn’t properly let go of ve’s ip(s) when moved to another system ==&lt;br /&gt;
&lt;br /&gt;
On virtuozzo 2.6 systems, it&#039;s been observed that when moving ips from one virt to another that sometimes the routing table will not get updated to reflect the removal of the ip addresses.&lt;br /&gt;
&lt;br /&gt;
A recent example was a customer that was moving to a new ve on a new virt and the ip addresses were traded between the two ve&#039;s.  After the trade the two systems were not able to talk to each other.  When looking at the routing table for the old system all the ip addresses were still in the routing table as being local, like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;netstat -rn | grep 69.55.225.149&lt;br /&gt;
69.55.225.149   0.0.0.0         255.255.255.255 UH       40 0          0 venet0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was preventing traffic to the other system from being routed properly.&lt;br /&gt;
The solution is to manually delete the route:&lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.225.149 gw 0.0.0.0&lt;br /&gt;
&lt;br /&gt;
Supposedly, this was fixed in 2.6.1&lt;br /&gt;
&lt;br /&gt;
== sshd on FreeBSD 6.2 segfaults ==&lt;br /&gt;
&lt;br /&gt;
First try to reinstall ssh&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/secure&lt;br /&gt;
cd lib/libssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.sbin/sshd&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.bin/ssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Failing that, find the library that’s messed up:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;ldd /usr/sbin/sshd&lt;br /&gt;
         libssh.so.3 =&amp;gt; /usr/lib/libssh.so.3 (0x280a3000) &lt;br /&gt;
         libutil.so.5 =&amp;gt; /lib/libutil.so.5 (0x280d8000) &lt;br /&gt;
         libz.so.3 =&amp;gt; /lib/libz.so.3 (0x280e4000) &lt;br /&gt;
         libwrap.so.4 =&amp;gt; /usr/lib/libwrap.so.4 (0x280f5000) &lt;br /&gt;
         libpam.so.3 =&amp;gt; /usr/lib/libpam.so.3 (0x280fc000) &lt;br /&gt;
         libbsm.so.1 =&amp;gt; /usr/lib/libbsm.so.1 (0x28103000) &lt;br /&gt;
         libgssapi.so.8 =&amp;gt; /usr/lib/libgssapi.so.8 (0x28112000) &lt;br /&gt;
         libkrb5.so.8 =&amp;gt; /usr/lib/libkrb5.so.8 (0x28120000) &lt;br /&gt;
         libasn1.so.8 =&amp;gt; /usr/lib/libasn1.so.8 (0x28154000) &lt;br /&gt;
         libcom_err.so.3 =&amp;gt; /usr/lib/libcom_err.so.3 (0x28175000) &lt;br /&gt;
         libroken.so.8 =&amp;gt; /usr/lib/libroken.so.8 (0x28177000) &lt;br /&gt;
         libcrypto.so.4 =&amp;gt; /lib/libcrypto.so.4 (0x28183000) &lt;br /&gt;
         libcrypt.so.3 =&amp;gt; /lib/libcrypt.so.3 (0x28276000) &lt;br /&gt;
         libc.so.6 =&amp;gt; /lib/libc.so.6 (0x2828e000) &lt;br /&gt;
         libmd.so.3 =&amp;gt; /lib/libmd.so.3 (0x28373000)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
md5 them and compare to other jail hosts or jails running on the host&lt;br /&gt;
&lt;br /&gt;
for libcrypto reinstall:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/crypto&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Moving a server to a different account in the management system ==&lt;br /&gt;
&lt;br /&gt;
This is a tough one. You have to touch a handful of tables in the mysql database.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  systems&lt;br /&gt;
  subs&lt;br /&gt;
  ipmap&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
basically, get the sysid and go off that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
update systems set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update subs set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update ipmap set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You then need to rename the server disk space to change the old ID to the new one.&lt;br /&gt;
&lt;br /&gt;
For a jail -&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stopjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;     69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;-DIR 69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;-DIR&lt;br /&gt;
vi /usr/local/jail/bin/quad1       [Update the ID in the quad file]&lt;br /&gt;
startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
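&lt;br /&gt;
The two mv lines are a plain ID substitution; a sandboxed illustration with a dummy IP and dummy colo IDs:&lt;br /&gt;

```shell
#!/bin/sh
# Rename both jail directories from the old colo ID to the new one.
# Dummy IP/IDs in a temp dir; the real dirs live on the jail host.
base=$(mktemp -d)
old=col01234
new=col05678
mkdir "$base/69.55.200.10-$old" "$base/69.55.200.10-$old-DIR"
for d in "$base/69.55.200.10-$old" "$base/69.55.200.10-$old-DIR"; do
    mv "$d" "$(printf '%s' "$d" | sed "s/$old/$new/")"
done
ls "$base"
```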
&lt;br /&gt;
For a virt -&lt;br /&gt;
You will have to &lt;br /&gt;
* stop the virt &lt;br /&gt;
* rename the directories &lt;br /&gt;
* modify the config file /vzconf/&amp;lt;veid&amp;gt;.conf &lt;br /&gt;
* rename the virt conf file &lt;br /&gt;
* restart the new virt.&lt;br /&gt;
&lt;br /&gt;
That ought to do it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Re-enable a customer account in the management system ==&lt;br /&gt;
&lt;br /&gt;
For example to re-instate col03438, In mysql:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
show tables ;&lt;br /&gt;
select * from customers where cid=&#039;col03438&#039; ;&lt;br /&gt;
update customers set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from  systems where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set cancel_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from subs where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ganeti VPS =&lt;br /&gt;
&lt;br /&gt;
This is how to create a VPS on the Lamphost ganeti cluster.&lt;br /&gt;
I have created scripts for creating a FreeBSD VPS and one for&lt;br /&gt;
creating a Linux VPS.&lt;br /&gt;
&lt;br /&gt;
== Ganeti Install a Linux VPS ==&lt;br /&gt;
&lt;br /&gt;
To create a linux VPS use&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/new &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
( for example /root/new vps.example.com 30 2 69.55.238.10 centos-8 )&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To create a FreeBSD VPS, we have to do a full install.  This will create a &lt;br /&gt;
new VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/newfreebsd12 &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will then have to do the install via VNC.&lt;br /&gt;
&lt;br /&gt;
You need to customize the install by adding a &amp;quot;jci&amp;quot; account and a customer&lt;br /&gt;
account (col0XXXX) as well.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
You will need to install sudo.&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
pkg install sudo    # install sudo package&lt;br /&gt;
visudo              # to add wheel account to sudoers&lt;br /&gt;
vi /etc/group       # add jci and user acct to wheel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You will also have to set up a console in /etc/ttys.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vi /etc/ttys&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Ganeti restart a VPS ==&lt;br /&gt;
&lt;br /&gt;
To restart a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance reboot &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Ganeti shutdown a VPS ==&lt;br /&gt;
&lt;br /&gt;
To shutdown a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance shutdown &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To permanently delete a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance remove &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Ganeti Re-Install a VPS ==&lt;br /&gt;
&lt;br /&gt;
Re-installing the OS will destroy all data on the VPS.&lt;br /&gt;
Login to the Ganeti master node (currently gn1).&lt;br /&gt;
&lt;br /&gt;
To reinstall the OS on a VPS, the VPS must be shut down.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance shutdown &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Do the reinstall.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance reinstall &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance startup &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to do the customization for the VPS.  This is&lt;br /&gt;
basically the same as the Dedicated server customization.&lt;br /&gt;
&lt;br /&gt;
 https://69.55.229.8/index.php/New_Signups#Dedicated_Server_Install_Checklist&lt;br /&gt;
&lt;br /&gt;
= FreeBSD 10.1 VPS (jail5 and jail6) =&lt;br /&gt;
&lt;br /&gt;
This is information on how to manage the FreeBSD 10.1 VPS servers on jail5 and jail6.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 virtmake IP[,IP] &amp;lt;CID&amp;gt; hostname ipfw# email {FSP|FB|FS|FM|FP} [size in GB]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
For now, the last 2 digits of the CID must be unique.&lt;br /&gt;
&lt;br /&gt;
Must do a &amp;quot;vm start &amp;lt;CID&amp;gt;&amp;quot; after creating a VPS.&lt;br /&gt;
&lt;br /&gt;
== Starting VPS ==&lt;br /&gt;
To start all vms on the system&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To start a single VPS &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== List running VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm status&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Attach to a user VPS console ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm attach &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To detach from a console, type tilde control-d (~ ctl-d).&lt;br /&gt;
&lt;br /&gt;
== Stop a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Restart a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm restart &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Modify a VPS ==&lt;br /&gt;
&lt;br /&gt;
To modify a VPS config, stop the VPS&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Mount the VPS data file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm mount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
cd /data/vm/private/&amp;lt;CID&amp;gt;/mount&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Modify the user files as necessary.&lt;br /&gt;
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm umount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Notes on a new FBSD 10.x VPS (jail6) ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#! /bin/sh&lt;br /&gt;
#&lt;br /&gt;
#  new  &amp;lt;colo#&amp;gt;  &amp;lt;disk&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
memory=1024&lt;br /&gt;
&lt;br /&gt;
x=&amp;quot;$1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
disk=&amp;quot;$2&amp;quot;&lt;br /&gt;
if [ &amp;quot;y&amp;quot; = &amp;quot;y$disk&amp;quot; ] ; then&lt;br /&gt;
    disk=60&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
disk2=`expr $disk - 4`&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vm image provision 04dcdf96-d843-11e6-b131-001e67fcc64f col0$x&lt;br /&gt;
# vm configure col0$x&lt;br /&gt;
sed -e &amp;quot;s/memory=256M/memory=${memory}M/&amp;quot; /vms/col01114/col01114.conf &amp;gt;/vms/col0$x/col0$x.conf&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
# vm start col0$x&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
exit&lt;br /&gt;
 &lt;br /&gt;
vm console col0$x&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
# to increase the customer disk&lt;br /&gt;
#&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
&lt;br /&gt;
##  on the customer VPS&lt;br /&gt;
&lt;br /&gt;
gpart recover vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapoff /dev/vtbd0p3&lt;br /&gt;
gpart delete -i 3 vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
sysctl kern.geom.debugflags=16&lt;br /&gt;
gpart resize -i 2 -a 4k -s $disk2\G vtbd0&lt;br /&gt;
&lt;br /&gt;
growfs /dev/vtbd0p2&lt;br /&gt;
&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
gpart add -t freebsd-swap -a 4k vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapon /dev/vtbd0p3&lt;br /&gt;
&lt;br /&gt;
df -h&lt;br /&gt;
reboot&lt;br /&gt;
&lt;br /&gt;
vi /etc/rc.conf&lt;br /&gt;
  hostname&lt;br /&gt;
  IP Addr&lt;br /&gt;
passwd root&lt;br /&gt;
passwd user&lt;br /&gt;
clear history&lt;br /&gt;
  cp /dev/null /var/log/utx.log&lt;br /&gt;
  cp /dev/null ~root/.history&lt;br /&gt;
  history -c&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
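&lt;br /&gt;
One detail worth calling out from the notes above: the disk2 computation sizes the grown data partition 4 GB under the disk total, so the swap partition can be re-added at the end. The arithmetic in isolation:&lt;br /&gt;

```shell
#!/bin/sh
# disk2 is the size passed to "gpart resize -s": the requested disk
# size minus the 4 GB that gets re-added as freebsd-swap afterwards.
disk=60
disk2=$(expr "$disk" - 4)
echo "resize to ${disk2}G, then add 4G swap"
```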
&lt;br /&gt;
= FreeBSD Jails =&lt;br /&gt;
&lt;br /&gt;
== Starting jails: Quad/Safe Files ==&lt;br /&gt;
&lt;br /&gt;
FreeBSD customer systems do not start up automatically at boot time.  When one of our freebsd machines boots up, it does nothing else. To start jails, we put the commands to start each jail into shell script(s) and run the script(s). Jail startup is something that needs to be actively monitored, which is why we don’t just run the script automatically. More on monitoring later.&lt;br /&gt;
&lt;br /&gt;
NOTE: &amp;gt;=7.x we have moved to 1 quad file: &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. Startups are not done by running each quad, but rather [[#startalljails|startalljails]] which relies on the contents of &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. The specifics of this are lower in this article. What follows here applies for pre 7.x systems.&lt;br /&gt;
&lt;br /&gt;
There are eight files in &amp;lt;tt&amp;gt;/usr/local/jail/rc.d&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail3# ls /usr/local/jail/rc.d/&lt;br /&gt;
quad1   quad2   quad3   quad4   safe1   safe2   safe3   safe4&lt;br /&gt;
jail3#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That is, four quad files and four safe files.&lt;br /&gt;
&lt;br /&gt;
Each file contains an equal share of the system startup blocks (the total number of jails divided by four).&lt;br /&gt;
 &lt;br /&gt;
The reason for this split: if we put all the startup commands into one large script, it would take too long - the first system in the script would start right after boot, which is great, but the last might not start for another 20 minutes.&lt;br /&gt;
&lt;br /&gt;
Since there is no way to parallelize this during the startup procedure, we simply open four terminals (in screen window 9) and run each script, one in each terminal. This way they all run simultaneously, and the very last system in each startup script gets started in a quarter of the time it would take with one large file.&lt;br /&gt;
&lt;br /&gt;
The files are generally organized so that quad/safe 1&amp;amp;2 have only jails from disk 1, and quad/safe 3&amp;amp;4 have jails from disk 2. This helps ensure that only 2 fscks on any disk are going on at once. Further, they are balanced so that all quad/safe’s finish executing around the same time. We do this by making sure each quad/safe has a similar number of jails  and represents a similar number of inodes (see js).&lt;br /&gt;
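As a quick, purely illustrative sanity check of that balance (&amp;lt;tt&amp;gt;js&amp;lt;/tt&amp;gt; is the real balancing tool; the file contents below are dummies), you can count startup blocks per quad file, since each 4.x block begins with a vnconfig line:&lt;br /&gt;

```shell
# Hypothetical balance check (the real tool is js; file contents are
# dummies): count startup blocks per quad file. In 4.x quads each
# jail's block begins with a vnconfig line.
d=$(mktemp -d)
printf 'vnconfig /dev/vn1 /f1\nfsck -y /dev/vn1\nvnconfig /dev/vn2 /f2\n' > "$d/quad1"
printf 'vnconfig /dev/vn3 /f3\n' > "$d/quad2"
for q in "$d"/quad*; do
    printf '%s: %s blocks\n' "${q##*/}" "$(grep -c '^vnconfig' "$q")"
done
```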
&lt;br /&gt;
The other, very important reason we do it this way, and this is the reason there are quad files and safe files, is that in the event of a system crash, every single vn-backed filesystem that was mounted at the time of system crash needs to be fsck&#039;d.  However, fsck&#039;ing takes time, so if we shut the system down gracefully, we don&#039;t want to fsck.&lt;br /&gt;
&lt;br /&gt;
Therefore, we have two sets of scripts - the four quad scripts are identical to the four safe scripts except for the fact that the quad scripts contain fsck commands for each filesystem.&lt;br /&gt;
&lt;br /&gt;
So, if you shut a system down gracefully, start four terminals and run safe1 in window one, and safe2 in window 2, and so on.&lt;br /&gt;
 &lt;br /&gt;
If you crash, start four terminals (or go to screen window 9) and run quad1 in window one, and quad2 in window 2, and so on.&lt;br /&gt;
&lt;br /&gt;
Here is a snip of (a 4.x version) quad2 from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
fsck -y /dev/vn16&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#fsck -y /dev/vn28&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
fsck -y /dev/vn22&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#fsck -y /dev/vn15&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, two of the systems specified are commented out - presumably those customers cancelled, or were moved to new servers.&lt;br /&gt;
&lt;br /&gt;
Note that the vnconfig line is the simpler command line, not the longer one that was used when the filesystem was first configured.  All that is done is: vnconfig the filesystem, then fsck it, then mount it. The final command is the `jail` command used to start the system – but that will be covered later.&lt;br /&gt;
&lt;br /&gt;
Here is the safe2 file from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, it is exactly the same, but it does not have the fsck lines.&lt;br /&gt;
&lt;br /&gt;
Take a look at the last entry - note that the file is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
&lt;br /&gt;
and the mount point is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
&lt;br /&gt;
This is the general format on all the FreeBSD systems.  The file is always named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber&lt;br /&gt;
&lt;br /&gt;
and the directory is named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber-DIR&lt;br /&gt;
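As a hypothetical illustration of this convention (the helper name is invented, not an existing tool), deriving the mount directory from the vn-file path is just a matter of appending -DIR:&lt;br /&gt;

```shell
# Hypothetical helper: given a vn-file path of the form
# /mnt/dataN/IP-custnumber, derive the matching mount directory,
# which by convention is the same path with -DIR appended.
vnfile_to_dir() {
    printf '%s-DIR\n' "$1"
}

vnfile_to_dir /mnt/data2/69.55.238.5-col00106
# → /mnt/data2/69.55.238.5-col00106-DIR
```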
&lt;br /&gt;
If you run safe when you need a fsck, the mount will fail and jail will fail:&lt;br /&gt;
&lt;br /&gt;
 # mount /dev/vn1c /mnt/data2/jails/65.248.2.131-ns1.kozubik.com-DIR&lt;br /&gt;
 mount: /dev/vn1c: Operation not permitted&lt;br /&gt;
&lt;br /&gt;
No reboot is needed - just run the quad script.&lt;br /&gt;
&lt;br /&gt;
Starting with 6.x jails, we added block delimiters to the quad/safe files; a block looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: nuie.solaris.mu&#039;&lt;br /&gt;
fsck -y /dev/concat/v30v31a&lt;br /&gt;
mount /dev/concat/v30v31a /mnt/data1/69.55.228.218-col01441-DIR&lt;br /&gt;
mount_devfs devfs /mnt/data1/69.55.228.218-col01441-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.228.218-col01441-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.228.218-col01441-DIR nuie.solaris.mu 69.55.228.218 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: nuie.solaris.mu&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These are more than just informative when running quad/safe’s: the echo lines MUST be present for certain tools to work properly. So it’s important that any update to the hostname also be made on the 2 echo lines. For example, if you try to startjail a jail whose hostname is on the jail line but not the echo lines, the command will return with host not found.&lt;br /&gt;
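To make the dependency concrete, here is a sketch, with an invented quad fragment, of how a tool could select a jail’s block by hostname using those delimiters (the real startjail may work differently):&lt;br /&gt;

```shell
# Illustrative only (the real startjail may differ): tooling finds a
# jail's start block via the '## begin ##'/'## end ##' echo lines,
# which is why the hostname on them must match the jail line.
q=$(mktemp)
cat > "$q" <<'EOF'
echo '## begin ##: nuie.solaris.mu'
fsck -y /dev/concat/v30v31a
mount /dev/concat/v30v31a /mnt/data1/69.55.228.218-col01441-DIR
jail /mnt/data1/69.55.228.218-col01441-DIR nuie.solaris.mu 69.55.228.218 /bin/sh /etc/rc
echo '## end ##: nuie.solaris.mu'
EOF
host=nuie.solaris.mu
# select the block for $host; stale echo lines would mean no match
block=$(sed -n "/## begin ##: $host/,/## end ##: $host/p" "$q")
printf '%s\n' "$block"
rm -f "$q"
```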
&lt;br /&gt;
=== FreeBSD 7.x+ notes ===&lt;br /&gt;
&lt;br /&gt;
Starting with the release of FreeBSD 7.x, we are doing jail startups in a slightly different way. First, there is only 1 file: &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/quad1&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are no other quads or corresponding safe files. The reason for this is twofold: 1. we can pass -C to fsck, which tells it to skip the fsck if the filesystem is clean (no more need for safe files); 2. we have a new startup script which can be launched multiple times, running in parallel to start jails, with quad1 as the master jail file. &lt;br /&gt;
Quad1 could still be run as a shell script, but it would take a very long time to run completely, so that&#039;s not advisable; alternatively, break it into smaller chunks (like quad1, quad2, quad3, etc.)&lt;br /&gt;
&lt;br /&gt;
Here is a snip of (a 7.x version) quad1 from jail2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: projects.tw.com&#039;&lt;br /&gt;
mdconfig -a -t vnode -f /mnt/data1/69.55.230.46-col01213 -u 50&lt;br /&gt;
fsck -Cy /dev/md50c&lt;br /&gt;
mount /dev/md50c /mnt/data1/69.55.230.46-col01213-DIR&lt;br /&gt;
mount -t devfs devfs /mnt/data1/69.55.230.46-col01213-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.230.46-col01213-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.230.46-col01213-DIR projects.tw.com 69.55.230.46 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: projects.tw.com&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cancelled jails are no longer commented out and stored in quad1; rather, they’re moved to &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/deprecated&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
To start these jails, start the 4 ssh sessions as you would for a normal crash, and then instead of running quad1-4, run startalljails in each window. IMPORTANT: before running startalljails you should make sure you ran preboot once, as it will clear out all the lockfiles and enable startalljails to work properly.&lt;br /&gt;
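The lockfile idea behind startalljails can be sketched roughly as follows (names, lock layout, and file contents here are invented; the real script differs): several copies run at once, and an atomic mkdir ensures each jail is claimed by exactly one runner:&lt;br /&gt;

```shell
# Invented sketch of the startalljails/lockfile mechanism (not the real
# script): each runner claims unstarted jails from quad1 via an atomic
# mkdir per jail, so several runners can work in parallel safely.
quad=$(mktemp); lockdir=$(mktemp -d)
cat > "$quad" <<'EOF'
echo '## begin ##: a.example.com'
echo '## end ##: a.example.com'
echo '## begin ##: b.example.com'
echo '## end ##: b.example.com'
EOF
started=""
for h in $(sed -n "s/^echo '## begin ##: \(.*\)'$/\1/p" "$quad"); do
    if mkdir "$lockdir/$h" 2>/dev/null; then   # atomic claim
        started="$started $h"   # the real script would run the block here
    fi
done
echo "started:$started"
rm -rf "$quad" "$lockdir"
```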
&lt;br /&gt;
== Problems with the quad/safe files ==&lt;br /&gt;
&lt;br /&gt;
When you run the quad/safe files, there are two problems that can occur - either a particular system will hang during initialization, OR a system will spit out output to the screen, impeding your ability to do anything.  Or both.&lt;br /&gt;
&lt;br /&gt;
First off, when you start a jail, you see output like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;Skipping disk checks ...&lt;br /&gt;
adjkerntz[25285]: sysctl(put_wallclock): Operation not permitted&lt;br /&gt;
Doing initial network setup:.&lt;br /&gt;
ifconfig: ioctl (SIOCDIFADDR): permission denied&lt;br /&gt;
lo0: flags=8049&amp;lt;UP,LOOPBACK,RUNNING,MULTICAST&amp;gt; mtu 16384&lt;br /&gt;
Additional routing options: TCP keepalive=YESsysctl:&lt;br /&gt;
net.inet.tcp.always_keepalive: Operation not permitted.&lt;br /&gt;
Routing daemons:.&lt;br /&gt;
Additional daemons: syslogd.&lt;br /&gt;
Doing additional network setup:.&lt;br /&gt;
Starting final network daemons:.&lt;br /&gt;
ELF ldconfig path: /usr/lib /usr/lib/compat /usr/X11R6/lib /usr/local/lib&lt;br /&gt;
a.out ldconfig path: /usr/lib/aout /usr/lib/compat/aout /usr/X11R6/lib/aout&lt;br /&gt;
Starting standard daemons: inetd cron sshd sendmail sendmail-clientmqueue.&lt;br /&gt;
Initial rc.i386 initialization:.&lt;br /&gt;
Configuring syscons: blanktime.&lt;br /&gt;
Additional ABI support:.&lt;br /&gt;
Local package initialization:.&lt;br /&gt;
Additional TCP options:.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, let&#039;s look at this line, near the end:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization:.&lt;br /&gt;
&lt;br /&gt;
This is where a list of daemons that are set to start at boot time will show up.  You might see something like:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache sendmail sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
Or something like this:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: postgres postfix apache&lt;br /&gt;
&lt;br /&gt;
The problem is that many systems (about 4-5 per machine) will hang on that line.  Basically it will get partway through the list of daemons to be started:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache&lt;br /&gt;
&lt;br /&gt;
and will just sit there.  Forever.&lt;br /&gt;
&lt;br /&gt;
Fortunately, pressing ctrl-c will break out of it.  Not only will it break out of it, but it will also continue on that same line and start the other daemons:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache ^c sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
and then continue on to finish the startup, and then move to the next system to be started.&lt;br /&gt;
&lt;br /&gt;
So what does this mean?  It means that if a machine crashes, and you start four screen-windows to run four quads or four safes, you need to periodically cycle between them and see if any systems are stuck at that point, causing their quad/safe file to hang.  A good rule of thumb is: if you see a system at that point in the startup, give it another 100 seconds - if it is still at the exact same spot, hit ctrl-c. It’s also a good idea to go back into the quad file (just before the first command in the jail startup block) and note that this jail tends to need a ctrl-c or more time, as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;### NOTE ### slow sendmail&#039;&lt;br /&gt;
echo &#039;### NOTE ###: ^C @ Starting sendmail.&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NEVER&#039;&#039;&#039; hit ctrl-c repeatedly if you don&#039;t get an immediate response - that will cause the following jail’s startup commands to be aborted.&lt;br /&gt;
&lt;br /&gt;
A second problem that can occur is that a jail - maybe the first one in that particular quad/safe, maybe the last one, or maybe one in the middle, will start spitting out status or error messages from one of its init scripts.  This is not a problem - basically, hit enter a few times and see if you get a prompt - if you do get a prompt, that means that the quad/safe script has already completed.  Therefore it is safe to log out (and log out of the user that you su&#039;d from) and then log back in (if necessary).&lt;br /&gt;
&lt;br /&gt;
The tricky case is when a system in the middle starts flooding with messages, and you hit enter a few times and don&#039;t get a prompt.  Are you not getting a prompt because some subsequent system is hanging at initialization, as discussed above?  Or because that quad file is currently running an fsck?  Usually you can tell by scrolling back in screen’s history to see what it was doing before you started getting the messages.&lt;br /&gt;
&lt;br /&gt;
If you don’t get clues from history, you have to use your judgement - instead of giving it 100 seconds to respond, perhaps give it 2-3 mins ... if you still get no response (no prompt) when you hit enter, hit ctrl-c.  However, be aware that you might still be hitting ctrl-c in the middle of an fsck.  This means you will get an error like &amp;quot;filesystem still marked dirty&amp;quot;, the vnconfig for it will fail, and so will the jail command; the next system in the quad file will then start up.&lt;br /&gt;
&lt;br /&gt;
If this happens, just wait until all the quad files have finished, and start that system manually.&lt;br /&gt;
&lt;br /&gt;
If things really get weird, like a screen flooded with errors, and you can&#039;t get a prompt, and ctrl-c does nothing, then you need to eventually (give it ten mins or so) just kill that window with ctrl-p, then k, and then log in again and manually check which systems are now running and which aren&#039;t, and manually start up any that are not.&lt;br /&gt;
&lt;br /&gt;
Don&#039;t EVER risk running a particular quad/safe file a second time.&lt;br /&gt;
If the quad/safe script gets executed twice, reboot the machine immediately.&lt;br /&gt;
&lt;br /&gt;
So, for all the above reasons, anytime a machine crashes and you run all the quads or all the safes, &#039;&#039;&#039;always&#039;&#039;&#039; check every jail afterwards to make sure it is running - even if you have no hangs or complications at all.&lt;br /&gt;
Run this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailpsall|jailpsall]]&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Note: [[#postboot|postboot]] also populates ipfw counts, so it &#039;&#039;&#039;should not be run multiple times&#039;&#039;&#039;,  use &amp;lt;tt&amp;gt;jailpsall&amp;lt;/tt&amp;gt; for subsequent extensive ps’ing&lt;br /&gt;
&lt;br /&gt;
And make sure they all show as running.  If one does not show as running, check its /etc/rc.conf file to see if maybe it is using a different hostname before starting it manually.&lt;br /&gt;
&lt;br /&gt;
One thing we have implemented to alleviate these startup hangs and noisy jails is to put jail start blocks that are slow or prone to hanging at the bottom of the safe/quad file. Further, for each such jail we note in each quad/safe, just before the start block, something like:&lt;br /&gt;
&lt;br /&gt;
 echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
&lt;br /&gt;
That way we’ll be prepared to ^C when we see that message appear during the quad/safe startup process. If you observe a new, undocumented hang, &#039;&#039;&#039;after&#039;&#039;&#039; the quad/safe has finished, place a line similar to the above in the quad file, move the jail start block to the end of the file, then run [[#buildsafe|buildsafe]].&lt;br /&gt;
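In spirit, buildsafe regenerates a safe file from its quad by dropping the fsck lines - a rough sketch with dummy data (the real buildsafe likely handles more cases):&lt;br /&gt;

```shell
# Rough sketch of the buildsafe idea: a safeN file is its quadN
# with the fsck commands removed. (The real buildsafe may do more.)
quad=$(mktemp)
cat > "$quad" <<'EOF'
vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820
fsck -y /dev/vn16
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc
EOF
safe=$(grep -v '^fsck ' "$quad")
printf '%s\n' "$safe"
rm -f "$quad"
```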
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 4.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which partition, IP to put it on, find an unused vn, and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vnX hostname shorthost quad/safe# ipfw# email [gb disk, default 4]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail14# /tmp/jailmake 69.55.226.152 /mnt/data2/69.55.226.152-col00182 vn23 negev.cerastes.org negev 4 182 cerastes@cerastes.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 6.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which gvinum vols are free (and which mnt (disk) each gvinum goes with/to), IP to put it on and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vN[,vN] hostname shorthost quad/safe# ipfw# email&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail19# jailmake 69.55.236.17 /mnt/data1/69.55.236.17-col01574 v10 uncle-leo.commadev.com uncle-leo 1 1574 lee@commadev.com,lee@gmail.com&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Changing an ip for freebsd VPS ==&lt;br /&gt;
&lt;br /&gt;
*stopjail (hostname)&lt;br /&gt;
*on systems using vnfile or mdfile: rename vnfile/mdfile, using new ip&lt;br /&gt;
*rename dir using new ip&lt;br /&gt;
*Edit quad (make sure to update all lines: directory, vn/mdfile)&lt;br /&gt;
*buildsafe (not necessary on systems having only quad1)&lt;br /&gt;
*if you&#039;re adding an ip not already on the host machine: &amp;lt;tt&amp;gt;ipadd x.x.x.x&amp;lt;/tt&amp;gt;&lt;br /&gt;
*startjail (hostname)&lt;br /&gt;
*if backups: &amp;lt;tt&amp;gt;mvbackups&amp;lt;/tt&amp;gt; ...&lt;br /&gt;
*Edit dir/ip in mgmt&lt;br /&gt;
*if any rules exist, update the firewall rules to use the new ip (be careful: make sure there aren&#039;t multiple rules per ipfw# - search by colo#)&lt;br /&gt;
*(if customer asks and has domains) update their domains on ns1c&lt;br /&gt;
*optional: update ipfw counters on the host server&lt;br /&gt;
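The rename steps above can be sketched as follows, using dummy files in a temp directory (the IPs and customer number are examples only):&lt;br /&gt;

```shell
# Dry-run of the rename steps for an IP change, on dummy files.
# (IPs/customer number invented; on a real host these live in /mnt/dataN.)
base=$(mktemp -d)
old=69.55.236.17; new=69.55.230.99; cust=col01574
touch "$base/$old-$cust"          # stand-in for the vn/md file
mkdir "$base/$old-$cust-DIR"      # stand-in for the mount directory
# rename the md/vn file and its mount directory to use the new IP
mv "$base/$old-$cust" "$base/$new-$cust"
mv "$base/$old-$cust-DIR" "$base/$new-$cust-DIR"
ls "$base"
```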
&lt;br /&gt;
&lt;br /&gt;
== Rename a gconcat vol ==&lt;br /&gt;
&lt;br /&gt;
*stop jail, unmount everything&lt;br /&gt;
*gconcat stop vol&lt;br /&gt;
*gconcat clear /dev/gvinum/vN&lt;br /&gt;
*gconcat clear /dev/gvinum/vN+1&lt;br /&gt;
*gconcat label -v newname /dev/gvinum/vN /dev/gvinum/vN+1&lt;br /&gt;
*bsdlabel /dev/concat/newname&lt;br /&gt;
(make sure a partition is not fstype unused, if so change to: 4.2BSD)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Remaking a VPS (on same jail) ==&lt;br /&gt;
&lt;br /&gt;
# take him down (stopjail)&lt;br /&gt;
# make note of host, vn/gvin/md, ip&lt;br /&gt;
# remove from quad/safe&lt;br /&gt;
# move vnfile (rename)&lt;br /&gt;
# remove dir (jailmake will make a new one)&lt;br /&gt;
# remove ipfw counts&lt;br /&gt;
# jailmake&lt;br /&gt;
# update backup script&lt;br /&gt;
# update db with new dir/ip (use the “move” link, mark old as stopped today)&lt;br /&gt;
# update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (FreeBSD) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all jails back up as soon as possible. Note the time; you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log. If there are no messages, the machine may just be really busy- wait a bit (5-10min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. Note: if you see messages about swap space exhausted, the server is obviously out of memory; however, it may recover briefly enough for you to get a jtop in to see who&#039;s launched a ton of procs (most likely) and then issue a quick jailkill to get it back under control.&lt;br /&gt;
&lt;br /&gt;
If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. if you can’t ssh into the DRAC card, as root with the standard root pass, and issue&lt;br /&gt;
 racadm serveraction hardreset&lt;br /&gt;
to hard-reset it), then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console:&lt;br /&gt;
 tip jailX&lt;br /&gt;
immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts- look out for the RAID card output- does everything look healthy? The output may be scrambled, look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple times during the boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly then you won’t get any console output, 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
IMPORTANT NOTE: on some older FreeBSD systems, there will be no output to the video (KVM) console as it boots up. The console output is redirected to the serial port ... so if a jail crashes, and you attach a kvm, the output during the bootup procedure will not be shown on the screen. However, when the bootup is done, you will get a login prompt on the screen and will be able to log in as normal.  &amp;lt;tt&amp;gt;/boot/loader.conf&amp;lt;/tt&amp;gt; is where serial console redirect output lives, so comment that if you want to catch output on kvm.&lt;br /&gt;
On newer systems it sends most output to both locations. &lt;br /&gt;
&lt;br /&gt;
=== Assess the health of the server ===&lt;br /&gt;
Once the server boots up fully, you should be able to ssh in. Look around- make sure all the mounts are there and reporting the correct size/usage (i.e. /mnt/data1 /mnt/data2 /mnt/data3 - look in /etc/fstab to determine which mount points should be there), check to see if RAID mirrors are healthy. See [[RAID_Cards#Common_CLI_commands_.28megacli.29|megacli]], [[#aaccheck|aaccheck]]&lt;br /&gt;
&lt;br /&gt;
Before you start the jails, you need to run [[#preboot|preboot]]. This will do some assurance checks to make sure things are prepped to start the jails. Any issues that come out of preboot need to be addressed before starting jails.&lt;br /&gt;
&lt;br /&gt;
=== Start jails ===&lt;br /&gt;
[[#Starting_jails:_Quad.2FSafe_Files|More on starting jails]]&lt;br /&gt;
Customer jails (the VPSs) do not start up automatically at boot time. When a FreeBSD machine boots, it comes up and does nothing else. To start jails, we put the commands to start each jail into one or more shell scripts and run them. Jail startup is something that needs to be actively monitored, which is why we don’t just run the scripts automatically. &lt;br /&gt;
&lt;br /&gt;
In order to start jails, we run the quad files: quad1 quad2 quad3 and quad4 (on new systems there is only quad1). If the machine was cleanly rebooted - which wouldn&#039;t be the case if this was a crash - you may run the safe files (safe1 safe2 safe3 safe4) in lieu of quads. &lt;br /&gt;
&lt;br /&gt;
Open up 4 logins to the server (use the windows in [[Screen#Screen_Organization|a9]])&lt;br /&gt;
In each of the 4 windows you will:&lt;br /&gt;
&lt;br /&gt;
If there is a [[#startalljails|startalljails]] script (and only quad1), run that command in each of the 4 windows. It will parse through the quad1 file and start each jail. Follow the instructions [[#Problems_with_the_quad.2Fsafe_files|here]] for monitoring startup. Note that you can be a little more lenient with jails that take a while to start- startalljails will work around the slow jails and start the rest. As long as there aren&#039;t 4 jails which are &amp;quot;hung&amp;quot; during startup, the rest will get started eventually.&lt;br /&gt;
	-or-&lt;br /&gt;
If there is no startalljails script, there will be multiple quad files. In each of the 4 windows, start each of the quads. i.e. start quad1 in window1, quad2 in window2 and so on. DO NOT start any quad twice. It will crash the server. If you accidentally do this, just jailkill all the jails which are in the quad and run the quad again. Follow the instructions here for monitoring quad startup.&lt;br /&gt;
&lt;br /&gt;
Note the time the last jail boots- this is what you will enter in the crash log.&lt;br /&gt;
&lt;br /&gt;
Save the crash log.&lt;br /&gt;
&lt;br /&gt;
=== Check to make sure all jails have started ===&lt;br /&gt;
There&#039;s a simple script which will make sure all jails have started, and enter the ipfw counter rules: [[#postboot|postboot]] &lt;br /&gt;
Run postboot, which will do a jailps on each jail it finds (excluding commented out jails) in the quad file(s). We&#039;re looking for 2 things:&lt;br /&gt;
# systems spawning out of control or too many procs&lt;br /&gt;
# jails which haven&#039;t started&lt;br /&gt;
On 7.x and newer systems it will print out the problems (which jails haven&#039;t started) at the conclusion of postboot. &lt;br /&gt;
On older systems you will need to watch closely to see if/when there&#039;s a problem, namely:&lt;br /&gt;
 &lt;br /&gt;
 [hostname] doesnt exist on this server&lt;br /&gt;
&lt;br /&gt;
When you get this message, it means one of 2 things:&lt;br /&gt;
1. the jail really didn&#039;t start:&lt;br /&gt;
When a jail doesn&#039;t start it usually boils down to a problem in the quad file. Perhaps the path name is wrong (data1 vs data2) or the name of the vn/mdfile is wrong. Once this is corrected, you will need to run the commands from the quad file manually, or you may use &amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. the customer has changed their hostname (and not told us) so their jail &#039;&#039;is&#039;&#039; running, just under a different hostname:&lt;br /&gt;
On systems with jls, this is easy to rectify. First, get the customer info: &amp;lt;tt&amp;gt;g &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
Then look for the customer in jls: &amp;lt;tt&amp;gt;jls | grep &amp;lt;col0XXXX&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
From there you will see their new hostname- you should update that hostname in the quad file: don&#039;t forget to edit it on the &amp;lt;tt&amp;gt;## begin ##&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;## end ##&amp;lt;/tt&amp;gt; lines, and in mgmt. &lt;br /&gt;
On older systems without jls, this will be harder; you will need to look further to see their hostname - perhaps it’s in their /etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once all jails are started, do some spot checks- try to ssh or browse to some customers, just to make sure things are really ok.&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 7.x/8.x jail ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (md) ==&lt;br /&gt;
&lt;br /&gt;
NOTE: this doesn’t apply to mx2 which uses gvinum. Use same procedure as 6.x&lt;br /&gt;
NOTE: if you unmount before mdconfig, re-mdconfig (attach) then unmount then mdconfig -u again &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
(parts to change/customize are &amp;lt;tt&amp;gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space available from &amp;lt;tt&amp;gt;js&amp;lt;/tt&amp;gt;. Ideally, you want to put the customer’s new space on a different partition (and create the new md on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. Unmount it (including their devfs) but leave the md config’d (so if you use stopjail, you will have to re-mdconfig it)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mdfile and mount name, and to see the current md device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. When there&#039;s enough room to place new system on an alternate, or the same drive:&lt;br /&gt;
USE CAUTION not to overwrite (touch, mdconfig) existing md!!&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s 10g -f /mnt/data3/69.55.234.66-col01334 -u 97&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md97&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new space is on a different drive, move the mount point directory AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
md-unconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
archive old mdfile. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/old-col00241-mdfile-noarchive-20091211&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (vq1) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
(run &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, &amp;lt;tt&amp;gt;du&amp;lt;/tt&amp;gt; the dump file to confirm it matches size, roughly, with usage)&lt;br /&gt;
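The "dump file roughly matches usage" check above can be expressed as a small Python sketch. The helper name and the 20% tolerance are our own illustrative choices, not a documented rule:

```python
def dump_size_ok(dump_bytes, used_bytes, tolerance=0.20):
    """Rough sanity check: the dump file should be within ~20% of the
    space the customer was using (tolerance is an arbitrary choice)."""
    if used_bytes == 0:
        return dump_bytes == 0
    return abs(dump_bytes - used_bytes) / used_bytes <= tolerance

print(dump_size_ok(9_500_000_000, 10_000_000_000))  # close enough: True
print(dump_size_ok(1_000_000_000, 10_000_000_000))  # way off: False, investigate
```

In practice you would feed this the byte counts from `du` on the dump file and the usage figure you noted before stopping the jail.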
&lt;br /&gt;
&lt;br /&gt;
unconfigure and remove old mdfile&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
rm /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(there should now be enough space to recreate your bigger system. If not, run sync a couple times)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new system (ok to reuse old mdfile and md#):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;10&amp;lt;/span&amp;gt;g -f /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt; -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
umount nfs:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbu&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If md# changed (or mount point), edit quad (&amp;lt;tt&amp;gt;vq1&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. Optional: archive old mdfile&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;br&amp;gt;&lt;br /&gt;
gzip -c old-col01588-mdfile-noarchive-20120329 &amp;gt; /deprecated/old-col01588-mdfile-noarchive-20120329.gz&amp;lt;br&amp;gt;&lt;br /&gt;
mbu&amp;lt;br&amp;gt;&lt;br /&gt;
rm  old-col01588-mdfile-noarchive-20120329&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 6.x jail (gvinum/gconcat) ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (gvinum/gconcat) ==&lt;br /&gt;
&lt;br /&gt;
(parts to change are &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;highlighted&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it; send it to them (it explains the downtime involved, etc).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new md on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;[[#stopjail|stopjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;[[#g|g]] &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mount name and existing volume/device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5a. When there&#039;s enough room to place the new system on an alternate or the same drive, using only UNUSED gvinum volumes (where &amp;quot;unused&amp;quot; includes volumes currently in use by the system in question, since it is being replaced):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure the new device:&amp;lt;br&amp;gt;&lt;br /&gt;
A. for a 2G system (single gvinum volume):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;bsdlabel -r -w /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
-or- &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
B. for a &amp;gt;2G system (create a gconcat volume):&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Other valid gconcat examples:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label v82-v84v109v112 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v109 /dev/gvinum/v112&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat label v82v83 /dev/gvinum/v82 /dev/gvinum/v83&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Note: long names will be truncated (to 15 characters): v144v145v148-v115 will truncate to v144v145v148-v1 (so you will refer to it as v144v145v148-v1 thereafter)&lt;br /&gt;
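The truncation behavior can be sketched in Python. The 15-character limit is inferred from the example above (not from gconcat documentation), and `effective_label` is our own illustrative helper:

```python
MAX_GEOM_NAME = 15  # assumed limit, inferred from the v144v145v148-v115 example

def effective_label(label):
    """Return the name you must refer to after gconcat truncates a long label."""
    return label[:MAX_GEOM_NAME]

print(effective_label("v144v145v148-v115"))  # v144v145v148-v1
print(effective_label("v82-v84"))            # short names are unchanged
```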
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; OR &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
A. &amp;lt;tt&amp;gt;mount /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
-or-&amp;lt;br&amp;gt;&lt;br /&gt;
B. &amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;), run &amp;lt;tt&amp;gt;buildsafe&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space (i.e. if you want/need to reuse the existing gvinum volumes and add on more):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
(run df to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;concat/v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, du the dump file to confirm it matches size, roughly, with usage)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure the old gconcat volume&amp;lt;br&amp;gt;&lt;br /&gt;
list member gvinum volumes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat list &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Output will resemble:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;Geom name: v106v107&lt;br /&gt;
State: UP&lt;br /&gt;
Status: Total=2, Online=2&lt;br /&gt;
Type: AUTOMATIC&lt;br /&gt;
ID: 3530663882&lt;br /&gt;
Providers:&lt;br /&gt;
1. Name: concat/v106v107&lt;br /&gt;
   Mediasize: 4294966272 (4.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e2&lt;br /&gt;
Consumers:&lt;br /&gt;
1. Name: gvinum/sd/v106.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 0&lt;br /&gt;
   End: 2147483136&lt;br /&gt;
2. Name: gvinum/sd/v107.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 2147483136&lt;br /&gt;
   End: 4294966272&amp;lt;/pre&amp;gt;&lt;br /&gt;
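The Mediasize figures in this output are self-consistent: each consumer appears to give up one 512-byte sector to gconcat metadata, so the provider is smaller than the sum of its members. A quick check (our own arithmetic, using the numbers shown above):

```python
SECTOR = 512
MEMBER = 2147483648  # each 2G gvinum volume, per the Consumers section above

# provider size = members * (member size minus one metadata sector)
provider_size = 2 * (MEMBER - SECTOR)
print(provider_size)  # 4294966272, matching the concat/v106v107 Mediasize
```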
&lt;br /&gt;
stop volume and clear members&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat stop &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat clear &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;gvinum/sd/v106.p0.s0 gvinum/sd/v107.p0.s0&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create new device- and its ok to reuse old/former members&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v106 /dev/gvinum/v107&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;), run buildsafe&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
TODO: clean up/clear the old gvinum/gconcat volume&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
DEPRECATED: steps to tack a new gvinum volume onto an existing gconcat volume (leads to a corrupted fs):&lt;br /&gt;
bsdlabel -e /dev/concat/v82-v84&lt;br /&gt;
&lt;br /&gt;
To figure out new size of the c partition, multiply 4194304 by the # of 2G gvinum volumes and subtract the # of 2G volumes:&lt;br /&gt;
10G: 4194304 * 5 - 5 = 20971515&lt;br /&gt;
8G: 4194304 * 4 - 4 = 16777212&lt;br /&gt;
6G: 4194304 * 3 - 3 = 12582909&lt;br /&gt;
4G: 4194304 * 2 - 2 = 8388606&lt;br /&gt;
&lt;br /&gt;
To figure out the new size of the a partition, subtract 16 from the c partition:&lt;br /&gt;
10G: 20971515 - 16 = 20971499&lt;br /&gt;
8G: 16777212 - 16 = 16777196&lt;br /&gt;
6G: 12582909 - 16 = 12582893&lt;br /&gt;
4G: 8388606 - 16 = 8388590&lt;br /&gt;
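The arithmetic above can be checked with a short Python sketch (sector counts, as used by bsdlabel; `concat_sizes` is our own helper name, not a real tool):

```python
def concat_sizes(n_volumes):
    """Sector counts for the c and a partitions of a gconcat built from
    n_volumes 2G gvinum volumes, per the worked figures above."""
    c = 4194304 * n_volumes - n_volumes  # one sector lost per member volume
    a = c - 16                           # a is 16 sectors smaller than c
    return c, a

for n, size in [(2, "4G"), (3, "6G"), (4, "8G"), (5, "10G")]:
    c, a = concat_sizes(n)
    print(size, "c:", c, "a:", a)
```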
&lt;br /&gt;
Orig:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a:  8388590       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c:  8388606        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
New:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a: 12582893       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c: 12582909        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
sync; sync&lt;br /&gt;
&lt;br /&gt;
growfs /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
sync&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
(keep running fsck until there are NO errors)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 4.x jail ==&lt;br /&gt;
&lt;br /&gt;
NOTE: if you unmount before vnconfig, re-vnconfig then unmount then vnunconfig&lt;br /&gt;
 &lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them.&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to dump to a different partition and create the new vn on the current partition. If there&#039;s no space to make the dump, then move old vn’s to backup or, worse, dump to backup.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. make vnfile, newfs and mount. USE CAUTION not to overwrite existing vn. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;vnconfig -T -S 9g -s labels -c /dev/vn30 /mnt/data2/65.214.160.117-col00496&amp;lt;br&amp;gt;&lt;br /&gt;
disklabel -r -w vn30 auto&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/vn30c&amp;lt;br&amp;gt;&lt;br /&gt;
mkdir /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/vn30c /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;dump -0a -f - /dev/vn1 | restore -r -f -&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. &amp;lt;tt&amp;gt;rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. unmount and unvnconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;umount /dev/vnNNc &amp;lt;br&amp;gt;&lt;br /&gt;
vnconfig -u /dev/vnNN&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. move vnfile. Ex: &amp;lt;tt&amp;gt;mv /mnt/data1/69.55.237.26-col00241 /mnt/data1/old-col00241-vnfile-20110312-noarchive&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove old dir: &amp;lt;tt&amp;gt;rmdir /mnt/data1/69.55.237.26-col00241-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. edit quad/safe to point to new location, run &amp;lt;tt&amp;gt;[[#buildsafe|buildsafe]]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start jail: &amp;lt;tt&amp;gt;[[#startjail|startjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
13. mv backups if necessary&lt;br /&gt;
 &lt;br /&gt;
WARNING: if you restore to an unmounted vn then you’re actually restoring to /mnt/data1&lt;br /&gt;
if that happens, you can tar up the data and follow the steps above to recover&lt;br /&gt;
 &lt;br /&gt;
NOTE: you can’t move a customer to a system whose base system differs from the one they came from (e.g. FreeBSD 4.8 to 4.5 won’t work)&lt;br /&gt;
 &lt;br /&gt;
NOTE: we don’t mount procfs in anyone’s system by default.&lt;br /&gt;
&lt;br /&gt;
NOTE: mount_nulls: &lt;br /&gt;
null mounts show up (looking goofy) in df -k output on jail2&amp;lt;br&amp;gt;&lt;br /&gt;
they also need to be configured in the safe and quad files&amp;lt;br&amp;gt;&lt;br /&gt;
if a customer wants another null mount, use the existing safe/quad entries as an example and add the new one to the file. Also run the mount command at the prompt to add the mount live (no reboot, remount or re-jail necessary)&lt;br /&gt;
&lt;br /&gt;
== Moving customer to another jail machine ==&lt;br /&gt;
&lt;br /&gt;
Systems should only be moved between similarly-versioned jails. If you cannot get a match, check freebsd.org for the differences between the version you&#039;re moving from and the new version. Many of the updates will be to drivers and kernels; these don&#039;t affect the customer. If they updated something like traceroute or a library, copy the new/changed files into the VPS/jail on the new host. An alternate method for moving to a different version is to buildworld, bringing their full distribution up to date. Optional steps for this kind of upgrade are included below.&lt;br /&gt;
&lt;br /&gt;
1. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;[[#jailkill|jailkill]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. create new device on target system&amp;lt;br&amp;gt;&lt;br /&gt;
4.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig -T -S 4g -s labels -c /dev/vn1 /mnt/data1/69.55.22x.x-col00XXX&lt;br /&gt;
disklabel -r -w vn1 auto&lt;br /&gt;
newfs /dev/vn1c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;bsdlabel -r -w /dev/gvinum/v1&lt;br /&gt;
newfs /dev/gvinum/v1a&lt;br /&gt;
 or &lt;br /&gt;
gconcat label v1-v3 /dev/gvinum/v1 /dev/gvinum/v2 /dev/gvinum/v3&lt;br /&gt;
bsdlabel -r -w /dev/concat/v1-v3&lt;br /&gt;
newfs /dev/concat/v1-v3a&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7.x+:&amp;lt;br&amp;gt;&lt;br /&gt;
Run jailmakeempty then skip steps 4 &amp;amp; 7 below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. make and mount the dir on new system. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;pre&amp;gt;mkdir /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
chmod 755 /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
mount [device] /mnt/data2/69.55.230.3-col00123-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. [[#stopjail|stopjail]] &amp;lt;hostname&amp;gt; 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. dump fs to new system. ex: &amp;lt;tt&amp;gt;[[#dumpremoterestore|dumpremoterestore]] /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&amp;lt;/tt&amp;gt; (make sure you can ssh as root on remote machine) &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. OPTIONAL BUILDWORLD:&amp;lt;br&amp;gt;&lt;br /&gt;
 cd /usr/src&lt;br /&gt;
 make world DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
 cd etc&lt;br /&gt;
 make distribution DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
(you may have to rm an openssh file and re-make dist)&lt;br /&gt;
&lt;br /&gt;
 rm -rf /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/periodic/daily/400.status-disks&lt;br /&gt;
 vi /etc/periodic/security/100.chksetuid&lt;br /&gt;
replace: &amp;lt;tt&amp;gt;MP=`mount -t ufs | grep -v &amp;quot; nosuid&amp;quot; | awk &#039;{ print $3 }&#039; | sort`&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
with: &amp;lt;tt&amp;gt;MP=&#039;/&#039;&amp;lt;/tt&amp;gt; (use single quotes)&lt;br /&gt;
&lt;br /&gt;
Ask user if they want ports overwritten with current, if yes:&lt;br /&gt;
 cp -r /usr/ports /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/usr&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d .&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d . &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cat /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target:&lt;br /&gt;
 cat &amp;gt; /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. edit quad on the source system and copy over the entries for the jail to the target system &lt;br /&gt;
take care that the vn/gvinum/gconcat/md devices on the target system aren’t in use and that the /mnt/dataN path doesn’t need changing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. run [[#buildsafe|buildsafe]] (if this is &amp;lt;=6.x) on target system (copies changes made to quad into safe file)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove ip from source system. Ex: &amp;lt;tt&amp;gt;ipdel 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. add ip to target system. Ex: &amp;lt;tt&amp;gt;ipadd 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start new system, with [[#startjail|startjail]] or manually, pasting entries found from running &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; on the new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. run &amp;lt;tt&amp;gt;[[#canceljail|canceljail]] col0xxxx&amp;lt;/tt&amp;gt; on source system. This should prompt you to remove backups if any existed. Do not let it do this, rather:&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
13. if backups existed move them to the new host via the &amp;lt;tt&amp;gt;[[#mvbackups_.28freebsd.29|mvbackups]] script&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
14. edit quad on source system, edit comment to reflect a move rather than a cancel, ex: &amp;lt;tt&amp;gt;# moved to jail2 col00241&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
15. edit mgmt to reflect new host and dir for new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
16. optional add ipfw rules on new system. Ex:&lt;br /&gt;
&amp;lt;tt&amp;gt;ipfw add 01231 count ip from 69.55.230.2 to any&lt;br /&gt;
ipfw add 01232 count ip from any to 69.55.230.2&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Increasing inodes for a VPS ==&lt;br /&gt;
&lt;br /&gt;
when doing the newfs: &lt;br /&gt;
 newfs -i  4096 ...&lt;br /&gt;
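The effect of the -i flag can be sketched in Python: newfs allocates roughly one inode per -i bytes of filesystem space, so halving the byte count doubles the inodes. This is our own approximation (real newfs rounds per cylinder group), and `approx_inodes` is an illustrative helper, not a real tool:

```python
def approx_inodes(fs_bytes, bytes_per_inode=4096):
    """Rough inode count: about one inode for every bytes_per_inode
    bytes of space (the newfs -i value). Real newfs rounds per
    cylinder group, so treat this as an estimate."""
    return fs_bytes // bytes_per_inode

GIG = 1024 ** 3
print(approx_inodes(10 * GIG))        # -i 4096 on a 10G fs: 2621440
print(approx_inodes(10 * GIG, 8192))  # doubling -i halves the inode count
```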
&lt;br /&gt;
&lt;br /&gt;
== Jail services unable to contact services within same jail ==&lt;br /&gt;
&lt;br /&gt;
This is due to a messed up routing table (only seen it on jail2 before). &lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.228.65/32&lt;br /&gt;
 route add -net 69.55.228.65/32 -iface fxp0 -nostatic -cloning&lt;br /&gt;
 route add 69.55.228.65 -iface lo0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== /dev/null permission resets == &lt;br /&gt;
&lt;br /&gt;
Applies to 4.x jails only&lt;br /&gt;
&lt;br /&gt;
For some reason that I do not understand at all, the /dev/null node in customer jails, after the system is restarted, often reverts permissions to 0600 ... which is bad, because non-root processes cannot redirect to /dev/null anymore.  In fact, a fair number of server daemons will complain and/or fail if this is the case.&lt;br /&gt;
&lt;br /&gt;
So, after a reboot, once all systems have restarted, run postboot, which does something like:&lt;br /&gt;
 &lt;br /&gt;
 for f in `df -k | grep /dev/vn | awk &#039;{print $6}&#039;` ; do chmod 0666 $f/dev/null ; done&lt;br /&gt;
&lt;br /&gt;
Later, after the system is up, if anyone ever complains about their /dev/null permissions for any reason ... really if anyone mentions&lt;br /&gt;
/dev/null in any way, tell them to run:&lt;br /&gt;
&lt;br /&gt;
 chmod 0666 /dev/null&lt;br /&gt;
&lt;br /&gt;
We have ultimately fixed this by adding chmods to the quad/safe&#039;s&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== mknod in a jail == &lt;br /&gt;
&lt;br /&gt;
For a customer who wants to run a chroot name server (named) in a jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /mnt/data1/&amp;lt;user directory&amp;gt;/var/named&lt;br /&gt;
mknod -c 1 1 null&lt;br /&gt;
mknod -c 2 3 random&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Postfix Problems ==&lt;br /&gt;
&lt;br /&gt;
This hasn&#039;t happened in a long time, probably no longer valid with modern OS&#039;s&lt;br /&gt;
&lt;br /&gt;
Postfix is an alternate MTA - people replace sendmail with it in much the same way that many people also replace sendmail with qmail.  The problem is that if you install postfix inside of a jail, by default it will not work properly - the user will see error messages like this:&lt;br /&gt;
&lt;br /&gt;
incoming mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 mail_queue_enter: create file incoming/298266.55902: File too large&lt;br /&gt;
&lt;br /&gt;
and further, sending mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 postdrop: warning: mail_queue_enter: create file maildrop/627930.56676: File too large&lt;br /&gt;
&lt;br /&gt;
This is very easy to solve.  In fact, the FreeBSD welcome email that is sent out by `jailmake` now contains this block of text:&lt;br /&gt;
&lt;br /&gt;
- if you plan on installing postfix, email us first and ask for a necessary patch.&lt;br /&gt;
&lt;br /&gt;
So, anytime you see a support email that says anything about postfix not working, paste that line from the welcome email into your response to them, and right away, forward the postfix patch to them in an email.&lt;br /&gt;
&lt;br /&gt;
Here are the instructions:&lt;br /&gt;
&lt;br /&gt;
Simply copy the patch into /usr/ports/mail/postfix/files, rename the file to ‘patch-file_limit.c’ and then cd to /usr/ports/mail/postfix and run `make install`.&lt;br /&gt;
&lt;br /&gt;
The postfix patch is the very first email in the support email box, and we never delete or save it because we always want it there to forward to people.  The instructions in the email are very clear, and it solves their problem 100% of the time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems with `find` ==&lt;br /&gt;
&lt;br /&gt;
There was a bug in the FreeBSD vn-filesystem code.  The bug causes the system to crash or hang when someone runs the `find` command inside their vn-backed filesystem (and even though the bug may have been fixed, due to the i/o overhead, we still don&#039;t like find).&lt;br /&gt;
&lt;br /&gt;
Now, this does not always happen - however, any jail machine with more than 20 systems on it will crash every night if every system on it runs the daily periodic script out of their crontab, and updates their locate database with the find command.&lt;br /&gt;
&lt;br /&gt;
(the locate database is a small, FreeBSD specific database that is populated nightly from a cron job, and is then used to provide fast answers to the `locate` command)&lt;br /&gt;
&lt;br /&gt;
In normal systems (32-40 jails on them), all that has to be done is make sure nobody runs the daily periodic.  This means that the file /etc/crontab, instead of looking like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# do daily/weekly/monthly maintenance&lt;br /&gt;
1      3       *       *       *       root    periodic daily&lt;br /&gt;
15     4       *       *       6       root    periodic weekly&lt;br /&gt;
30     5       1       *       *       root    periodic monthly&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Needs to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# DO NOT UNCOMMENT THESE - contact support@johncompanies.com for details&lt;br /&gt;
#1      3       *       *       *       root    periodic daily&lt;br /&gt;
#15     4       *       *       6       root    periodic weekly&lt;br /&gt;
#30     5       1       *       *       root    periodic monthly&lt;br /&gt;
## DO NOT UNCOMMENT THESE ^^^^^^^^&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the problem will generally not occur at all.  New filesystem images are always altered in this way, so all systems are set like this.&lt;br /&gt;
&lt;br /&gt;
However, if you have enough systems running, even casual uses of find, outside of the periodic scripts, can cause the system to crash.  We are seeing this on jail13, which crashes about once per week.&lt;br /&gt;
&lt;br /&gt;
There is not yet any resolution to this problem.&lt;br /&gt;
&lt;br /&gt;
So, if you have a freebsd system crash, see if anyone has uncommented their daily periodic line - you can see them for everyone by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;for f in `df -k | grep vn | awk &#039;{print $6}&#039;` ; do echo `cat $f/etc/crontab | grep &amp;quot;periodic daily&amp;quot;` $f ; done | more&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(all one line, one command)&lt;br /&gt;
&lt;br /&gt;
And then edit those /etc/crontabs and comment them out again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems un-mounting - and with mount_null’s ==&lt;br /&gt;
&lt;br /&gt;
If you cannot unmount a filesystem, beacuse it says the filesystem is busy, it is because of three things:&lt;br /&gt;
&lt;br /&gt;
a) the jail is still running&lt;br /&gt;
&lt;br /&gt;
b) you are actually in that directory, even though the jail is stopped&lt;br /&gt;
&lt;br /&gt;
c) there are still dev, null_mount or linprocfs mount points mounted inside that directory.&lt;br /&gt;
&lt;br /&gt;
d) when trying to umount null_mounts that are really long and you get an error like “No such file or directory”, it’s an OS bug where the dir is truncated. No known fix&lt;br /&gt;
&lt;br /&gt;
e) there are still files open somewhere inside the dir. Use &amp;lt;tt&amp;gt;fstat | grep &amp;lt;cid&amp;gt;&amp;lt;/tt&amp;gt; to find the process that has files open&lt;br /&gt;
&lt;br /&gt;
f) Starting with 6.x, the jail mechanism does a poor job of keeping track of processes running in a jail and if it thinks there are still procs running, it will refuse to umount the disk. If this is happening you should see a low number in the #REF column when you run jls. In this case you &#039;&#039;can&#039;&#039; safely &amp;lt;tt&amp;gt;umount –f&amp;lt;/tt&amp;gt; the mount. &lt;br /&gt;
&lt;br /&gt;
Please note -if you forcibly unmount a (4.x) filesystem that has null_mounts&lt;br /&gt;
still mounted in it, the system &#039;&#039;&#039;will crash&#039;&#039;&#039; within 10-15 mins.&lt;br /&gt;
&lt;br /&gt;
== Misc jail Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space on jail2, jail8, jail9, a couple jails on jail17, jail4, jail12 and jail18.&lt;br /&gt;
Even though the vn file shows 4G size, it doesn’t actually occupy that amount of space on the disk. So be careful not to fill up drives where we’re overselling – use oversellcheck to confirm you’re not oversold by more than 10G.&lt;br /&gt;
There are other truncated jails, they are generally noted in a the file on the root system: /root/truncated&lt;br /&gt;
&lt;br /&gt;
The act of moving a truncated vn to another system un-does the truncating- the truncated vn is filled with 0’s and it occupies physical disk space for which it’s configured. So, you should use dumpremote to preserve the truncation.&lt;br /&gt;
&lt;br /&gt;
* if you are getting disk full messages for a BSD customer, it&#039;s fairly safe to clear out their /usr/ports/distfiles dir&lt;br /&gt;
* 4.x: ps and top an only be run by root in these jails.  Done on purpose:&lt;br /&gt;
As for `ps` and `top` - non-root users can run them, just not successfully because we have locked the permissions on /dev/mem and /dev/kmem to be root-readable only.  That is why non-root users cannot successfully run `ps` and `top`.&lt;br /&gt;
* user quotas do not work on freebsd jails - you cannot set up quotas at all, and that&#039;s that.&lt;br /&gt;
* You cannot inject a process into a 4.x jail, only HUP running processes. 6.x and onward you can with jexec&lt;br /&gt;
* jails see base machine’s uptime/load when running top/w&lt;br /&gt;
* if someone is unable to get in- cant ping, etc- see if they were blocked by castle (DoS), see if their ip is on the system (post reboot it was lost cause wasn’t in the rc.conf). preboot should catch that.&lt;br /&gt;
* in FreeBSD you cant su to root unless you belong to wheel group – so if you remove your acct and setup a new one, we have to add it to wheel group (add to /etc/group)&lt;br /&gt;
* Dmesg from underlying sys is seen in customers dmesg on jail&lt;br /&gt;
* Popper process going crazy @ 40% for 10min = someone who leaves mail on server&lt;br /&gt;
* Don’t force umounts on 4.x jails – it crashes the machine, generally ok on newer machines (running md)&lt;br /&gt;
* Good book for admin http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?userid=t824VyRAYz&amp;amp;isbn=0596005164&amp;amp;itm=2&lt;br /&gt;
* Self-sign ssl cert http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#selfcert&lt;br /&gt;
* conversation with Glenn about semaphores and pgsql&lt;br /&gt;
&amp;lt;pre&amp;gt;SDBoody: hey, these are valid amounts/figures/increments right:&lt;br /&gt;
 kern.ipc.semmni=1280&lt;br /&gt;
 kern.ipc.semmns=1280&lt;br /&gt;
 &lt;br /&gt;
gr8feen: probably... I always forget exactly what those are, so I usually have to look them up&lt;br /&gt;
SDBoody: semaphores&lt;br /&gt;
SDBoody: i took the current 1024 and added 256 to them&lt;br /&gt;
SDBoody: need more for pgsql&lt;br /&gt;
gr8feen: I meant the mni and mns parts...&lt;br /&gt;
gr8feen: some of those are not ones you just want to add to...   hang on a sec and I&#039;ll look them up..&lt;br /&gt;
gr8feen: what&#039;s semmsl set to?&lt;br /&gt;
SDBoody: kern.ipc.semmsl: 1024&lt;br /&gt;
 &lt;br /&gt;
SDBoody: kern.ipc.msgseg: 2048 &lt;br /&gt;
 kern.ipc.msgssz: 8 &lt;br /&gt;
 kern.ipc.msgtql: 40 &lt;br /&gt;
 kern.ipc.msgmnb: 2048 &lt;br /&gt;
 kern.ipc.msgmni: 40 &lt;br /&gt;
 kern.ipc.msgmax: 16384 &lt;br /&gt;
 kern.ipc.semaem: 16384 &lt;br /&gt;
 kern.ipc.semvmx: 65534 &lt;br /&gt;
 kern.ipc.semusz: 152 &lt;br /&gt;
 kern.ipc.semume: 10 &lt;br /&gt;
 kern.ipc.semopm: 100 &lt;br /&gt;
 kern.ipc.semmsl: 1024 &lt;br /&gt;
 kern.ipc.semmnu: 512 &lt;br /&gt;
 kern.ipc.semmns: 1024 &lt;br /&gt;
 kern.ipc.semmni: 1024 &lt;br /&gt;
 kern.ipc.semmap: 768 &lt;br /&gt;
 kern.ipc.shm_allow_removed: 0 &lt;br /&gt;
 kern.ipc.shm_use_phys: 1 &lt;br /&gt;
 kern.ipc.shmall: 262144 &lt;br /&gt;
 kern.ipc.shmseg: 256 &lt;br /&gt;
 kern.ipc.shmmni: 784 &lt;br /&gt;
 kern.ipc.shmmin: 1 &lt;br /&gt;
 kern.ipc.shmmax: 536870912 &lt;br /&gt;
 kern.ipc.maxsockets: 25600 &lt;br /&gt;
 &lt;br /&gt;
gr8feen: ok...msl is max per id, mni is max ids, mns is max number of semaphores... so you probably want something like mns = mni * msl&lt;br /&gt;
gr8feen: which one did you run out of?&lt;br /&gt;
SDBoody: not sure how to tell- ipcs shows the sems in use add up to 1024&lt;br /&gt;
SDBoody: there are 59 entries&lt;br /&gt;
gr8feen: I&#039;m assuming you tried to start postgres and it failed?&lt;br /&gt;
SDBoody: yes&lt;br /&gt;
gr8feen: it should have logged why, somewhere..&lt;br /&gt;
gr8feen: if I recall, it&#039;ll tell you which one it ran out of&lt;br /&gt;
SDBoody: &amp;gt; DETAIL:  Failed system call was semget(1, 17, 03600).&lt;br /&gt;
 &lt;br /&gt;
gr8feen: so it wanted an id with 17 semaphores...I&#039;d start by making mns = 17*mni and leave mni and mnl set to what they are now and see what it does&lt;br /&gt;
SDBoody: i think mni is plenty high&lt;br /&gt;
SDBoody: ok, more reasonable 17408&lt;br /&gt;
gr8feen: yeah...just change that one and see how it goes..&lt;br /&gt;
SDBoody: and leave mni alone at 1024?&lt;br /&gt;
gr8feen: yeah...mni id the max number of ids...but if it&#039;s trying to get something like 17 per id, your going to hit mns before you hit anything else&lt;br /&gt;
SDBoody: right, but doesn&#039;t hurt to have it that high (assuming)&lt;br /&gt;
&lt;br /&gt;
gr8feen: not really...  I think those get allocated out of ram that you cant page out, but it&#039;s still such a small amount that it really doesn&#039;t matter&lt;br /&gt;
SDBoody: looks like that worked, thx!&lt;br /&gt;
gr8feen: cool&lt;br /&gt;
&lt;br /&gt;
see semaphores: ipcs -a -s&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= FreeBSD VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
These files are located in /usr/local/jail/rc.d and /usr/local/jail/bin&lt;br /&gt;
&lt;br /&gt;
== jailmake ==&lt;br /&gt;
&lt;br /&gt;
Applies to 7.x+ &lt;br /&gt;
On older systems syntax differs, run jailmake once to see.&lt;br /&gt;
&lt;br /&gt;
Note: this procedure differs on mx2 which is 7.x but still uses gvinum&lt;br /&gt;
&lt;br /&gt;
#	run js to figure out which md’s are in use, which disk has enough space, IP to put it on&lt;br /&gt;
#	use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
#	copy over dir, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;Usage: jailmake IP[,IP] CID disk[1|2|3] md# hostname shorthost ipfw# email [size in GB]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex: &lt;br /&gt;
&lt;br /&gt;
 Jail2# jailmake 69.55.234.66 col01334 3 97 vps.bsd.it vps 1334 fb@bsd.it&lt;br /&gt;
&lt;br /&gt;
== jailps ==&lt;br /&gt;
 jailps [hostname]&lt;br /&gt;
DEPRECATED FOR jps: displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname of the jail you wish to query. If you don’t &lt;br /&gt;
supply an argument, all processes on the machine are listed and grouped by jail. &lt;br /&gt;
&lt;br /&gt;
== jps ==&lt;br /&gt;
 jps [hostname]&lt;br /&gt;
displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname or ID of the jail you wish to query. &lt;br /&gt;
&lt;br /&gt;
== jailkill ==&lt;br /&gt;
 jailkill &amp;lt;hostname&amp;gt;&lt;br /&gt;
stops all process running in a jail.&lt;br /&gt;
&lt;br /&gt;
You can also run:&lt;br /&gt;
 jailkill &amp;lt;JID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== problems ===&lt;br /&gt;
Occasionally you will hit an issue where jail will not kill off:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailkill www.domain.com&lt;br /&gt;
www.domain.com .. killed: none&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
Because no processes are running under that hostname.  You cannot use jailps.pl either:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailps www.domain.com&lt;br /&gt;
www.domain.com doesn’t exist on this server&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reasons for this are usually:&lt;br /&gt;
* the jail is no longer running&lt;br /&gt;
&lt;br /&gt;
* the jail&#039;s hostname has changed&lt;br /&gt;
In this case, &lt;br /&gt;
&lt;br /&gt;
&amp;gt;=6.x: run a &amp;lt;tt&amp;gt;jls|grep &amp;lt;jail&#039;s IP&amp;gt;&amp;lt;/tt&amp;gt; to find the correct hostname, then update the quad file, then kill the jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;6.x: the first step is to cat their /etc/rc.conf file to see if you can tell what they set the new hostname to.  This very often works.  For example:&lt;br /&gt;
&lt;br /&gt;
 cat /mnt/data2/198.78.65.136-col00261-DIR/etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
But maybe they set the hostname with the hostname command, and the original hostname is still in /etc/rc.conf.&lt;br /&gt;
&lt;br /&gt;
The welcome email clearly states that they should tell us if they change their hostname, so there is no problem in just emailing them and asking them what they set the new hostname to.&lt;br /&gt;
&lt;br /&gt;
Once you know the new hostname OR if a customer simply emails to inform you that they have set the hostname to something different, you need to edit the quad and safe files that their system is in to input the new hostname.&lt;br /&gt;
&lt;br /&gt;
However, if push comes to shove and you cannot find out the hostname from them or from their system, then you need to start doing some detective work.&lt;br /&gt;
&lt;br /&gt;
The easiest thing to do is run jailps looking for a hostname similar to their original hostname. Or you could get into the /bin/sh shell by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then looking at every hostname of every process:&lt;br /&gt;
&lt;br /&gt;
 for f in `ls /proc` ; do cat /proc/$f/status ; done&lt;br /&gt;
&lt;br /&gt;
and scanning for a hostname that is either similar to their original hostname, or that you don&#039;t see in any of the quad safe files.&lt;br /&gt;
&lt;br /&gt;
This is very brute force though, and it is possible that catting every file in /proc is dangerous - I don&#039;t recommend it.  A better thing would be to identify any processes that you know belong to this system – perhaps the reason you are trying to find this system is because they are running something bad - and just catting the status from only that PID.&lt;br /&gt;
&lt;br /&gt;
Somewhere there’s a jail where there may be 2 systems named www.  Look at /etc/rc.conf and make sure they’re both really www. If they are, jailkill www, jailps www to make sure not running.  Then immediately restart the other one, as the fqdn (as found from a rev nslookup)&lt;br /&gt;
&lt;br /&gt;
* on &amp;gt;=6.x the hostname may not yet be hashed:&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9 /# jls&lt;br /&gt;
 JID Hostname                    Path                                  IP Address(es)&lt;br /&gt;
   1 bitnet.dgate.org            /mnt/data1/69.55.232.50-col02094-DIR  69.55.232.50&lt;br /&gt;
   2 ns3.hctc.net                /mnt/data1/69.55.234.52-col01925-DIR  69.55.234.52&lt;br /&gt;
   3 bsd1                        /mnt/data1/69.55.232.44-col00155-DIR  69.55.232.44&lt;br /&gt;
   4 let2.bbag.org               /mnt/data1/69.55.230.92-col00202-DIR  69.55.230.92&lt;br /&gt;
   5 post.org                    /mnt/data2/69.55.232.51-col02095-DIR  69.55.232.51 ...&lt;br /&gt;
   6 ns2                         /mnt/data1/69.55.232.47-col01506-DIR  69.55.232.47 ...&lt;br /&gt;
   7 arlen.server.net            /mnt/data1/69.55.232.52-col01171-DIR  69.55.232.52&lt;br /&gt;
   8 deskfood.com                /mnt/data1/69.55.232.71-col00419-DIR  69.55.232.71&lt;br /&gt;
   9 mirage.confluentforms.com   /mnt/data1/69.55.232.54-col02105-DIR  69.55.232.54 ...&lt;br /&gt;
  10 beachmember.com             /mnt/data1/69.55.232.59-col02107-DIR  69.55.232.59&lt;br /&gt;
  11 www.agottem.com             /mnt/data1/69.55.232.60-col02109-DIR  69.55.232.60&lt;br /&gt;
  12 sdhobbit.myglance.org       /mnt/data1/69.55.236.82-col01708-DIR  69.55.236.82&lt;br /&gt;
  13 ns1.jnielsen.net            /mnt/data1/69.55.234.48-col00204-DIR  69.55.234.48 ...&lt;br /&gt;
  14 ymt.rollingegg.net          /mnt/data2/69.55.236.71-col01678-DIR  69.55.236.71&lt;br /&gt;
  15 verse.unixlore.net          /mnt/data1/69.55.232.58-col02131-DIR  69.55.232.58&lt;br /&gt;
  16 smcc-mail.org               /mnt/data2/69.55.232.68-col02144-DIR  69.55.232.68&lt;br /&gt;
  17 kasoutsuki.w4jdh.net        /mnt/data2/69.55.232.46-col02147-DIR  69.55.232.46&lt;br /&gt;
  18 dili.thium.net              /mnt/data2/69.55.232.80-col01901-DIR  69.55.232.80&lt;br /&gt;
  20 www.tekmarsis.com           /mnt/data2/69.55.232.66-col02155-DIR  69.55.232.66&lt;br /&gt;
  21 vps.yoxel.net               /mnt/data2/69.55.236.67-col01673-DIR  69.55.236.67&lt;br /&gt;
  22 smitty.twitalertz.com       /mnt/data2/69.55.232.84-col02153-DIR  69.55.232.84&lt;br /&gt;
  23 deliver4.klatha.com         /mnt/data2/69.55.232.67-col02160-DIR  69.55.232.67&lt;br /&gt;
  24 nideffer.com                /mnt/data2/69.55.232.65-col00412-DIR  69.55.232.65&lt;br /&gt;
  25 usa.hanyuan.com             /mnt/data2/69.55.232.57-col02163-DIR  69.55.232.57&lt;br /&gt;
  26 daifuku.ppbh.com            /mnt/data2/69.55.236.91-col01720-DIR  69.55.236.91&lt;br /&gt;
  27 collins.greencape.net       /mnt/data2/69.55.232.83-col01294-DIR  69.55.232.83&lt;br /&gt;
  28 ragebox.com                 /mnt/data2/69.55.230.104-col01278-DIR 69.55.230.104&lt;br /&gt;
  29 outside.mt.net              /mnt/data2/69.55.232.72-col02166-DIR  69.55.232.72&lt;br /&gt;
  30 vps.payneful.ca             /mnt/data2/69.55.234.98-col01999-DIR  69.55.234.98&lt;br /&gt;
  31 higgins                     /mnt/data2/69.55.232.87-col02165-DIR  69.55.232.87 ...&lt;br /&gt;
  32 ozymandius                  /mnt/data2/69.55.228.96-col01233-DIR  69.55.228.96&lt;br /&gt;
  33 trusted.realtors.org        /mnt/data2/69.55.238.72-col02170-DIR  69.55.238.72&lt;br /&gt;
  34 jc1.flanderous.com          /mnt/data2/69.55.239.22-col01504-DIR  69.55.239.22&lt;br /&gt;
  36 guppylog.com                /mnt/data2/69.55.238.73-col00036-DIR  69.55.238.73&lt;br /&gt;
  40 haliohost.com               /mnt/data2/69.55.234.41-col01916-DIR  69.55.234.41 ...&lt;br /&gt;
  41 satyr.jorge.cc              /mnt/data1/69.55.232.70-col01963-DIR  69.55.232.70&lt;br /&gt;
jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
ERROR: jail_: jail &amp;quot;satyr,jorge,cc&amp;quot; not found&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note how it&#039;s saying &amp;lt;tt&amp;gt;satyr,jorge,cc&amp;lt;/tt&amp;gt; is not found, and not &amp;lt;tt&amp;gt;satyr.jorge.cc&amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The jail subsystem tracks things using comma-delimited hostnames. That is created every few hours:&lt;br /&gt;
&lt;br /&gt;
 jail9 /# crontab -l&lt;br /&gt;
 0 0,6,12,18 * * * /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
So if we run this manually:&lt;br /&gt;
 jail9 /# /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
Then kill the jail:&lt;br /&gt;
 jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
 successfully killed: satyr,jorge,cc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It worked.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you ever see this when trying to kill a jail:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# jailkill e-scribe.com&lt;br /&gt;
killing JID: 6 hostname: e-scribe.com&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailkill|jailkill]]&amp;lt;/tt&amp;gt; probably got lost trying to kill off the jail. Just ctrl-c the jailkill process, then run a jailps on the hostname, and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; any process which is still running. Keep running jailps and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; till all processes are gone.&lt;br /&gt;
&lt;br /&gt;
== jailpsall ==&lt;br /&gt;
 jailpsall&lt;br /&gt;
will run a jailps on all jails configured in the quad files (this is different from&lt;br /&gt;
jailps with no arguments as it won’t help you find a “hidden” system)&lt;br /&gt;
&lt;br /&gt;
== jailpsw ==&lt;br /&gt;
 jailpsw&lt;br /&gt;
will run a jailps with an extra -w to provide wider output&lt;br /&gt;
&lt;br /&gt;
== jt (&amp;gt;=7.x) ==&lt;br /&gt;
 jt&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;gt;=7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
a wrapper for top displaying processes on the server and which jail owns them. Constantly updates, like top. &lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;lt;7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== stopjail ==&lt;br /&gt;
 stopjail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will jailkill, umount and vnconfig –u a jail. If passed an optional 2nd&lt;br /&gt;
argument, it will not exit before umounting and un-vnconfig’ing in the event&lt;br /&gt;
jailkill returns no processes killed. This is useful if you just want to umount&lt;br /&gt;
and vnconfig –u a jail you’ve already killed. It is intelligent in that it won’t &lt;br /&gt;
try to umount or vnconfig –u if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== startjail ==&lt;br /&gt;
 startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
this will start vnconfig, mount (including linprocfs and null-mounts), and start a jail.&lt;br /&gt;
Essentially, it reads the jail’s relevant block from the right quad file and executes it.&lt;br /&gt;
It is intelligent in that it won’t try to mount or vnconfig if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== jpid ==&lt;br /&gt;
 jpid &amp;lt;pid&amp;gt;&lt;br /&gt;
displays information about a process – including which jail owns it.&lt;br /&gt;
It’s the equivalent of running cat /proc/&amp;lt;pid&amp;gt;/status&lt;br /&gt;
&lt;br /&gt;
== canceljail ==&lt;br /&gt;
 canceljail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will stop a jail (the equivalent of stopjail), check for backups (offer to remove them &lt;br /&gt;
from the backup server and the backup.config), rename the vnfile, remove the dir, and &lt;br /&gt;
edit quad/safe. If passed an optional 2nd argument, it will not exit upon failing to kill&lt;br /&gt;
and processes owned by the jail. This is useful if you just want to cancel a jail which &lt;br /&gt;
is already stopped.&lt;br /&gt;
&lt;br /&gt;
== jls ==&lt;br /&gt;
 jls [-v]&lt;br /&gt;
Lists all jails running:&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF IP Address      Hostname                     Path&lt;br /&gt;
 101  135 69.55.224.148   mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#REF is the number of references or procs(?) running&lt;br /&gt;
&lt;br /&gt;
Running with -v will give you all IPs assigned to each jail (7.2 up)&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF Hostname                     Path                                  IP Address(es)&lt;br /&gt;
 101  139 mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR 69.55.224.14869.55.234.85&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== startalljails ==&lt;br /&gt;
 startalljails&lt;br /&gt;
7.2+ only. This will parse through quad1 and start all jails. It utilizes lockfiles so it won’t try to start a jail more than once- therefore multiple instances can be running in parallel without fear of starting a jail twice. If a jail startup gets stuck, you can ^C without fear of killing the script. IMPORTANT- before running startalljails you should make sure you ran preboot once as it will clear out all the lockfiles and enable startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== aaccheck.sh ==&lt;br /&gt;
 aaccheck.sh&lt;br /&gt;
displayes the output of container list and task list from aaccli&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update jail scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== buildsafe ==&lt;br /&gt;
 buildsafe&lt;br /&gt;
creates safe files based on quads (automatically removing the fsck’s). This will destructively overwrite safe files&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be setup as a cronjob to watch processes on a jail when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== checkprio.pl ==&lt;br /&gt;
 checkprio.pl&lt;br /&gt;
will look for any process (other than the current shell’s csh, sh, sshd procs) with a non-normal priority and normalize it&lt;br /&gt;
&lt;br /&gt;
== diskusagemon == &lt;br /&gt;
 diskusagemon &amp;lt;mount point&amp;gt; &amp;lt;1k blocks&amp;gt;&lt;br /&gt;
watches a mount point’s disk use, when it reaches the level specified in the 2nd argument,&lt;br /&gt;
it exits. This is useful when doing a restore and you want to be paged as it’s nearing completion.&lt;br /&gt;
Best used as: &amp;lt;tt&amp;gt;diskusagemon /asd/asd 1234; pagexxx&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dumprestore ==&lt;br /&gt;
 dumprestore &amp;lt;dumpfile&amp;gt;&lt;br /&gt;
this is a perl expect script which automatically enters ‘1’ and ‘y’. It seems to cause restore to fail&lt;br /&gt;
to set owner permissions on large restores.&lt;br /&gt;
&lt;br /&gt;
== g ==&lt;br /&gt;
 g &amp;lt;search&amp;gt;&lt;br /&gt;
greps the quad/safe files for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gather.pl ==&lt;br /&gt;
 gather.pl&lt;br /&gt;
gathers up data about jails configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== ipfwbackup ==&lt;br /&gt;
 ipfwbackup&lt;br /&gt;
writes ipfw traffic count data to a logfile&lt;br /&gt;
&lt;br /&gt;
== ipfwreset ==&lt;br /&gt;
 ipfwreset&lt;br /&gt;
writes ipfw traffic count data to a logfile and resets counters to 0&lt;br /&gt;
&lt;br /&gt;
== js ==&lt;br /&gt;
 js&lt;br /&gt;
output varies by OS version, but generally provides information about the base jail:&lt;br /&gt;
- which vn’s are in use&lt;br /&gt;
- disk usage&lt;br /&gt;
- info about the contents of quads&lt;br /&gt;
- the # of inodes represented by the jails contained in the group (133.2 in the example below), and how many jails per data mount, as well as subtotals&lt;br /&gt;
- ips bound to the base machine but not in use by a jail&lt;br /&gt;
- free gvinum volumes, or unused vn’s or used md’s&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/local/jail/rc.d/quad1:&lt;br /&gt;
        /mnt/data1 133.2 (1)&lt;br /&gt;
        /mnt/data2 1040.5 (7)&lt;br /&gt;
        total 1173.7 (8)&lt;br /&gt;
/usr/local/jail/rc.d/quad2:&lt;br /&gt;
        /mnt/data1 983.4 (6)&lt;br /&gt;
        total 983.4 (6)&lt;br /&gt;
/usr/local/jail/rc.d/quad3:&lt;br /&gt;
        /mnt/data1 693.4 (4)&lt;br /&gt;
        /mnt/data2 371.6 (3)&lt;br /&gt;
        total 1065 (7)&lt;br /&gt;
/usr/local/jail/rc.d/quad4:&lt;br /&gt;
        /mnt/data1 466.6 (3)&lt;br /&gt;
        /mnt/data2 882.2 (5)&lt;br /&gt;
        total 1348.8 (8)&lt;br /&gt;
/mnt/data1: 2276.6 (14)&lt;br /&gt;
/mnt/data2: 2294.3 (15)&lt;br /&gt;
&lt;br /&gt;
Available IPs:&lt;br /&gt;
69.55.230.11 69.55.230.13 69.55.228.200&lt;br /&gt;
&lt;br /&gt;
Available volumes:&lt;br /&gt;
v78 /mnt/data2 2G&lt;br /&gt;
v79 /mnt/data2 2G&lt;br /&gt;
v80 /mnt/data2 2G&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg  - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== makevirginjail ==&lt;br /&gt;
 makevirginjail&lt;br /&gt;
Only on some systems, makes an empty jail (doesn&#039;t do restore step)&lt;br /&gt;
&lt;br /&gt;
== mb == &lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== notify.sh ==&lt;br /&gt;
 notify.sh&lt;br /&gt;
emails reboot@johncompanies.com – intended to be called at boot time to alert us to a machine which panics and reboots and isn’t caught by bb or castle.&lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatch ==&lt;br /&gt;
 orphanedbackupwatch&lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to delete them&lt;br /&gt;
&lt;br /&gt;
== postboot ==&lt;br /&gt;
 postboot&lt;br /&gt;
to be run after a machine reboot and quad/safe’s are done executing. It will:&lt;br /&gt;
* do chmod 666 on each jail’s /dev/null&lt;br /&gt;
* add ipfw counts&lt;br /&gt;
* run jailpsall (so you can see if a configured jail isn’t running)&lt;br /&gt;
&lt;br /&gt;
== preboot ==&lt;br /&gt;
 preboot&lt;br /&gt;
to be run before running quad/safe – checks for misconfigurations: &lt;br /&gt;
* a jail configured in a quad but not a safe&lt;br /&gt;
* a jail is listed more than once in a quad&lt;br /&gt;
* the ip assigned to a jail isn’t configured on the machine&lt;br /&gt;
* alias numbering skips in the rc.conf (resulting in the above)&lt;br /&gt;
* orphaned vnfile&#039;s that aren&#039;t mentioned in a quad/safe&lt;br /&gt;
* ip mismatches between dir/vnfile name and the jail’s ip&lt;br /&gt;
* dir/vnfiles&#039;s in quad/safe that don’t exist &lt;br /&gt;
&lt;br /&gt;
== quadanalyze.pl ==&lt;br /&gt;
 quadanalyze.pl&lt;br /&gt;
called by js, produces the info (seen above with js explanation) about the contents of quad (inode count, # of jails, etc.)&lt;br /&gt;
&lt;br /&gt;
== rsync.backup ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== taskdone ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was executed as the subject&lt;br /&gt;
&lt;br /&gt;
== topten ==&lt;br /&gt;
 topten&lt;br /&gt;
summarizes the top 10 traffic users (called by ipfwreset)&lt;br /&gt;
&lt;br /&gt;
== trafficgather.pl ==&lt;br /&gt;
 trafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by jail to support@johncompanies.com and payments@johncompanies.com. Optional arguments are year and month (must be in the past). If not passed, assumes last month. Relies on traffic logs created by ipfwreset and ipfwbackup.&lt;br /&gt;
&lt;br /&gt;
== trafficwatch.pl ==&lt;br /&gt;
 trafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a jail reaches the warning level (35G) and the limit (40G). We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== trafstats ==&lt;br /&gt;
 trafstats&lt;br /&gt;
writes ipfw traffic usage info by jail to a file called jc_traffic_dump in each jail’s / dir&lt;br /&gt;
&lt;br /&gt;
== truncate_jailmake ==&lt;br /&gt;
 truncate_jailmake&lt;br /&gt;
a version of jailmake which creates truncated vnfiles.&lt;br /&gt;
&lt;br /&gt;
== vb ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/bin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs (freebsd) ==&lt;br /&gt;
 vs&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/safe&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 vq&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/quad&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dumpremote ==&lt;br /&gt;
 dumpremote &amp;lt;user@machine&amp;gt; &amp;lt;/remote/location/file-dump&amp;gt; &amp;lt;vnX&amp;gt;&lt;br /&gt;
ex: dumpremote user@10.1.4.117 /mnt/data3/remote.echoditto.com-dump 7&lt;br /&gt;
this will dump a vn filesystem to a remote machine and location&lt;br /&gt;
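Under the hood this is presumably just a dump piped over ssh. A minimal sketch, assuming that internal pipeline (shown as a dry run that prints the command rather than executing it, since dump and a live ssh target are needed for the real thing; paths and host are taken from the example above):

```shell
# Hypothetical sketch of the pipeline dumpremote likely builds.
# Dry run: we echo the command instead of executing it.
vn=7                                          # the <vnX> argument
target="user@10.1.4.117"                      # the <user@machine> argument
remotefile="/mnt/data3/remote.echoditto.com-dump"
echo "dump -0 -a -f - /dev/vn${vn} | ssh ${target} 'cat > ${remotefile}'"
```

Level 0 (-0) gives a full dump written to stdout (-f -); the far side just cats it into the target file.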
&lt;br /&gt;
== oversellcheck ==&lt;br /&gt;
 oversellcheck&lt;br /&gt;
displays how much a disk is oversold or undersold taking into account truncated vn files. Only for use on 4.x systems&lt;br /&gt;
&lt;br /&gt;
== mvbackups (freebsd) ==&lt;br /&gt;
 mvbackups &amp;lt;dir&amp;gt; (1.1.1.1-col00001-DIR) &amp;lt;target_machine&amp;gt; (jail1) &amp;lt;target_dir&amp;gt; (data1)&lt;br /&gt;
moves backups from one location to another on the backup server, and provides you with option to remove entries from current backup.config, and simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== jailnice ==&lt;br /&gt;
 jailnice &amp;lt;hostname&amp;gt;&lt;br /&gt;
applies &amp;lt;tt&amp;gt;renice 19 [PID]&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;rtprio 31 -[PID]&amp;lt;/tt&amp;gt; to each process in the given jail&lt;br /&gt;
&lt;br /&gt;
== dumpremoterestore ==&lt;br /&gt;
 dumpremoterestore &amp;lt;device&amp;gt; &amp;lt;ip of target machine&amp;gt; &amp;lt;dir on target machine&amp;gt;&lt;br /&gt;
ex: dumpremoterestore /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&lt;br /&gt;
dumps a device and restores it to a directory on a remote machine. Requires that you enable root ssh on the &lt;br /&gt;
remote machine.&lt;br /&gt;
&lt;br /&gt;
== psj ==&lt;br /&gt;
 psj&lt;br /&gt;
shows just the procs running on the base system – a ps auxw but without jail’d procs present&lt;br /&gt;
&lt;br /&gt;
== perc5iraidchk ==&lt;br /&gt;
 perc5iraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2950 systems with Perc5/6 controllers&lt;br /&gt;
&lt;br /&gt;
== perc4eraidchk ==&lt;br /&gt;
 perc4eraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2850 systems with Perc4e/Di controllers&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS =&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vm) ==&lt;br /&gt;
&lt;br /&gt;
This applies only to new virts &amp;gt;= 4.x&lt;br /&gt;
&lt;br /&gt;
grab ip from ipmap (if opened from the pending cust screen it should take you to the right block). You can also run vzlist -a to see what block is in use, generally. Try to find an IP that&#039;s in the same block of class C IP&#039;s already on the box.&lt;br /&gt;
&lt;br /&gt;
1. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
  also, ping the IP address.&lt;br /&gt;
&lt;br /&gt;
2. put CT on whichever partition has more space&lt;br /&gt;
  df -h&lt;br /&gt;
&lt;br /&gt;
3.  vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; ) &lt;br /&gt;
 vm col00009 69.55.230.238 centos.testdave.com /vz1 dsmith@johncompanies.com centos-6-x86_64 LM&lt;br /&gt;
&lt;br /&gt;
4. copy veid, dir, ip and password to pending customer screen. activate customer&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vemakexxx) ==&lt;br /&gt;
&lt;br /&gt;
This applies to older virts with old templates. This should probably not be used at all anymore.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. look thru hist for ip&lt;br /&gt;
&lt;br /&gt;
2. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
&lt;br /&gt;
3. put ve on whichever partition has more space&lt;br /&gt;
 vemakerh9 &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; &amp;lt;hostname&amp;gt; &amp;lt;mount&amp;gt; &amp;lt;email&amp;gt; [gb disk]; &amp;lt;256|384|512&amp;gt; &amp;lt;veid&amp;gt;&lt;br /&gt;
 vemakerh9 866 69.55.226.109 ngentu.com /vz1 ayo@ngantu.com,asd@asd.com 5; 256 866&lt;br /&gt;
&lt;br /&gt;
4. copy (veid), dir, and ip to pending customer screen (pass set to p455agfa)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: We use VEID (Virtual Environment ID) and CTID (Container ID) interchangeably. Similarly, VE and CT. They mean the same thing.&lt;br /&gt;
VZPP = VirtuoZzo Power Panel (the control panel for each CT)&lt;br /&gt;
&lt;br /&gt;
All linux systems exist in /vz, /vz1 or /vz2 - since each linux machine holds roughly 60-90 customers, there will be roughly 30-45 in each partition.&lt;br /&gt;
&lt;br /&gt;
The actual filesystem of the system in question is in:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/private/(VEID)&lt;br /&gt;
&lt;br /&gt;
Where VEID is the identifier for that system - an all-numeric string larger than 100.&lt;br /&gt;
&lt;br /&gt;
The actual mounted and running systems are in the corresponding:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/root/(VEID)&lt;br /&gt;
&lt;br /&gt;
But we rarely interact with any system from this mount point.&lt;br /&gt;
&lt;br /&gt;
You should never need to touch the root portion of their system – however you can traverse their filesystem by going to &amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/root&amp;lt;/tt&amp;gt; (&amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/fs/root&amp;lt;/tt&amp;gt; on 4.x systems). The root of their filesystem is in that directory, and their entire system is underneath it.&lt;br /&gt;
&lt;br /&gt;
Every VE has a startup script in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts&amp;lt;/tt&amp;gt;  (which is symlinked as &amp;lt;tt&amp;gt;/vzconf&amp;lt;/tt&amp;gt; on all systems) - the VE startup script is simply named &amp;lt;tt&amp;gt;(VEID).conf&amp;lt;/tt&amp;gt; - it contains all the system parameters for that VE:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Configuration file generated by vzsplit for 60 VE&lt;br /&gt;
# on HN with total amount of physical mem 2011 Mb&lt;br /&gt;
&lt;br /&gt;
VERSION=&amp;quot;2&amp;quot;&lt;br /&gt;
CLASSID=&amp;quot;2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ONBOOT=&amp;quot;yes&amp;quot;&lt;br /&gt;
&lt;br /&gt;
KMEMSIZE=&amp;quot;8100000:8200000&amp;quot;&lt;br /&gt;
LOCKEDPAGES=&amp;quot;322:322&amp;quot;&lt;br /&gt;
PRIVVMPAGES=&amp;quot;610000:615000&amp;quot;&lt;br /&gt;
SHMPAGES=&amp;quot;33000:34500&amp;quot;&lt;br /&gt;
NUMPROC=&amp;quot;410:415&amp;quot;&lt;br /&gt;
PHYSPAGES=&amp;quot;0:2147483647&amp;quot;&lt;br /&gt;
VMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
OOMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
NUMTCPSOCK=&amp;quot;1210:1215&amp;quot;&lt;br /&gt;
NUMFLOCK=&amp;quot;107:117&amp;quot;&lt;br /&gt;
NUMPTY=&amp;quot;19:19&amp;quot;&lt;br /&gt;
NUMSIGINFO=&amp;quot;274:274&amp;quot;&lt;br /&gt;
TCPSNDBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
TCPRCVBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
OTHERSOCKBUF=&amp;quot;900000:950000&amp;quot;&lt;br /&gt;
DGRAMRCVBUF=&amp;quot;200000:200000&amp;quot;&lt;br /&gt;
NUMOTHERSOCK=&amp;quot;650:660&amp;quot;&lt;br /&gt;
DCACHE=&amp;quot;786432:818029&amp;quot;&lt;br /&gt;
NUMFILE=&amp;quot;7500:7600&amp;quot;&lt;br /&gt;
AVNUMPROC=&amp;quot;51:51&amp;quot;&lt;br /&gt;
IPTENTRIES=&amp;quot;155:155&amp;quot;&lt;br /&gt;
DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
DISKINODES=&amp;quot;400000:420000&amp;quot;&lt;br /&gt;
CPUUNITS=&amp;quot;1412&amp;quot;&lt;br /&gt;
QUOTAUGIDLIMIT=&amp;quot;2000&amp;quot;&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/636&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/636&amp;quot;&lt;br /&gt;
NAMESERVER=&amp;quot;69.55.225.225 69.55.230.3&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;vzredhat-7.3/20030305&amp;quot;&lt;br /&gt;
VE_TYPE=&amp;quot;regular&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.225.229&amp;quot;&lt;br /&gt;
HOSTNAME=&amp;quot;textengine.net&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the hostname is set here, the disk space is set here, the number of inodes, the number of files that can be open, the number of tcp sockets, etc. - all are set here.&lt;br /&gt;
&lt;br /&gt;
In fact, everything that can be set on this customer system is set in this conf file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All interaction with the customer system is done with the VEID.  You start the system by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl start 999&lt;br /&gt;
&lt;br /&gt;
You stop it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl stop 999&lt;br /&gt;
&lt;br /&gt;
You execute commands in it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl exec 999 df -k&lt;br /&gt;
&lt;br /&gt;
You enter into it, via a root-shell backdoor with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and you set parameters for the system, while it is still running, with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; is the most commonly used command - we have aliased &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; since we use it so often. We’ll continue to use &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; in our examples, but feel free to use just &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say the user wants more diskspace.  You can cat their conf file and see:&lt;br /&gt;
&lt;br /&gt;
 DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So right now they have 4gigs of space.  You can then change it to 6 with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
IMPORTANT:  all issuances of the vzctl set command need to end with &amp;lt;tt&amp;gt;--save&amp;lt;/tt&amp;gt; - if they don&#039;t, the setting will be set, but it will not be saved to the conf file, and they will not have those settings next time they boot.&lt;br /&gt;
&lt;br /&gt;
All of the tunables in the conf file can be set with the vzctl set command.  Note that in the conf file, and on the vzctl set command line, we always issue two numbers separated by a colon - that is because we are setting the soft and hard limits.  Always set the hard limit slightly above the soft limit, as you see it is in the conf file for all those settings.&lt;br /&gt;
&lt;br /&gt;
There are also things you can set with `&amp;lt;tt&amp;gt;vzctl set&amp;lt;/tt&amp;gt;` that are not in the conf file as settings, per se.  For instance, you can add IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --save&lt;br /&gt;
&lt;br /&gt;
or multiple IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --ipadd 10.10.20.30 --save&lt;br /&gt;
&lt;br /&gt;
or change the hostname:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --hostname www.example.com --save&lt;br /&gt;
&lt;br /&gt;
You can even set the nameservers:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --nameserver 198.78.66.4 --nameserver 198.78.70.180 --save&lt;br /&gt;
&lt;br /&gt;
Although you probably will never do that.&lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --disabled yes --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (&amp;lt;=3.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --onboot=no --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from using his control panel:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --offline_management=no --save &lt;br /&gt;
&lt;br /&gt;
You can suspend a VPS, so it can be resumed in the same state it was in when it was stopped (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl suspend 999&lt;br /&gt;
&lt;br /&gt;
and to resume it:&lt;br /&gt;
&lt;br /&gt;
 vzctl resume 999&lt;br /&gt;
&lt;br /&gt;
to see who owns a process:&lt;br /&gt;
 vzpid &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to mount up an unmounted ve:&lt;br /&gt;
 vzctl mount 827&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see network stats for CT&#039;s:&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One thing that sometimes comes up on older systems that we created with smaller defaults is that the system runs out of inodes.  The user will email and say they cannot create any more files or grow any files larger, but they will also say that they are not out of diskspace ... they are running:&lt;br /&gt;
&lt;br /&gt;
 df -k&lt;br /&gt;
&lt;br /&gt;
and seeing how much space is free - and they are not out of space.  They are most likely out of inodes - which they would see by running:&lt;br /&gt;
&lt;br /&gt;
 df -i&lt;br /&gt;
&lt;br /&gt;
So, the first thing you should do is enter their system with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and run:  &amp;lt;tt&amp;gt;df -i&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to confirm your theory.  Then exit their system.  Then simply cat their conf file and see what their inodes are set to (probably 200000:200000, since that was the old default on the older systems) and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskinodes 400000:400000 --save&lt;br /&gt;
&lt;br /&gt;
If they are not out of inodes, then a good possibility is that they have maxed out their numfile configuration variable, which controls how many files they can have in their system.  The current default is 7500 (which nobody has ever hit), but the old default was as low as 2000, so you would run something like:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --numfile 7500:7500 --save&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You cannot start or stop a VE if your pwd is its private (/vz/private/999) or root (/vz/root/999) directory, or anywhere below them.&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (linux) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all ve’s back up as soon as possible. Note the time, you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages output. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log – these will also likely need to be sent to virtuozzo for interpretation. If the messages are spewing too fast, hit ^O + H to start a screen log dump which you can observe after the machine is rebooted. Additionally, if the machine is responsive, you can get a trace to send to virtuozzo by hooking up a kvm and entering these 3 sequences:&lt;br /&gt;
&amp;lt;pre&amp;gt;alt+print screen+m&lt;br /&gt;
alt+print screen+p&lt;br /&gt;
alt+print screen+t&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If there are no messages, the machine may just be really busy- wait a bit (5-10min) to see if it comes back. If it&#039;s still pinging, odds are its very busy. If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. if you can’t ssh into the DRAC card and issue racadm serveraction hardreset), then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console (&amp;lt;tt&amp;gt;tip virtxx&amp;lt;/tt&amp;gt;) immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts- look out for the RAID card output- does everything look healthy? The output may be scrambled, look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple times during the boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly then you won’t get any console output, 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
=== Start containers/VE&#039;s/VPSs ===&lt;br /&gt;
When the machine begins to start VE’s, it’s safe to leave the console and login via ssh. All virts should be set to auto start all the VEs after a crash. Further, most (newer) virts are set to “fastboot” their VE’s (to find out, do:&lt;br /&gt;
 grep -i fast /etc/sysconfig/vz &lt;br /&gt;
and look for &amp;lt;tt&amp;gt;VZFASTBOOT=yes&amp;lt;/tt&amp;gt;). If this was set prior to the machine’s crash (setting it after the machine boots will not have any effect until the vz service is restarted), it will start each ve as fast as possible, in serial, then go thru each VE (serially), shutting it down, running a vzquota (disk usage) check, then bringing it back up. The benefit is that all VE’s are brought up quickly (within 15min or so depending on the #); the downside is that a customer watching closely will notice 2 outages – 1st the machine crash, 2nd their quota check (which will be a much shorter downtime, on the order of a few minutes). &lt;br /&gt;
&lt;br /&gt;
Where “fastboot” is not set to yes (i.e. on quar1), vz will start them consecutively, checking the quotas one at a time, and the 60th VE may not start until an hour or two later - this is not acceptable.&lt;br /&gt;
&lt;br /&gt;
The good news is, if you run vzctl start for a VE that is already started, you will simply get an error: &amp;lt;tt&amp;gt;VE is already started&amp;lt;/tt&amp;gt;.  Further, if you attempt to vzctl start a VE that is in the process of being started, you will simply get an error: &amp;lt;tt&amp;gt;unable to lock VE&amp;lt;/tt&amp;gt;.  So, there is no danger in simply running scripts to start smaller sets of VEs.  If the system is not autostarting, then there is no issue, and even if it does, when it conflicts, one process (yours or the autostart) will lose, and just move on to the next one.&lt;br /&gt;
&lt;br /&gt;
A script has been written to assist with ve starts: [[#startvirt.pl|startvirt.pl]] which will start 6 ve’s at once until there are no more left.  If startvirt.pl  is used on a system where “fastboot” was on,  it will circumvent the fastboot for ve’s started by startvirt.pl – they will go through the complete quota check before starting- therefore this is not advisable when a system has crashed. When a system is booted cleanly, and there&#039;s no need for vzquota checks, then startvirt.pl is safe and advisable to run.&lt;br /&gt;
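Since vzctl start is safe to race (a duplicate start just errors out, as noted above), the core of a startvirt.pl-style starter can be sketched with xargs. This is a hypothetical reimplementation, not the actual script, shown as a dry run that echoes the commands; the VEID list would really come from vs or vzlist:

```shell
# Start up to 6 VEs at a time (sketch). Replace `echo` with the real
# command on a live system; here we just print what would run.
# The printf is a stand-in for a real list of down VEIDs.
printf '101\n102\n103\n104\n' |
  xargs -P 6 -I{} echo vzctl start {}
```

With -P 6, xargs keeps 6 starts in flight at once, launching the next as each finishes; output order is therefore not guaranteed.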
&lt;br /&gt;
=== Make sure all containers are running ===&lt;br /&gt;
You can quickly get a feel for how many ve’s are started by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt4 log]# vs&lt;br /&gt;
VEID 16066 exist mounted running&lt;br /&gt;
VEID 16067 exist mounted running&lt;br /&gt;
VEID 4102 exist mounted running&lt;br /&gt;
VEID 4112 exist mounted running&lt;br /&gt;
VEID 4116 exist mounted running&lt;br /&gt;
VEID 4122 exist mounted running&lt;br /&gt;
VEID 4123 exist mounted running&lt;br /&gt;
VEID 4124 exist mounted running&lt;br /&gt;
VEID 4132 exist mounted running&lt;br /&gt;
VEID 4148 exist mounted running&lt;br /&gt;
VEID 4151 exist mounted running&lt;br /&gt;
VEID 4155 exist mounted running&lt;br /&gt;
VEID 42 exist mounted running&lt;br /&gt;
VEID 432 exist mounted running&lt;br /&gt;
VEID 434 exist mounted running&lt;br /&gt;
VEID 442 exist mounted running&lt;br /&gt;
VEID 450 exist mounted running&lt;br /&gt;
VEID 452 exist mounted running&lt;br /&gt;
VEID 453 exist mounted running&lt;br /&gt;
VEID 454 exist mounted running&lt;br /&gt;
VEID 462 exist mounted running&lt;br /&gt;
VEID 463 exist mounted running&lt;br /&gt;
VEID 464 exist mounted running&lt;br /&gt;
VEID 465 exist mounted running&lt;br /&gt;
VEID 477 exist mounted running&lt;br /&gt;
VEID 484 exist mounted running&lt;br /&gt;
VEID 486 exist mounted running&lt;br /&gt;
VEID 490 exist mounted running&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So to see how many ve’s have started:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep running | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And to see how many haven’t:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep down | wc -l&lt;br /&gt;
     0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And how many we should have running:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another tool you can use to see which ve’s have started, among other things, is [[#vzstat|vzstat]]. It will give you CPU, memory, and other stats on each ve and the overall system. It’s a good thing to watch as ve’s are starting (note the VENum parameter, it will tell you how many have started):&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;4:37pm, up 3 days,  5:31,  1 user, load average: 1.57, 1.68, 1.79&lt;br /&gt;
VENum 40, procs 1705: running 2, sleeping 1694, unint 0, zombie 9, stopped 0&lt;br /&gt;
CPU [ OK ]: VEs  57%, VE0   0%, user   8%, sys   7%, idle  85%, lat(ms) 412/2&lt;br /&gt;
Mem [ OK ]: total 6057MB, free 9MB/54MB (low/high), lat(ms) 0/0&lt;br /&gt;
Swap [ OK ]: tot 6142MB, free 4953MB, in 0.000MB/s, out 0.000MB/s&lt;br /&gt;
Net [ OK ]: tot: in  0.043MB/s  402pkt/s, out  0.382MB/s 4116pkt/s&lt;br /&gt;
Disks [ OK ]: in 0.002MB/s, out 0.000MB/s&lt;br /&gt;
&lt;br /&gt;
  VEID ST    %VM     %KM         PROC    CPU     SOCK FCNT MLAT IP&lt;br /&gt;
     1 OK 1.0/17  0.0/0.4    0/32/256 0.0/0.5 39/1256    0    9 69.55.227.152&lt;br /&gt;
    21 OK 1.3/39  0.1/0.2    0/46/410 0.2/2.8 23/1860    0    6 69.55.239.60&lt;br /&gt;
   133 OK 3.1/39  0.1/0.3    1/34/410 6.3/2.8 98/1860    0    0 69.55.227.147&lt;br /&gt;
   263 OK 2.3/39  0.1/0.2    0/56/410 0.3/2.8 34/1860    0    1 69.55.237.74&lt;br /&gt;
   456 OK  17/39  0.1/0.2   0/100/410 0.1/2.8 48/1860    0   11 69.55.236.65&lt;br /&gt;
   476 OK 0.6/39  0.0/0.2    0/33/410 0.1/2.8 96/1860    0   10 69.55.227.151&lt;br /&gt;
   524 OK 1.8/39  0.1/0.2    0/33/410 0.0/2.8 28/1860    0    0 69.55.227.153&lt;br /&gt;
   594 OK 3.1/39  0.1/0.2    0/45/410 0.0/2.8 87/1860    0    1 69.55.239.40&lt;br /&gt;
   670 OK 7.7/39  0.2/0.3    0/98/410 0.0/2.8 64/1860    0  216 69.55.225.136&lt;br /&gt;
   691 OK 2.0/39  0.1/0.2    0/31/410 0.0/0.7 25/1860    0    1 69.55.234.96&lt;br /&gt;
   744 OK 0.1/17  0.0/0.5    0/10/410 0.0/0.7  7/1860    0    6 69.55.224.253&lt;br /&gt;
   755 OK 1.1/39  0.0/0.2    0/27/410 0.0/2.8 33/1860    0    0 192.168.1.4&lt;br /&gt;
   835 OK 1.1/39  0.0/0.2    0/19/410 0.0/2.8  5/1860    0    0 69.55.227.134&lt;br /&gt;
   856 OK 0.3/39  0.0/0.2    0/13/410 0.0/2.8 16/1860    0    0 69.55.227.137&lt;br /&gt;
   936 OK 3.2/52  0.2/0.4    0/75/410 0.2/0.7 69/1910    0    8 69.55.224.181&lt;br /&gt;
  1020 OK 3.9/39  0.1/0.2    0/60/410 0.1/0.7 55/1860    0    8 69.55.227.52&lt;br /&gt;
  1027 OK 0.3/39  0.0/0.2    0/14/410 0.0/2.8 17/1860    0    0 69.55.227.83&lt;br /&gt;
  1029 OK 1.9/39  0.1/0.2    0/48/410 0.2/2.8 25/1860    0    5 69.55.227.85&lt;br /&gt;
  1032 OK  12/39  0.1/0.4    0/80/410 0.0/2.8 41/1860    0    8 69.55.227.90&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you are all done, you will want to make sure that all the VEs really did get started, run vs one more time.&lt;br /&gt;
&lt;br /&gt;
Note the time all ve’s are back up and enter that into and save the crash log entry.&lt;br /&gt;
&lt;br /&gt;
Occasionally, a ve will not start automatically. The most common reason for a ve not to come up normally is that the ve was at its disk limit before the crash, and will not start since it’s over the limit. To overcome this, set the disk space to the current usage level (the system will give this to you when it fails to start), start the ve, then set the disk space back to the prior level. Lastly, contact the customer to let them know they’re out of disk (or allocate more disk if they&#039;re entitled to more).&lt;br /&gt;
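That recovery dance can be sketched like this (a dry run that just prints the commands; 5100000 is a made-up current-usage figure, the real number comes out of the failed start's error message):

```shell
veid=999
usage=5100000            # hypothetical current usage reported by the failed start
orig="4194304:4613734"   # the DISKSPACE value from the conf file, to restore after
# Dry run: echo instead of executing, since vzctl needs a live virt.
echo vzctl set $veid --diskspace ${usage}:${usage} --save
echo vzctl start $veid
echo vzctl set $veid --diskspace ${orig} --save
```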
&lt;br /&gt;
== Hitting performance barriers and fixing them ==&lt;br /&gt;
&lt;br /&gt;
There are multiple modes virtuozzo offers to allocate resources to a ve. We utilize 2: SLM and UBC parameters.&lt;br /&gt;
On our 4.x systems, we use all SLM – it’s simpler to manage and understand. There are a few systems on virt19/18 that may also use SLM. Everything else uses UBC. &lt;br /&gt;
You can tell an SLM ve by:&lt;br /&gt;
&lt;br /&gt;
 SLMMODE=&amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in their conf file. &lt;br /&gt;
&lt;br /&gt;
TODO: detail SLM modes and parameters.&lt;br /&gt;
&lt;br /&gt;
If someone is in SLM mode and they hit memory resource limits, they simply need to upgrade to more memory.&lt;br /&gt;
&lt;br /&gt;
The following applies to everyone else (UBC).&lt;br /&gt;
&lt;br /&gt;
Customers will often email and say that they are getting out of memory errors - a common one is &amp;quot;cannot fork&amp;quot; ... basically, anytime you see something odd like this, it means they are hitting one of their limits that is in place in their conf file.&lt;br /&gt;
&lt;br /&gt;
The conf file, however, simply shows their limits - how do we know what they are currently at?&lt;br /&gt;
&lt;br /&gt;
The answer is a file called &amp;lt;tt&amp;gt;/proc/user_beancounters&amp;lt;/tt&amp;gt; - this file contains the current status (and peaks) of their performance settings, and also counts how many times they have hit the barrier.  The output of the file looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;764: kmemsize         384113     898185    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          0        322        322          0&lt;br /&gt;
     privvmpages        1292       7108     610000     615000          0&lt;br /&gt;
     shmpages            270        528      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc               8         23        410        415          0&lt;br /&gt;
     physpages            48       5624          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages        641       6389      13019 2147483647          0&lt;br /&gt;
     numtcpsock            3         21       1210       1215          0&lt;br /&gt;
     numflock              1          3        107        117          0&lt;br /&gt;
     numpty                0          2         19         19          0&lt;br /&gt;
     numsiginfo            0          4        274        274          0&lt;br /&gt;
     tcpsndbuf             0      80928    1800000    1900000          0 &lt;br /&gt;
     tcprcvbuf             0     108976    1800000    1900000          0&lt;br /&gt;
     othersockbuf       2224      37568     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4272     200000     200000          0&lt;br /&gt;
     numothersock          3          9        650        660          0&lt;br /&gt;
     dcachesize        53922     100320     786432     818029          0&lt;br /&gt;
     numfile             161        382       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first column is the name of the counter in question - the same names we saw in the system’s conf file.  The second column is the _current_ value of that counter, the third column is the max that that counter has ever risen to, the fourth column is the soft limit, and the fifth column is the hard limit (which is the same as the numbers in that system’s conf file).&lt;br /&gt;
&lt;br /&gt;
The sixth number is the failcount - how many times the current usage has risen to hit the barrier.  It will increase as soon as the current usage hits the soft limit.&lt;br /&gt;
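A quick way to spot which counters have actually failed is to filter on that last column. A sketch using a small sample in the beancounters layout above (on a live system you would feed it the real /proc/user_beancounters, or just one VE's section of it):

```shell
# Print only the counters whose failcnt (last column) is nonzero.
# Sample data in the same layout as the output above.
cat > /tmp/ubc_sample <<'EOF'
764: kmemsize         384113     898185    8100000    8200000          0
     numproc               8         23        410        415          3
     numfile             161        382       7500       7600          0
EOF
awk '$NF != 0' /tmp/ubc_sample
```

For this sample, only the numproc line (failcnt 3) is printed.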
&lt;br /&gt;
The problem with /proc/user_beancounters is that it actually contains that set of data for every running VE - so you can&#039;t just cat /proc/user_beancounters - it is too long and you get info for every other running system.&lt;br /&gt;
&lt;br /&gt;
You can vzctl enter the system and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 9999&lt;br /&gt;
 cat /proc/user_beancounters&lt;br /&gt;
&lt;br /&gt;
inside their system, and you will just see the stats for their particular system, but entering their system every time you want to see it is cumbersome.&lt;br /&gt;
&lt;br /&gt;
So, I wrote a simple script called &amp;quot;vzs&amp;quot; which simply greps for the VEID, and spits out the next 20 or so lines (however many lines there are in the output, I forget) after it.  For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzs 765:&lt;br /&gt;
765: kmemsize        2007936    2562780    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          8        322        322          0&lt;br /&gt;
     privvmpages       26925      71126     610000     615000          0&lt;br /&gt;
     shmpages          16654      16750      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc              41         57        410        415          0&lt;br /&gt;
     physpages          1794      49160          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages       4780      51270      13019 2147483647          0&lt;br /&gt;
     numtcpsock           23         37       1210       1215          0&lt;br /&gt;
     numflock             17         39        107        117          0&lt;br /&gt;
     numpty                1          3         19         19          0&lt;br /&gt;
     numsiginfo            0          6        274        274          0&lt;br /&gt;
     tcpsndbuf         22240     333600    1800000    1900000          0&lt;br /&gt;
     tcprcvbuf             0     222656    1800000    1900000          0&lt;br /&gt;
     othersockbuf     104528     414944     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4448     200000     200000          0&lt;br /&gt;
     numothersock         73        105        650        660          0&lt;br /&gt;
     dcachesize       247038     309111     786432     818029          0&lt;br /&gt;
     numfile             904       1231       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That showed us just the portion of /proc/user_beancounters for system 765.&lt;br /&gt;
&lt;br /&gt;
When you run the vzs command, always add a : after the VEID.&lt;br /&gt;
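For reference, the core of vzs can be approximated in a couple of lines - a sketch, not necessarily the real script (the BEANCOUNTERS override is ours, added so the function can be exercised against a sample file):&lt;br /&gt;

```shell
# Sketch of the "vzs" helper (assumption: the real script may differ).
# The trailing colon in the argument ("765:") anchors the match to the
# VEID field, so counter values containing the same digits do not match.
vzs() {
    grep -A 23 "^[[:space:]]*$1" "${BEANCOUNTERS:-/proc/user_beancounters}"
}
```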
&lt;br /&gt;
So, if a customer complains about out-of-memory errors, no more files, no more ptys, or just has an unspecific complaint about processes dying, etc., the very first thing to do is check their beancounters with vzs.  Usually you will spot an item that has a high failcount and needs to be upped.&lt;br /&gt;
&lt;br /&gt;
At that point you could simply up the counter with `vzctl set`. Generally pick a number 10-20% higher than the old one, and make the hard limit slightly larger than the soft limit. However, our systems now come in several levels, and those levels have more/different memory allocations. If someone is complaining about something other than a memory limit (pty, numiptent, numflock), it’s generally safe to increase it, at least to the same level as what’s in the /vzconf/4unlimited file on the newest virt. If someone is hitting a memory limit, first make sure they are given what they deserve:&lt;br /&gt;
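As a hedged illustration of that arithmetic (the VEID and counter are made up; vzctl expresses UBC limits as barrier:limit pairs):&lt;br /&gt;

```shell
# Example: raise numflock ~20% above the old soft limit, with the hard
# limit slightly above the new soft limit. VEID and values are
# illustrative only; the final line just prints the command to run.
old_soft=107
new_soft=$(( old_soft * 120 / 100 ))   # 20% higher
new_hard=$(( new_soft + 10 ))          # hard slightly above soft
echo "vzctl set 765 --numflock ${new_soft}:${new_hard} --save"
```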
&lt;br /&gt;
(refer to mgmt -&amp;gt; payments -&amp;gt; packages)&lt;br /&gt;
&lt;br /&gt;
To set those levels, you use the [[#setmem|setmem]] command. &lt;br /&gt;
&lt;br /&gt;
The alternate (DEPRECATED) method would be to use one of these commands:&lt;br /&gt;
256 &amp;lt;veid&amp;gt;&lt;br /&gt;
300 &amp;lt;veid&amp;gt;&lt;br /&gt;
384 &amp;lt;veid&amp;gt;&lt;br /&gt;
512 &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the levels were not right (you’d run vzs &amp;lt;veid&amp;gt; before and after to see the effect), tell the customer they’ve been adjusted and be done with it. If the levels were right, tell the customer they must upgrade to a higher package, tell them how to see their level (control panel), and that they can reboot their system to escape this lockup condition.&lt;br /&gt;
&lt;br /&gt;
Customers can also complain that their site is totally unreachable, or complain that it is down ... if the underlying machine is up, and all seems well, you may notice in the beancounters that network-specific counters are failing - such as numtcpsock, tcpsndbuf or tcprcvbuf.  This will keep them from talking on the network and make it seem like their system is down.  Again, just up the limits and things should be fine.&lt;br /&gt;
&lt;br /&gt;
On virts 1-4, you should first look at the default settings for that item on a later virt, such as virt 8 - we have increased the defaults a lot since the early machines.  So, if you are going to up a counter on virt2, instead of upping it by 10-20%, instead up it to the new default that you see on virt8.&lt;br /&gt;
&lt;br /&gt;
== Moving a VE to another virt (migrate/migrateonline) ==&lt;br /&gt;
&lt;br /&gt;
This will take a while to complete - and it is best to do this at night when the load is light on both machines.&lt;br /&gt;
&lt;br /&gt;
There are different methods for this, depending on which version of virtuozzo is installed on the src. and dst. virt. &lt;br /&gt;
To check which version is running: &lt;br /&gt;
 [root@virt12 private]# cat /etc/virtuozzo-release&lt;br /&gt;
 Virtuozzo release 2.6.0&lt;br /&gt;
&lt;br /&gt;
Ok, let&#039;s say that the VE is 1212, and its vital stats are:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1212&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/1212&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/1212&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;fedora-core-2/20040903&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.229.84&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;devel-fc2/20040903 php-fc2/20040813 mysql-fc2/20040812 postgresql-fc2/20040813 mod_perl-fc2/20040812 mod_ssl-fc2/20040811 jre-fc2/20040823 jdk-fc2/20040823 mailman-fc2/20040823 analog-fc2/20040824 proftpd-fc2/20040818 tomcat-fc2/20040823 usermin-fc2/20040909 webmin-fc2/20040909 uw-imap-fc2/20040830 phpBB-fc2/20040831 spamassassin-fc2/20040910 PostNuke-fc2/20040824 sl-webalizer-fc2/20040&lt;br /&gt;
818&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# vzctl exec 1212 df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  405M  3.7G  10% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From this you can see that he’s using (and will minimally need free on the dst server) ~400MB, and he’s running on a Fedora 2 template, version 20040903. He’s also got a bunch of other templates installed. It is &#039;&#039;&#039;vital&#039;&#039;&#039; that &#039;&#039;&#039;all&#039;&#039;&#039; these templates exist on the dst system. To confirm that, on the dst system run:&lt;br /&gt;
&lt;br /&gt;
For &amp;lt; 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt14 private]# vzpkgls | grep fc2&lt;br /&gt;
devel-fc2 20040903&lt;br /&gt;
PostNuke-fc2 20040824&lt;br /&gt;
analog-fc2 20040824&lt;br /&gt;
awstats-fc2 20040824&lt;br /&gt;
bbClone-fc2 20040824&lt;br /&gt;
jdk-fc2 20040823&lt;br /&gt;
jre-fc2 20040823&lt;br /&gt;
mailman-fc2 20040823&lt;br /&gt;
mod_frontpage-fc2 20040816&lt;br /&gt;
mod_perl-fc2 20040812&lt;br /&gt;
mod_ssl-fc2 20040811&lt;br /&gt;
mysql-fc2 20040812&lt;br /&gt;
openwebmail-fc2 20040817&lt;br /&gt;
php-fc2 20040813&lt;br /&gt;
phpBB-fc2 20040831&lt;br /&gt;
postgresql-fc2 20040813&lt;br /&gt;
proftpd-fc2 20040818&lt;br /&gt;
sl-webalizer-fc2 20040818&lt;br /&gt;
spamassassin-fc2 20040910&lt;br /&gt;
tomcat-fc2 20040823&lt;br /&gt;
usermin-fc2 20040909&lt;br /&gt;
uw-imap-fc2 20040830&lt;br /&gt;
webmin-fc2 20040909&lt;br /&gt;
[root@virt14 private]# vzpkgls | grep fedora&lt;br /&gt;
fedora-core-1 20040121 20040818&lt;br /&gt;
fedora-core-devel-1 20040121 20040818&lt;br /&gt;
fedora-core-2 20040903&lt;br /&gt;
[root@virt14 private]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For these older systems, you can simply match up the date on the template. &lt;br /&gt;
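One way to automate the matching - a sketch, assuming you have saved the vzpkgls output from each virt into local files (the helper name and filenames are our invention):&lt;br /&gt;

```shell
# List templates present in the src listing but absent from the dst
# listing. Both arguments are files containing saved "vzpkgls" output.
# comm -23 requires sorted input and prints lines unique to file 1.
missing_templates() {
    sort "$1" > "$1.sorted"
    sort "$2" > "$2.sorted"
    comm -23 "$1.sorted" "$2.sorted"
    rm -f "$1.sorted" "$2.sorted"
}
```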
&lt;br /&gt;
For &amp;gt;= 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt19 /vz2/private]# vzpkg list&lt;br /&gt;
centos-5-x86                    2008-01-07 22:05:57&lt;br /&gt;
centos-5-x86    devel&lt;br /&gt;
centos-5-x86    jre&lt;br /&gt;
centos-5-x86    jsdk&lt;br /&gt;
centos-5-x86    mod_perl&lt;br /&gt;
centos-5-x86    mod_ssl&lt;br /&gt;
centos-5-x86    mysql&lt;br /&gt;
centos-5-x86    php&lt;br /&gt;
centos-5-x86    plesk9&lt;br /&gt;
centos-5-x86    plesk9-antivirus&lt;br /&gt;
centos-5-x86    plesk9-api&lt;br /&gt;
centos-5-x86    plesk9-atmail&lt;br /&gt;
centos-5-x86    plesk9-backup&lt;br /&gt;
centos-5-x86    plesk9-horde&lt;br /&gt;
centos-5-x86    plesk9-mailman&lt;br /&gt;
centos-5-x86    plesk9-mod-bw&lt;br /&gt;
centos-5-x86    plesk9-postfix&lt;br /&gt;
centos-5-x86    plesk9-ppwse&lt;br /&gt;
centos-5-x86    plesk9-psa-firewall&lt;br /&gt;
centos-5-x86    plesk9-psa-vpn&lt;br /&gt;
centos-5-x86    plesk9-psa-fileserver&lt;br /&gt;
centos-5-x86    plesk9-qmail&lt;br /&gt;
centos-5-x86    plesk9-sb-publish&lt;br /&gt;
centos-5-x86    plesk9-vault&lt;br /&gt;
centos-5-x86    plesk9-vault-most-popular&lt;br /&gt;
centos-5-x86    plesk9-watchdog&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On these newer systems, it&#039;s difficult to tell whether the template on the dst matches the src exactly. Just because a centos-5-x86 is listed on both servers doesn&#039;t mean all the same packages are there on the dst. To truly know, you must perform a sample rsync:&lt;br /&gt;
&lt;br /&gt;
 rsync -avn /vz/template/centos/5/x86/ root@10.1.4.61:/vz/template/centos/5/x86/&lt;br /&gt;
&lt;br /&gt;
If you see a ton of output from the dry-run command, then clearly there are some differences. You may opt to let the rsync complete (without running in dry-run mode); the only downside is that you&#039;ve now used up more space on the dst, and the centos template will be a mess of old and new data - difficult if not impossible to undo (if someday we wanted to reclaim the space).&lt;br /&gt;
&lt;br /&gt;
If you choose to merge templates, you should closely inspect the dry run output. You should also take care to exclude anything in the /config directory. For example:&lt;br /&gt;
&lt;br /&gt;
 rsync -av -e ssh --stats --exclude=x86/config  /vz/template/ubuntu/10.04/ root@10.1.4.62:/vz/template/ubuntu/10.04/&lt;br /&gt;
&lt;br /&gt;
Which will avoid this directory and contents:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 /vz2/private]# ls /vz/template/ubuntu/10.04/x86/config*&lt;br /&gt;
app  os&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is important to avoid, since the config may differ on the destination and we are really only interested in making sure the packages are there, not in overwriting a newer config with an older one.&lt;br /&gt;
&lt;br /&gt;
If the dst system was missing a template, you have 2 choices: &lt;br /&gt;
# put the missing template on the dst system. 2 choices here: &lt;br /&gt;
## Install the template from rpm (found under backup2: /mnt/data4/vzrpms/distro/) or &lt;br /&gt;
## rsync over the template (found under /vz/template) - see above&lt;br /&gt;
# put the ve on a system which has all the proper templates&lt;br /&gt;
&lt;br /&gt;
=== pre-seeding a migration ===&lt;br /&gt;
&lt;br /&gt;
When migrating a customer (or when doing many), depending on how much data you have to transfer, it can take some time. Further, it can be difficult to gauge when a migration will complete or how long it will take. To help speed up the process and get a better idea about how long it will take, you can pre-transfer a customer&#039;s data to the destination server. If done correctly, vzmigrate will see the pre-transferred data and pick up where you left off, having much less to transfer (just changed/new files). &lt;br /&gt;
&lt;br /&gt;
We believe vzmigrate uses rsync to do its transfer. Therefore, not only can you use rsync to do a pre-seed, you can also run rsync to see what is causing a repeatedly-failing vzmigrate to fail. &lt;br /&gt;
&lt;br /&gt;
There&#039;s no magic to a pre-seed; you just need to make sure it&#039;s named correctly.&lt;br /&gt;
&lt;br /&gt;
Given:&lt;br /&gt;
&lt;br /&gt;
source: /vz1/private/1234&lt;br /&gt;
&lt;br /&gt;
and you want to migrate to /vz2 on the target system, your rsync would look like:&lt;br /&gt;
&lt;br /&gt;
 rsync -av /vz1/private/1234/ root@x.x.x.x:/vz2/private/1234.migrated/&lt;br /&gt;
&lt;br /&gt;
After running that successful rsync, the ensuing migrateonline (or migrate) will take much less time to complete, depending on the # of files to be analyzed and the # of changed files. In any case, it&#039;ll be much, much faster than had you started the migration from scratch.&lt;br /&gt;
&lt;br /&gt;
Further, as we discuss elsewhere in this topic, a failed migration can be moved from &amp;lt;tt&amp;gt;/vz/private/1234&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;/vz/private/1234.migrated&amp;lt;/tt&amp;gt; on the destination if you want to restart a failed migration. This should &#039;&#039;&#039;only&#039;&#039;&#039; be done if the migration failed and the CT is not running on the destination HN.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline instructions: src &amp;gt;=3.x -&amp;gt; dst&amp;gt;=3.x ===&lt;br /&gt;
&lt;br /&gt;
A script called [[#migrateonline|migrateonline]] was written to handle this kind of move. It is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; – a util to move a ve seamlessly (no reboot of the ve necessary) from one host to another. The wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from the arp/route tables, causing problems when the ve was started up on the dst system; [[#migrate|migrate]] mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrateonline emails ve owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as it was on the src virt. Note: &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; is equivalent to &amp;lt;tt&amp;gt;migrateonline&amp;lt;/tt&amp;gt;, but will migrate a ve AND restart it in the process.&lt;br /&gt;
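For example, the notify file the customer creates is just one space-delimited line (the addresses are placeholders, and we write to /tmp here rather than the ve&#039;s real /migrate_notify):&lt;br /&gt;

```shell
# Example /migrate_notify contents: a single line of space-delimited
# email addresses. Placeholder addresses; /tmp path used for the demo.
printf 'alice@example.com bob@example.com\n' > /tmp/migrate_notify
cat /tmp/migrate_notify
```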
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrateonline&lt;br /&gt;
usage: /usr/local/sbin/migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrateonline 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine.&lt;br /&gt;
&lt;br /&gt;
If they had backups, use the mvbackups command to move their backups to the new server:&lt;br /&gt;
&lt;br /&gt;
 mvbackups 1212 virt14 vz&lt;br /&gt;
&lt;br /&gt;
Rename the ve&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/migrated-1212&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/old-1212-migrated-20120404-noarchive&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems in mgmt to reflect the new path and server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If migrateonline does not work, you can try again simply using migrate - this will result in a brief reboot for the ve.&lt;br /&gt;
Before you try again, make sure of a few things:&lt;br /&gt;
&lt;br /&gt;
Depending on where the migration died, there may be partial data on the dst system in 1 of 2 places&lt;br /&gt;
(given the example above)&lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
Before you run migrate again, you&#039;ll want to rename so that all data is in 1212.migrated:&lt;br /&gt;
&lt;br /&gt;
 mv /vz/private/1212 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
This way, it will pick up where it left off and transfer only new files.&lt;br /&gt;
&lt;br /&gt;
Likewise, if you want to speed up a migration, you can pre-seed the dst as follows:&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# rsync -avSH /vz/private/1212/ root@10.1.4.64:/vz/private/1212.migrated/&lt;br /&gt;
&lt;br /&gt;
Then when you run migrate or migrateonline, it will only need to move the changed files - the migration will complete quickly.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline/migrate failures (migrate manually) ===&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say that for whatever reason the migration fails. If it fails with [[#migrateonline|migrateonline]], you should try [[#migrate|migrate]] (which will reboot the customer, so notify them ahead of time).&lt;br /&gt;
&lt;br /&gt;
You may want to run a [[#pre-seeding_a_migration|pre-seed]] rsync to see if you can find the problem. On older virts, we&#039;ve seen this problem due to a large logfile (which you can find and encourage the customer to remove/compress):&lt;br /&gt;
 find / -type f -size +1048576k -exec ls -lh {} \;&lt;br /&gt;
&lt;br /&gt;
You may also see migration failing due to quota issues.&lt;br /&gt;
&lt;br /&gt;
You can try to resolve by copying any quota file into the file you need:&lt;br /&gt;
&lt;br /&gt;
 cp /var/vzquota/quota.1 /var/vzquota/quota.xxx&lt;br /&gt;
&lt;br /&gt;
If it complains about quota running, you should then be able to stop it:&lt;br /&gt;
&lt;br /&gt;
 vzquota off xxxx&lt;br /&gt;
&lt;br /&gt;
If all else fails, migrate to a new VEID&lt;br /&gt;
i.e. 1234 becomes 12341&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the rsync or [[#migrate|migrate]] fails, you can always move someone manually:&lt;br /&gt;
&lt;br /&gt;
1. stop ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v stop 1234&lt;br /&gt;
&lt;br /&gt;
2. copy over data&amp;lt;br&amp;gt;&lt;br /&gt;
 rsync -avSH /vz/private/1234/ root@1.1.1.1:/vzX/private/1234/&lt;br /&gt;
&lt;br /&gt;
NOTE: if you&#039;ve previously seeded the data (run rsync while the VE was up/running) and this is a subsequent rsync, make sure the last rsync you do (while the VE is not running) has the --delete option.&lt;br /&gt;
&lt;br /&gt;
3. copy over conf&amp;lt;br&amp;gt;&lt;br /&gt;
 scp /vzconf/1234.conf root@1.1.1.1:/vzconf&lt;br /&gt;
&lt;br /&gt;
4. on dst, edit the conf to reflect the right vzX dir&amp;lt;br&amp;gt;&lt;br /&gt;
 vi /vzconf/1234.conf&lt;br /&gt;
&lt;br /&gt;
5. on src remove the IPs&amp;lt;br&amp;gt;&lt;br /&gt;
 ipdel 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
6. on dst add IPs &amp;lt;br&amp;gt;&lt;br /&gt;
 ipadd 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
7. on dst, start ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v start 1234&lt;br /&gt;
&lt;br /&gt;
8. cancel, then archive the ve on the src per the instructions above.&lt;br /&gt;
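The eight steps above can be sketched as a dry-run generator that only prints the commands, in order (v, ipdel and ipadd are the local wrappers used throughout this page; the /vzX dir, host and IPs are placeholders exactly as in the steps):&lt;br /&gt;

```shell
# Dry-run sketch of the manual move: prints the commands the numbered
# steps describe; it does not execute anything.
manual_move() {
    veid=$1; dst=$2
    echo "v stop $veid"
    echo "rsync -avSH --delete /vz/private/$veid/ root@$dst:/vzX/private/$veid/"
    echo "scp /vzconf/$veid.conf root@$dst:/vzconf"
    echo "# on $dst: vi /vzconf/$veid.conf to reflect the right vzX dir"
    echo "ipdel $veid 2.2.2.2 3.3.3.3"
    echo "# on $dst:"
    echo "ipadd $veid 2.2.2.2 3.3.3.3"
    echo "v start $veid"
}
```

Then cancel and archive the ve on the src as usual.&lt;br /&gt;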
&lt;br /&gt;
=== migrate src=2.6.0 -&amp;gt; dst&amp;gt;=2.6.0, or mass-migration with customer notify ===&lt;br /&gt;
&lt;br /&gt;
A script called &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; was written to handle this kind of move. It is basically a wrapper for vzmigrate – a util to seamlessly move a ve from one host to another. The wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from the arp/route tables, causing problems when the ve was started up on the dst system; migrate mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrate emails ve owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as it was on the src virt. Note: migrateonline is equivalent to migrate, but will migrate a ve from one 2.6 &#039;&#039;&#039;kernel&#039;&#039;&#039; machine to another 2.6 kernel machine without restarting the ve.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrate&lt;br /&gt;
usage: /usr/local/sbin/migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrate 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which migrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
&lt;br /&gt;
On 2.6.1 you’ll also have to move the private area:&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, [[#cancelve|cancelve]] would offer to remove them. You want to say &#039;&#039;&#039;no&#039;&#039;&#039; to this option – doing so would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across a filesystem as in the example above will create actual files and consume lots more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, update the system, template (should be pre-selected as the same), and the shut down date.&lt;br /&gt;
&lt;br /&gt;
=== vzmigrate: src=2.6.1 -&amp;gt; dst&amp;gt;=2.6.0 ===&lt;br /&gt;
&lt;br /&gt;
This version of vzmigrate works properly with regard to handling IPs. It will not notify ve owners of moves as in the above example. Other than that it’s essentially the same.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]#  vzmigrate 10.1.4.64 -r no 1212:1212:/vz/private/1212:/vz/root/1212&lt;br /&gt;
migrating on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which vzmigrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
[root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, &amp;lt;tt&amp;gt;cancelve&amp;lt;/tt&amp;gt; would offer to remove them. You want to say no to this option – doing so would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across a filesystem as in the example above will create actual files and consume lots more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, update the system, template (should be pre-selected as the same), and the shut down date.&lt;br /&gt;
&lt;br /&gt;
=== src=2.5.x ===&lt;br /&gt;
&lt;br /&gt;
First, go to the private dir:&lt;br /&gt;
&lt;br /&gt;
 cd /vz1/private/&lt;br /&gt;
&lt;br /&gt;
Stop the VE - make sure it stops totally cleanly.&lt;br /&gt;
 &lt;br /&gt;
 vzctl stop 1212&lt;br /&gt;
&lt;br /&gt;
Then you’d use vemove - a script written to copy over the config, create tarballs of the ve’s data on the destination virt, and cancel the ve on the source system (in this example we’re going to put a ve that was in /vz1/private on the src virt, in /vz/private on the dst virt):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vemove&lt;br /&gt;
ERROR: Usage: vemove veid target_ip target_path_dir&lt;br /&gt;
[root@virt12 sbin]# vemove 1212 10.1.4.64 /vz/private/1212&lt;br /&gt;
tar cfpP - 1212 --ignore-failed-read | (ssh -2 -c arcfour 10.1.4.64 &amp;quot;split - -b 1024m /vz/private/1212.tar&amp;quot; )&lt;br /&gt;
scp /vzconf/1212.conf 10.1.4.64:/vzconf&lt;br /&gt;
cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, cancelve would offer to remove them. You want to say no to this option – doing so would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across a filesystem as in the example above will create actual files and consume lots more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
When you are done, go to /vz/private on the dst virt; you will have files like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;1212.taraa&lt;br /&gt;
1212.tarab&lt;br /&gt;
1212.tarac&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each one 1024m (or less, for the last one) in size.&lt;br /&gt;
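A quick sanity sketch showing that split pieces reassemble byte-for-byte with cat and that the aa/ab/ac suffixes glob back in the right order (using a small file and 1k chunks in place of the real 1024m tarball):&lt;br /&gt;

```shell
# Round-trip demo: split a file into 1k pieces, rejoin with cat via the
# shell glob (suffixes sort correctly), and compare to the original.
tmp=$(mktemp -d)
dd if=/dev/urandom of="$tmp/orig" bs=1k count=3 2>/dev/null
( cd "$tmp" && split -b 1k orig piece. )
cat "$tmp"/piece.* > "$tmp/rejoined"
if cmp -s "$tmp/orig" "$tmp/rejoined"; then result=OK; else result=FAIL; fi
rm -rf "$tmp"
echo "$result"
```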
&lt;br /&gt;
From that directory on the dst server, run:&lt;br /&gt;
&lt;br /&gt;
 cat 1212.tar?? | tar xpPBf -&lt;br /&gt;
&lt;br /&gt;
and after 20 mins or so it will be totally untarred.  Now since the conf&lt;br /&gt;
file is already there, you can go ahead and start the system.&lt;br /&gt;
&lt;br /&gt;
 vzctl start 1212&lt;br /&gt;
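&lt;br /&gt;
The split/reassemble round trip above can be sanity-checked with plain tools and small throwaway files in place of a real VE tarball (sizes and names are made up):&lt;br /&gt;

```shell
# Sketch: split a stand-in "tarball" into 1212.taraa/ab/ac chunks,
# reassemble them with the same glob the procedure uses, and compare.
workdir=$(mktemp -d)
cd "$workdir"
head -c 100000 /dev/urandom > 1212.tar       # stand-in for the VE tarball
split -b 40000 1212.tar 1212.tar             # -> 1212.taraa, 1212.tarab, 1212.tarac
cat 1212.tar?? > reassembled.tar             # glob sorts aa, ab, ac in order
cmp -s 1212.tar reassembled.tar && result="identical" || result="differ"
nparts=$(ls 1212.tar?? | wc -l | tr -d ' ')
cd /
rm -rf "$workdir"
echo "$nparts parts, reassembly: $result"
```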
&lt;br /&gt;
Update the customer’s systems screen by clicking the “move” link on the moved system; update the system, the template (should be pre-selected as the same), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
NOTE: you MUST tar the system up using the virtuozzo version of tar that&lt;br /&gt;
is on all the virt systems, and further you MUST untar the tarball with&lt;br /&gt;
the virtuozzo tar, using these options:  `&amp;lt;tt&amp;gt;tar xpPBf -&amp;lt;/tt&amp;gt;`&lt;br /&gt;
&lt;br /&gt;
If you tar up an entire VE and move it to a non-virtuozzo machine, that is&lt;br /&gt;
ok, and you can untar it there with normal tar commands, but do not untar&lt;br /&gt;
it and then repack it with a normal tar and expect it to work - you need&lt;br /&gt;
to use virtuozzo tar commands on virtuozzo tarballs to make it work.&lt;br /&gt;
&lt;br /&gt;
The backups are sort of an exception, since we are just (usually)&lt;br /&gt;
restoring user data that was created after we gave them the system, and&lt;br /&gt;
therefore has nothing to do with magic symlinks or vz-rpms, etc.&lt;br /&gt;
&lt;br /&gt;
== Moving a VE on the same virt ==&lt;br /&gt;
&lt;br /&gt;
Easy way:&amp;lt;br&amp;gt;&lt;br /&gt;
Scenario 1: ve 123 is to be renamed 1231 and moved from /vz1 to /vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:1231:/vz/private/1231:/vz/root/1231&lt;br /&gt;
&lt;br /&gt;
Scenario 2: ve 123 is to be moved from /vz1 to /vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:123:/vz/private/123:/vz/root/123&lt;br /&gt;
&lt;br /&gt;
vzmlocal will reboot the ve at the end of the move&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; don&#039;t forget to fix /usr/local/sbin/backup.config for new disk location&lt;br /&gt;
&lt;br /&gt;
Manual/old way:&lt;br /&gt;
&lt;br /&gt;
1) &amp;lt;tt&amp;gt;vzctl stop 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2) &amp;lt;tt&amp;gt;mv /vz1/private/123 /vz/private/.&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(or cp -a if you want to copy)&lt;br /&gt;
3) in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts/123.conf&amp;lt;/tt&amp;gt; change value&amp;lt;br&amp;gt;&lt;br /&gt;
of &#039;&amp;lt;tt&amp;gt;VE_PRIVATE&amp;lt;/tt&amp;gt;&#039; variable to point to a new private area location&lt;br /&gt;
4) &amp;lt;tt&amp;gt;vzctl start 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
5) update backups if needed: &amp;lt;tt&amp;gt;mvbackups 123 virtX virt1 vz&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
6) update management screens&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notes: a) absolute path to private area is stored in quota file &amp;lt;tt&amp;gt;/var/vzquota/quota.123&amp;lt;/tt&amp;gt; - so during first startup quota will be recalculated.&amp;lt;br&amp;gt;&lt;br /&gt;
b) if you&#039;re going to write a script to do the job, you MUST be sure that $VEID won&#039;t be expanded to &#039;&#039; (the empty string) in the ve config file - i.e. you need to escape the &#039;$&#039;. Otherwise you might end up with:&lt;br /&gt;
&lt;br /&gt;
 VE_PRIVATE=&amp;quot;/vz/private/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in config, and &#039;vzctl destroy&#039; for this VE ID &#039;&#039;&#039;will remove everything under /vz/private/ directory&#039;&#039;&#039;.&lt;br /&gt;
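&lt;br /&gt;
A quick illustration of the pitfall (hypothetical script fragment, not one of our tools): in double quotes the shell expands an unset $VEID to nothing, while single quotes keep it literal in the written config:&lt;br /&gt;

```shell
# Sketch of the quoting difference when a script writes VE_PRIVATE.
unset VEID
bad=$(echo "VE_PRIVATE=\"/vz/private/$VEID\"")   # $VEID expands now - to ''
good=$(echo 'VE_PRIVATE="/vz/private/$VEID"')    # '$VEID' survives into the config
echo "$bad"     # VE_PRIVATE="/vz/private/"  <- the dangerous form
echo "$good"    # VE_PRIVATE="/vz/private/$VEID"
```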
&lt;br /&gt;
== Adding a veth device to a VE ==&lt;br /&gt;
&lt;br /&gt;
Not totally sure what this is, but a customer asked for it and here&#039;s what we did (as instructed by vz support):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;v set 99 --netif_add eth99  --save&lt;br /&gt;
ipdel 99 69.55.230.58&lt;br /&gt;
v set 99 --ifname eth99 --ipadd 69.55.230.58 --save&lt;br /&gt;
v set 99 --ifname eth99 --gateway 69.55.230.1 --save&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active      eth0              veth77.77,veth99.99&lt;br /&gt;
veth_net          active&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
br99             bridge     net99&lt;br /&gt;
veth99.99        veth       net99&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        net99            69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br0 eth0&lt;br /&gt;
&lt;br /&gt;
(will remove 99 from orig net and move to veth_net)&lt;br /&gt;
vznetcfg net addif veth_net veth99.99&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
(delete the old crap)&lt;br /&gt;
vznetcfg net del net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then, to add another device in&lt;br /&gt;
&lt;br /&gt;
v set 77 --netif_add eth77  --save&lt;br /&gt;
ipdel 77 69.55.230.78&lt;br /&gt;
v set 77 --ifname eth77 --ipadd 69.55.230.78 --save&lt;br /&gt;
v set 77 --ifname eth77 --gateway 69.55.230.1 --save&lt;br /&gt;
v set 77 --save --ifname eth77 --network veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif veth_net veth77.77&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth       veth_net&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
another example&lt;br /&gt;
&lt;br /&gt;
v set 1182 --netif_add eth1182  --save&lt;br /&gt;
ipdel 1182 69.55.236.217&lt;br /&gt;
v set 1182 --ifname eth1182 --ipadd 69.55.236.217 --save&lt;br /&gt;
v set 1182 --ifname eth1182 --gateway 69.55.236.1 --save&lt;br /&gt;
vznetcfg net addif veth_net veth1182.1182&lt;br /&gt;
v set 1182 --save --ifname eth1182 --network veth_net&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unused/not working commands:&lt;br /&gt;
ifconfig veth99.0 0&lt;br /&gt;
vznetcfg net list&lt;br /&gt;
vznetcfg br new br99 net99&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg br show&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
&lt;br /&gt;
vznetcfg br new br1182 net1182&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif eth0 net1182&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&lt;br /&gt;
--- 8&amp;lt; ---&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new net&lt;br /&gt;
# vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vzctl set 99 --save --netif_add eth0 (at this stage veth99.0 interface have to appear&lt;br /&gt;
on node)&lt;br /&gt;
# vzctl set 99 --save --ifname eth0 --ipadd 69.55.230.58 (and probably few more arguments&lt;br /&gt;
here - see &#039;man vzctl&#039;)&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Assigning/removing an IP from a VE ==&lt;br /&gt;
&lt;br /&gt;
1. Add or remove ips:&lt;br /&gt;
 ipdel 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
 ipadd 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
&lt;br /&gt;
2. update Mgmt screens&lt;br /&gt;
&lt;br /&gt;
3. offer to update any DNS we do for them&lt;br /&gt;
&lt;br /&gt;
4. check to see if we had rules for the old IP in the firewall&lt;br /&gt;
&lt;br /&gt;
== Enabling tun device for a ve ==&lt;br /&gt;
Note, there’s a command for this: [[#addtun|addtun]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
Make sure the tun.o module is already loaded before Virtuozzo is started: &lt;br /&gt;
 lsmod &lt;br /&gt;
Allow the VPS to use the TUN/TAP device: &lt;br /&gt;
 vzctl set 101 --devices c:10:200:rw --save &lt;br /&gt;
Create the corresponding device inside the VPS and set the proper permissions: &lt;br /&gt;
 vzctl exec 101 mkdir -p /dev/net &lt;br /&gt;
 vzctl exec 101 mknod /dev/net/tun c 10 200 &lt;br /&gt;
 vzctl exec 101 chmod 600 /dev/net/tun&lt;br /&gt;
&lt;br /&gt;
== Remaking a system (on same virt) ==&lt;br /&gt;
&lt;br /&gt;
1. [[#cancelve|cancelve]] (or v destroy x - ONLY if you&#039;re POSITIVE no data needs to be saved)&lt;br /&gt;
&lt;br /&gt;
2. [[#vemake|vemake]] using same veid&lt;br /&gt;
&lt;br /&gt;
3. [[#mvbackups|mvbackups]] or [[#vb|vb]] (if new mount point)&lt;br /&gt;
&lt;br /&gt;
4. update mgmt with new dir/ip &lt;br /&gt;
&lt;br /&gt;
5. update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Re-initialize quota for a VE ==&lt;br /&gt;
&lt;br /&gt;
There’s a command for this now: [[#clearquota|clearquota]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
&lt;br /&gt;
 vzctl stop 1&lt;br /&gt;
 vzquota drop 1&lt;br /&gt;
 vzctl start 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Traffic accounting on linux ==&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - all tracking is done via bwdb now. This is how we used to track traffic.&lt;br /&gt;
&lt;br /&gt;
TODO: update for diff versions of vz&lt;br /&gt;
&lt;br /&gt;
Unlike FreeBSD, where we have to add firewall count rules to count the traffic, Virtuozzo counts the traffic for us.  You can see the current traffic stats by running `vznetstat`:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, each VEID is on a line with its out and in bytes.  So we simply run a cron job:&lt;br /&gt;
&lt;br /&gt;
 4,9,14,19,24,29,34,39,44,49,55,59 * * * * /root/vztrafdump.sh&lt;br /&gt;
&lt;br /&gt;
Just like we do on FreeBSD - this script goes through all the VEs in /vz/private, greps the matching line from the vznetstat output, and dumps it into /jc_traffic_dump on their system.  Then it does the same for all the VEs in /vz1/private.  It is important to note that vznetstat runs only once, and the grepping is done from a temporary file that contains its output - running vznetstat once for each VE read out of /vz/private and /vz1/private would take far too long and be too intensive.&lt;br /&gt;
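&lt;br /&gt;
The dump-once-then-grep approach can be sketched like this (sample vznetstat output stands in for a live run; vztrafdump.sh itself may differ in detail):&lt;br /&gt;

```shell
# Sketch: write the vznetstat output to a temp file once, then pull
# each VE's line out of it instead of re-running vznetstat per VE.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
VEID    Net.Class  Output(bytes)   Input(bytes)
24218     1            484M             39M
2451      1           2224M            265M
EOF
out_24218=$(awk -v id=24218 '$1 == id {print $3}' "$tmp")   # output bytes for VE 24218
out_2451=$(awk -v id=2451 '$1 == id {print $3}' "$tmp")
rm -f "$tmp"
echo "24218 out: $out_24218, 2451 out: $out_2451"
```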
&lt;br /&gt;
You do not need to do anything to facilitate this other than make sure that cron job is running - the vznetstat counters are always running, and any new VEs added to the system will be accounted for automatically.&lt;br /&gt;
&lt;br /&gt;
Traffic resetting no longer works with vz 2.6, so we disable the vztrafdump.sh on those virts.&lt;br /&gt;
&lt;br /&gt;
== Watchdog script ==&lt;br /&gt;
&lt;br /&gt;
On some of the older virts, we have a watchdog running that kills procs that are deemed bad per the following:&lt;br /&gt;
&lt;br /&gt;
/root/watchdog from quar1&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [5-9]...&lt;br /&gt;
  then&lt;br /&gt;
# 50-90%&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;...:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;....m&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 1000min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
  if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [1-9]...&lt;br /&gt;
  then&lt;br /&gt;
# running for 10-90 percent&lt;br /&gt;
    if echo $line | awk &#039;{print $NF}&#039; | egrep &#039;cfusion|counter|vchkpw&#039;&lt;br /&gt;
    then&lt;br /&gt;
&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[2-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# between 2-9min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      elif echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[0-9][0-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# up to 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
    fi&lt;br /&gt;
  fi&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Misc Linux Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space ... when you configure a linux system with a certain amount of disk space (the default is 4gigs) you do not actually use up 4gigs of space on the system.  The diskspace setting for a user is simply a cap, and they only use up as much space on the actual disk drive as they are actually using.&lt;br /&gt;
&lt;br /&gt;
When you create a new linux system, even though there are some 300 RPMs or so installed, if you run `df -k` you will see that the entire 4gig partition is empty - no space is being used.  This is because the files in their system are &amp;quot;magic symlinks&amp;quot; to the template for their OS that is in /vz/template - however, any changes to any of those files will &amp;quot;disconnect&amp;quot; them and they will immediately begin using space in their system.  Further, any new files uploaded (even if those new files overwrite existing files) will take up space on the partition.&lt;br /&gt;
&lt;br /&gt;
=== Cant change file /etc/sysconfig/network ===&lt;br /&gt;
if you see this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt8 root]# vzctl stop 160 ; vzctl start 160&lt;br /&gt;
VE is not running&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is unmounted&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
bash ERROR: Can&#039;t change file /etc/sysconfig/network&lt;br /&gt;
Deleting IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
VE is unmounted&lt;br /&gt;
[root@virt8 root]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
it probably means they no longer have /bin/bash - copy one in for them&lt;br /&gt;
 &lt;br /&gt;
ALSO: another possibility is that they have removed the `ed` RPM from their system - it needs to be reinstalled into their system.  But since their system is down, this is tricky ...&lt;br /&gt;
&lt;br /&gt;
VE startup scripts used by &#039;vzctl&#039; want the &#039;ed&#039; package to be available inside the VE. So if the &#039;ed&#039; package is enabled in the OS template config and in the OS template that VE #827 is based on, this error should be fixed.&lt;br /&gt;
&lt;br /&gt;
Yes, it is possible to add an RPM to a VE while it is not running.&lt;br /&gt;
Try the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# cd /vz/template/&amp;lt;OS_template_with_ed_package&amp;gt;/&lt;br /&gt;
# vzctl mount 827&lt;br /&gt;
# rpm -Uvh --root /vz/root/827 --veid 827 ed-0.2-25.i386.vz.rpm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usually there&#039;s an error, but it&#039;s OK.&lt;br /&gt;
&lt;br /&gt;
Note: replace &#039;ed-0.2-25.i386.vz.rpm&#039; in the last command with the actual&lt;br /&gt;
version of the &#039;ed&#039; package you have.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== What template is user using ===&lt;br /&gt;
So how do I know what template the user has?  cat their conf file and it is listed in there.  For example, if the conf file has:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1103&lt;br /&gt;
…snip…&lt;br /&gt;
OSTEMPLATE=&amp;quot;debian-3.0/20030822&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;mod_perl-deb30/20030707 mod_ssl-deb30/20030703 mysql-deb30/20030707 proftpd-deb30/20030703 webmin-deb30/20030823 &amp;quot;&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then they are on debian 3.0, all of their system RPMs are in /vz/template/debian-3.0, and they are using version 20030822 of that debian 3.0 template. They’ve also got additional packages installed (mod_perl, mod_ssl, etc.); those are also found under /vz/template.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Edits needed to run java ===&lt;br /&gt;
&lt;br /&gt;
When we first created the VEs, the default setting for privvmpages was 93000:94000, which was high enough that most people never had problems.  However, you can&#039;t run java or the jdk or tomcat or anything java-related with that setting.  We have found that with privvmpages set to 610000:615000, java runs just fine; that is now the default setting.  It is exceedingly rare that anyone needs it higher than that, although we have seen it once or twice.&lt;br /&gt;
&lt;br /&gt;
For any problems with java at all, the first thing to do is check whether the failcnt has risen for privvmpages.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzctl start 160&lt;br /&gt;
Starting VE ...&lt;br /&gt;
vzquota : (error) Quota on syscall for 160: Device or resource busy&lt;br /&gt;
Running vzquota on failed for VE 160 [3]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is because the working directory is _in_ their private directory - you can&#039;t start the ve until you move out of it.&lt;br /&gt;
&lt;br /&gt;
People seem to have trouble with php if they are clueless newbies.  Here are two common problems/solutions:&lt;br /&gt;
&lt;br /&gt;
no... but i figured it out myself. problem was the php.ini file that came&lt;br /&gt;
vanilla with the account was not configured to work with apache (the&lt;br /&gt;
ENGINE directive was set to off).&lt;br /&gt;
&lt;br /&gt;
everything else seems fine now.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
the problem was in the php.ini file.  I noticed that is wasnt showing&lt;br /&gt;
the code when it was in an html file so I looked at the php.ini file&lt;br /&gt;
and had to change it so it recognized &amp;lt;? tags aswell as &amp;lt;?php tags.&lt;br /&gt;
&lt;br /&gt;
Also, make sure this is added to httpd.conf:&lt;br /&gt;
    AddType application/x-httpd-php .php&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Set time zone ===&lt;br /&gt;
&lt;br /&gt;
You can change the time zone by doing this:&lt;br /&gt;
&lt;br /&gt;
 ln -sf /usr/share/zoneinfo/&amp;lt;zone&amp;gt; /etc/localtime&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;zone&amp;gt; is the zone you want in the /usr/share/zoneinfo/ directory.&lt;br /&gt;
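&lt;br /&gt;
A harmless dry run of the symlink swap against scratch files instead of the real /etc/localtime (all paths made up):&lt;br /&gt;

```shell
# Sketch: ln -sf replaces an existing localtime link with a new target.
workdir=$(mktemp -d)
mkdir -p "$workdir/zoneinfo/America"
touch "$workdir/zoneinfo/UTC" "$workdir/zoneinfo/America/Chicago"
ln -s  "$workdir/zoneinfo/UTC" "$workdir/localtime"               # initial setting
ln -sf "$workdir/zoneinfo/America/Chicago" "$workdir/localtime"   # -f replaces the old link
target=$(readlink "$workdir/localtime")
echo "localtime -> $target"
rm -rf "$workdir"
```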
&lt;br /&gt;
----&lt;br /&gt;
=== Failing shm_open calls ===&lt;br /&gt;
&lt;br /&gt;
First, check whether /dev/shm is mounted inside the VE.&lt;br /&gt;
The &#039;cat /proc/mounts&#039; command should show something like this:&lt;br /&gt;
 tmpfs /dev/shm tmpfs rw 0 0&lt;br /&gt;
&lt;br /&gt;
If /dev/shm is not mounted, there are two ways to solve the issue:&lt;br /&gt;
1. execute the following command inside the VE (doesn&#039;t require a VE reboot):&lt;br /&gt;
 mount -t tmpfs none /dev/shm&lt;br /&gt;
2. add the following line to /etc/fstab inside the VE and reboot it:&lt;br /&gt;
 tmpfs         /dev/shm        tmpfs           defaults        0 0&lt;br /&gt;
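&lt;br /&gt;
The check can be scripted; here is a sketch run against captured sample output rather than the live /proc/mounts:&lt;br /&gt;

```shell
# Sketch: detect whether a tmpfs is mounted on /dev/shm from
# /proc/mounts-style lines (sample text stands in for the real file).
mounts='rootfs / rootfs rw 0 0
tmpfs /dev/shm tmpfs rw 0 0'
if printf '%s\n' "$mounts" | grep -q '^tmpfs /dev/shm tmpfs'; then
    shm_status="mounted"
else
    shm_status="missing"
fi
echo "/dev/shm: $shm_status"
```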
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You can have a mounted but not running ve. Just:&lt;br /&gt;
 vzctl mount &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Debian can&#039;t get on the network ===&lt;br /&gt;
When a debian sys can’t get on the network, and you try:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set 1046 --ipadd 69.55.227.117&lt;br /&gt;
Adding IP address(es): 69.55.227.117&lt;br /&gt;
Failed to bring up lo.&lt;br /&gt;
Failed to bring up venet0.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They probably removed the iproute package, which must be the swsoft version. To restore it:&lt;br /&gt;
&amp;lt;pre&amp;gt;# dpkg -i --veid=1046 --admindir=/vz1/private/1046/root/var/lib/dpkg --instdir=/vz1/private/1046/root/ /vz/template/debian-3.0/iproute_20010824-8_i386.vz.deb&lt;br /&gt;
(Reading database ... 16007 files and directories currently installed.)&lt;br /&gt;
Preparing to replace iproute 20010824-8 (using .../iproute_20010824-8_i386.vz.deb) ...&lt;br /&gt;
Unpacking replacement iproute ...&lt;br /&gt;
Setting up iproute (20010824-8) ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then restart their ve&lt;br /&gt;
&lt;br /&gt;
=== CentOS can&#039;t get on the network ===&lt;br /&gt;
&lt;br /&gt;
When a CentOS system was running but can&#039;t get on the network, they probably updated iproute via yum.&lt;br /&gt;
&lt;br /&gt;
Copy an older version of the iproute rpm to their VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cp -p /root/iproute/iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm /vz2/private/&amp;lt;1527&amp;gt;/fs/root&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enter their ve, install the rpm, restart the network, and prevent yum from updating iproute.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ve &amp;lt;1527&amp;gt;&lt;br /&gt;
rpm -i --force iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm&lt;br /&gt;
rpm -q iproute&lt;br /&gt;
rpm -e &amp;lt;new iproute rpm&amp;gt;&lt;br /&gt;
service network restart&lt;br /&gt;
echo &amp;quot;exclude=iproute&amp;quot; &amp;gt;&amp;gt;/etc/yum/ip-exclude&lt;br /&gt;
echo &amp;quot;include=/etc/yum/ip-exclude&amp;quot; &amp;gt;&amp;gt;/etc/yum.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
in a ve i do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /&lt;br /&gt;
du -h .&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and get: 483M    .&lt;br /&gt;
&lt;br /&gt;
i do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;bash-2.05a# df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  2.3G  1.7G  56% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
how can this be?&lt;br /&gt;
&lt;br /&gt;
Is it possible that the quota file was corrupted somehow? Please try:&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl stop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota drop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota init &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzctl start &amp;lt;VEID&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
How to stop vz from starting after reboot:&lt;br /&gt;
&lt;br /&gt;
 VIRTUOZZO=no &lt;br /&gt;
in &lt;br /&gt;
 /etc/sysconfig/vz&lt;br /&gt;
&lt;br /&gt;
To start: &lt;br /&gt;
 service vz start&lt;br /&gt;
(after setting VIRTUOZZO=yes in /etc/sysconfig/vz)&lt;br /&gt;
&lt;br /&gt;
service vz restart will do some kind of &#039;soft reboot&#039; -- restart all&lt;br /&gt;
VPSes and reload modules without rebooting the node&lt;br /&gt;
&lt;br /&gt;
if you need to shut down all VPSes really really fast, run killall -9 init&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Postfix tip:&lt;br /&gt;
&lt;br /&gt;
You may want to tweak settings: default_process_limit=10&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* the uptime/load that vz users see isn’t what we see - it&#039;s their own load&lt;br /&gt;
* Vz overwrites resolv.conf on every boot with what’s in the ve config file. It also overwrites the hostname&lt;br /&gt;
* Quotas in vz: set gid_limit in the vz config file and restart the ve. The errors leading up to this will be &amp;quot;out of files&amp;quot;, but with no inode or space issues. Set it to 2000 in the conf and restart the ve&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
== vm ==&lt;br /&gt;
&lt;br /&gt;
To create a new Linux VPS&lt;br /&gt;
&lt;br /&gt;
* get an IP address from mgmt screen&lt;br /&gt;
* run df -h to figure out which disk has enough space&lt;br /&gt;
* run vm with no arguments to get syntax and exact template name&lt;br /&gt;
* use col0xxxx for hostname if they don’t give you a hostname&lt;br /&gt;
* copy over dir, veid, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
 vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; )&lt;br /&gt;
&lt;br /&gt;
== disable ==&lt;br /&gt;
To disable a customer&#039;s Virtuozzo VPS server.&lt;br /&gt;
&lt;br /&gt;
* stop the server&lt;br /&gt;
  vzctl stop &amp;lt;veid&amp;gt;&lt;br /&gt;
* prevent restart on server reboot and user restart via Control Panel&lt;br /&gt;
  vzctl set &amp;lt;veid&amp;gt; --disabled=yes --offline_management=no --save&lt;br /&gt;
&lt;br /&gt;
== cancelve ==&lt;br /&gt;
When a customer cancels all service.&lt;br /&gt;
&lt;br /&gt;
 cancelve &amp;lt;veid&amp;gt;&lt;br /&gt;
this will:&lt;br /&gt;
* stop a ve&lt;br /&gt;
* check for backups (offer to remove them from the backup server &lt;br /&gt;
and the backup.config)&lt;br /&gt;
* rename the private dir&lt;br /&gt;
* check for PTR, provide the commands to reset to default&lt;br /&gt;
* rename the ve’s config&lt;br /&gt;
* remind you to remove firewall rules&lt;br /&gt;
* remind you to remove DNS entries&lt;br /&gt;
&lt;br /&gt;
== ipadd ==&lt;br /&gt;
 ipadd  &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
adds IP(s) to a ve&lt;br /&gt;
&lt;br /&gt;
== ipdel ==&lt;br /&gt;
 ipdel &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
removes IP(s) from a ve&lt;br /&gt;
&lt;br /&gt;
== vc ==&lt;br /&gt;
 vc &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
display the contents of the vz config file for the &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;cat /vzconf/&amp;lt;veid&amp;gt;.conf&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vl ==&lt;br /&gt;
 vl&lt;br /&gt;
&lt;br /&gt;
List all virtual environments running on this machine.&lt;br /&gt;
&lt;br /&gt;
displays a list of ve #’s, one per line (ostensibly for use in a for loop)&lt;br /&gt;
&lt;br /&gt;
== vp ==&lt;br /&gt;
 vp &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
show processes running under this &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzps auxww -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vpe ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vpe &amp;lt;veid&amp;gt; &lt;br /&gt;
this will allow you to do a vp when a ve is running out of control, the equivalent of (deprecated since vp operates outside the VPS): &lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set &amp;lt;veid&amp;gt; --kmemsize 2100000:2200000&lt;br /&gt;
vzctl exec &amp;lt;veid&amp;gt; ps auxw&lt;br /&gt;
vzctl set &amp;lt;veid&amp;gt; --kmemsize (ve’s orig lvalue):(ve’s orig hvalue)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vt ==&lt;br /&gt;
 vt &amp;lt;veid&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vztop -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vr ==&lt;br /&gt;
 vr &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart a &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl stop &amp;lt;veid&amp;gt;; vzctl start &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
You can run this even if the ve is down - the stop command will just fail&lt;br /&gt;
&lt;br /&gt;
== vs ==&lt;br /&gt;
 vs [veid]&lt;br /&gt;
displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve configured on the system (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;)&lt;br /&gt;
If passed an argument, gives the status for just that ve. &lt;br /&gt;
A running system looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 16066 exist mounted running&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running (but does exist) looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 9990 exist unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running and doesn’t exist looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 421 deleted unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs2 ==&lt;br /&gt;
 vs2 [veid]&lt;br /&gt;
this is similar to vs in that it displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) for each ve,&lt;br /&gt;
but its list comes from doing an ls on the data dirs. This is meant to catch&lt;br /&gt;
the rare case where a ve exists on disk but is not configured. &lt;br /&gt;
&lt;br /&gt;
== vw ==&lt;br /&gt;
 vw [veid]&lt;br /&gt;
displays the output of ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;’ (the equivalent of &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; w&amp;lt;/tt&amp;gt;) for each configured ve (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;). Useful for determining which ve is contributing to a heavily-loaded system.&lt;br /&gt;
If passed an argument, gives ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;‘ output for just that ve. &lt;br /&gt;
Ex:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt2 etc]# vw&lt;br /&gt;
134&lt;br /&gt;
 10:52pm  up 79 days,  6:14,  0 users,  load average: 0.02, 0.02, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16027&lt;br /&gt;
  2:52pm  up 7 days, 19:54,  0 users,  load average: 0.00, 0.00, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16055&lt;br /&gt;
  2:52pm  up 79 days,  6:38,  0 users,  load average: 0.00, 0.04, 0.07&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vwe ==&lt;br /&gt;
 vwe [constraint]&lt;br /&gt;
just like &amp;lt;tt&amp;gt;vw&amp;lt;/tt&amp;gt;, but takes a constraint as an argument and only shows ve’s with loads &amp;gt;= the constraint provided. If no constraint is provided, 1 is used by default&lt;br /&gt;
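&lt;br /&gt;
The load filtering can be sketched with awk over sample ‘w’ header lines (vwe itself may be implemented differently):&lt;br /&gt;

```shell
# Sketch: count lines whose 1-minute load average meets the
# constraint (default 1). Sample `w` headers stand in for live ve's.
thr=1
sample=' 10:52pm  up 79 days,  6:14,  0 users,  load average: 0.02, 0.02, 0.00
  2:52pm  up 7 days, 19:54,  0 users,  load average: 1.35, 0.90, 0.40'
hits=$(printf '%s\n' "$sample" | awk -v thr="$thr" '
    { sub(/.*load average: /, ""); split($0, a, ", ");
      if (a[1] + 0 >= thr) n++ }
    END { print n + 0 }')
echo "lines at or above threshold: $hits"
```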
&lt;br /&gt;
== vzs ==&lt;br /&gt;
 vzs [veid]&lt;br /&gt;
displays the beancounter status for all ve’s, or a particular ve if an argument is passed&lt;br /&gt;
&lt;br /&gt;
== ve ==&lt;br /&gt;
 ve &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
enter a &amp;lt;veid&amp;gt; as root.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl enter &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vx ==&lt;br /&gt;
 vx &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&lt;br /&gt;
&lt;br /&gt;
execute a single command in the &amp;lt;veid&amp;gt; environment.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;/usr/sbin/vzctl exec &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs [count]&lt;br /&gt;
a script which outputs a continuous report (or a certain number of reports if an option is passed) of processes stuck in the D state and which VPS’s those procs belong to.&lt;br /&gt;
&lt;br /&gt;
== setmem ==&lt;br /&gt;
 setmem VEID &amp;lt;256|512|768|1024|1280|1536|2048&amp;gt; [burst GB]&lt;br /&gt;
adjusts the memory resources for the VE. If this is on VZ ver &amp;gt;4, you will also be prompted to provide a burst ram amount. Sample usage:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# setmem 21671 1024 4&lt;br /&gt;
/usr/sbin/vzctl set 21671 --ram 1024m --swap 3072m --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== afacheck.sh ==&lt;br /&gt;
 afacheck.sh&lt;br /&gt;
displays the health/status of containers and mirrors on an adaptec card (currently quar1, tempvirt1-2, virt9, virt10); all others are LSI&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update virt scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be set up as a cronjob to watch processes on a virt when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== findbackuppigs.pl ==&lt;br /&gt;
 findbackuppigs.pl&lt;br /&gt;
looks for files larger than 50MB which customers have asked us to backup. Emails matches&lt;br /&gt;
to linux@johncompanies.com&lt;br /&gt;
&lt;br /&gt;
== gatherlinux.pl ==&lt;br /&gt;
 gatherlinux.pl&lt;br /&gt;
gathers up data about ve’s configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficgather.pl ==&lt;br /&gt;
 linuxtrafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by ve to support@johncompanies.com and payments@johncompanies.com.&lt;br /&gt;
Optional arguments are year and month (must be in the past). If not passed, assumes last month. Relies on &lt;br /&gt;
traffic logs created by netstatreset and netstatbackup&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch.pl ==&lt;br /&gt;
 linuxtrafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo versions &amp;lt;= 2.5.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch2.pl ==&lt;br /&gt;
 linuxtrafficwatch2.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo version 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg  - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== mb (linux) ==&lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== migrate ==&lt;br /&gt;
 migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; – vzmigrate is a util to seamlessly move a ve from one host to another. This wrapper was written because virtuozzo version 2.6 had a bug where the ve’s ip(s) on the src system were not properly removed from arp/route tables; this script mitigates that. Since it makes multiple ssh connections to the target host, it’s a good idea to put the pub key for the src system in the authorized_keys file on the target host. In addition, it emails ve owners when their migration starts and stops (if they place email addresses in a file on their system: /migrate_notify). To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
== migrateonline ==&lt;br /&gt;
 migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is the same as migrate but will migrate a ve in &amp;lt;tt&amp;gt;--online&amp;lt;/tt&amp;gt; mode, which means it won’t be shut down at the end of the migration. This only works when migrating ve’s between 2 machines running a 2.6 kernel (currently tempvirt1-2, virt16-19, virt12). If you get an error that the machine you’re trying to migrate to has a different CPU or features, etc, then you have to edit the file and add the -f switch to the vzmigrate line- you can basically ignore this kind of warning (but never ignore a warning about missing templates on the destination node). NOTE: This edit (if made to migrateonline) will be overwritten by the base script during each night’s backup.&lt;br /&gt;
&lt;br /&gt;
== netstatbackup ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup &lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== netstatbackup2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== netstatreset ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset&lt;br /&gt;
writes traffic count data to a logfile and resets counters to 0. Works on virtuozzo versions 2.5.x &lt;br /&gt;
&lt;br /&gt;
== netstatreset2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatchlinux ==&lt;br /&gt;
 orphanedbackupwatchlinux &lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to &lt;br /&gt;
delete them&lt;br /&gt;
&lt;br /&gt;
== rsync.backup (linux) ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== startvirt.pl ==&lt;br /&gt;
 startvirt.pl&lt;br /&gt;
forks off start ve commands, keeping 6 running at a time. This is not to be used on systems where fastboot is enabled, as it circumvents the benefit of the fastboot. The script occasionally fails to exit cleanly and will continue to use up CPU, so it should be watched and you may have to ^C out of it. Don’t exit from the script till you’re sure all ve’s are started – if you do, you’ll need to start them manually and may have to free up locks. Be careful though- ^C’ing out can leave some VE’s in an odd bootup state, so check which ve’s aren’t running and/or confirm all have started (and ‘vr’ any stragglers manually) after interrupting startvirt.&lt;br /&gt;
&lt;br /&gt;
== taskdone (linux) ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was &lt;br /&gt;
executed as the subject&lt;br /&gt;
&lt;br /&gt;
== vb (linux) ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/sbin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vemakeXX ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemakerh9 &lt;br /&gt;
ve create script for RH9 (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian30 &lt;br /&gt;
ve create script for debian 3.0 (Woody) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian31 &lt;br /&gt;
ve create script for debian 3.1 (Sarge) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian40 &lt;br /&gt;
ve create script for debian 4.0 (Etch) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakefedora, vemakefedora2, vemakefedora4, vemakefedora5, vemakefedora6, vemakefedora7&lt;br /&gt;
ve create script for fedora core 1, 2, 4, 5, 6, 7 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakecentos3, vemakecentos4&lt;br /&gt;
ve create script for centos 3, 4 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakesuse, vemakesuse93, vemakesuse100&lt;br /&gt;
ve create script for suse 9.2, 9.3, 10.0 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakeubuntu5, vemakeubuntu606, vemakeubuntu610, vemakeubuntu704&lt;br /&gt;
ve create script for ubuntu 5.10, 6.06, 6.10, 7.04 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
== vemove ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemove &amp;lt;veid&amp;gt; &amp;lt;target_ip&amp;gt; &amp;lt;/vz/private/123&amp;gt;&lt;br /&gt;
this script simplifies the old way of moving ve’s from one system to another - in short, moving a ve to or from a virt running virtuozzo &amp;lt; 2.6.x.&lt;br /&gt;
It’s the equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;tar cfpP - &amp;lt;veid&amp;gt; --ignore-failed-read | (ssh -2 -c arcfour &amp;lt;target_ip&amp;gt; &amp;quot;split - -b 1024m &amp;lt;/vz/private/123&amp;gt;.tar&amp;quot; )&amp;lt;/tt&amp;gt;&lt;br /&gt;
This should only be used if migrate/vzmigrate can’t be used. &lt;br /&gt;
&lt;br /&gt;
== vim.watchdog ==&lt;br /&gt;
 vim.watchdog &lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu. Works on virtuozzo versions 2.5.x&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog2 ==&lt;br /&gt;
 vim.watchdog2&lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu.&lt;br /&gt;
Works on virtuozzo versions 2.6.x.&lt;br /&gt;
&lt;br /&gt;
== vzmigrate ==&lt;br /&gt;
 vzmigrate &amp;lt;target_ip&amp;gt; -r no &amp;lt;veid&amp;gt;:[dst veid]:[dst /vzX/private/veid]:[dst /vzX/root/veid]&lt;br /&gt;
(this is the raw command “wrapped” by migrate/migrateonline) this will seamlessly move a ve from one host to another. The ve will run for the duration of the migration till the very end when it’s shut down, ip moved and started up on the target system. The filesystem on the src will remain. This should be watched – occasionally the move will timeout and leave the system shut down. If target private and root aren’t specified it just puts it in /vz. Only works when both systems are running virtuozzo 2.6.x&lt;br /&gt;
&lt;br /&gt;
== vztrafdump.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions &amp;lt;= 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== vztrafdump2.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump2.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== addtun ==&lt;br /&gt;
 addtun &amp;lt;veid&amp;gt;&lt;br /&gt;
Adds a tun device to the ve.&lt;br /&gt;
&lt;br /&gt;
== bwcap ==&lt;br /&gt;
 bwcap &amp;lt;veid&amp;gt; &amp;lt;kbps&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;bwcap 1234 512&amp;lt;/tt&amp;gt;&lt;br /&gt;
Caps a VE’s bandwidth to the amount given&lt;br /&gt;
&lt;br /&gt;
== setdisk ==&lt;br /&gt;
 setdisk &amp;lt;veid&amp;gt; &amp;lt;diskspace in GB&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;setdisk 1234 5&amp;lt;/tt&amp;gt;&lt;br /&gt;
Gives a VE a given amount of disk space&lt;br /&gt;
&lt;br /&gt;
== vdf ==&lt;br /&gt;
 vdf &amp;lt;veid&amp;gt; &lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; df -h&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vdff ==&lt;br /&gt;
 vdff&lt;br /&gt;
runs a (condensed) vdf for all ve’s in your pwd (must be run from /vz/privateN)&lt;br /&gt;
&lt;br /&gt;
== mvbackups ==&lt;br /&gt;
 mvbackups &amp;lt;veid&amp;gt; &amp;lt;target_machine&amp;gt; (virt1) &amp;lt;target_dir&amp;gt; (vz1)&lt;br /&gt;
moves backups from one location to another on the backup server, and provides you with option to remove entries from current backup.config, and simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== checkquota ==&lt;br /&gt;
 checkquota&lt;br /&gt;
for all the ve’s in the cwd (run from /vz/private, /vz1/private, etc) reports what vz quota says they’re using and what the actual usage is (as reported by du)&lt;br /&gt;
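The du side of that comparison can be sketched roughly like so (an illustration only, not the actual script; the real checkquota also queries vzquota per veid, whose output format varies by virtuozzo version):

```shell
# Rough sketch of checkquota's du side: report actual disk usage for each
# ve directory under the given path, so it can be eyeballed against what
# vzquota reports. (Illustrative; the vzquota half is omitted here.)
report_actual() {
  for d in "$1"/*/; do
    [ -d "$d" ] || continue
    d=${d%/}
    usage=$(du -sk "$d" | awk '{print $1}')
    echo "${d##*/} actual=${usage}K"
  done
}
```

Run from /vz/private (or /vz1/private, etc.) the real script walks the cwd the same way.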
&lt;br /&gt;
== clearquota ==&lt;br /&gt;
 clearquota &amp;lt;veid&amp;gt;&lt;br /&gt;
Recalculates a ve’s quota, prints out the usage before and after. The equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;vdf &amp;lt;veid&amp;gt;; v stop &amp;lt;veid&amp;gt;; vzquota drop &amp;lt;veid&amp;gt;; v start &amp;lt;veid&amp;gt;; vdf &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs&lt;br /&gt;
Sometimes the servers have a large number of processes get stuck in the D state- this script shows (every 3 secs) which VE’s have D procs, which procs&lt;br /&gt;
are stuck and a running average of the top “offenders”&lt;br /&gt;
&lt;br /&gt;
== vzstat ==&lt;br /&gt;
 vzstat&lt;br /&gt;
sort of like top for VZ. Sort VEs by CPU usage by pressing the &#039;o&#039; and then &#039;c&#039; keys&lt;br /&gt;
&lt;br /&gt;
== stopvirt ==&lt;br /&gt;
 stopvirt&lt;br /&gt;
will stop VEs as fast as it can, 6 at a time. May not exit when complete so you should watch [[#vzstat|vzstat]] in another window.&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2226</id>
		<title>VPS Management</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2226"/>
		<updated>2020-04-30T20:21:50Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Ganeti shutdown a VPS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Common Problems =&lt;br /&gt;
== Login to any machine without a password ==&lt;br /&gt;
&lt;br /&gt;
This is possible via the use of ssh keys. The process is thus:&lt;br /&gt;
&lt;br /&gt;
1. place the public key for your user (root@mail) in the /root/.ssh/authorized_keys file on the server you wish to login to&lt;br /&gt;
 cat /root/.ssh/id_dsa.pub&lt;br /&gt;
(paste that into authorized_keys on the target server). If the file doesn&#039;t exist, create it.&lt;br /&gt;
&lt;br /&gt;
2. enable root login (usually only applies to FreeBSD). Edit the /etc/ssh/sshd_config on the target server and change:&lt;br /&gt;
&amp;lt;tt&amp;gt;#PermitRootLogin no&amp;lt;/tt&amp;gt;&lt;br /&gt;
to&lt;br /&gt;
&amp;lt;tt&amp;gt;PermitRootLogin yes&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Restart the sshd on the target machine. First, find the sshd process: &lt;br /&gt;
 jailps &amp;lt;hostname&amp;gt; | grep sshd &lt;br /&gt;
or &lt;br /&gt;
 vp &amp;lt;VEID&amp;gt; | grep sshd&lt;br /&gt;
&lt;br /&gt;
Look for the process resembling:&lt;br /&gt;
 root     17296  0.0  0.0  5280 1036 ?        Ss    2011   4:27 /usr/sbin/sshd &lt;br /&gt;
(this is the sshd)&lt;br /&gt;
&lt;br /&gt;
Not:&lt;br /&gt;
 root      6270  0.5  0.0  6808 2536 ?        Ss   14:33   0:00 sshd: root [priv]&lt;br /&gt;
(this is an sshd child- someone already ssh&#039;d in as root)&lt;br /&gt;
&lt;br /&gt;
Restart the sshd: &lt;br /&gt;
 kill -1 &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex:&lt;br /&gt;
 kill -1 17296&lt;br /&gt;
&lt;br /&gt;
You may now ssh in.&lt;br /&gt;
&lt;br /&gt;
Once you&#039;re done, IF you enabled root login, you should repeat steps 2 and 3 to disable root logins.&lt;br /&gt;
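The append in step 1 can be made idempotent so repeated runs never stack duplicate keys. A local sketch (the temp file and fake key stand in for the target's /root/.ssh/authorized_keys and the real id_dsa.pub contents):

```shell
# Idempotent append of a public key to authorized_keys (sketch).
# KEY is a stand-in for the contents of /root/.ssh/id_dsa.pub; AK stands
# in for /root/.ssh/authorized_keys on the target server.
KEY='ssh-dss AAAAB3fakekeydata root@mail'
AK=$(mktemp)
grep -qxF "$KEY" "$AK" || echo "$KEY" >> "$AK"
grep -qxF "$KEY" "$AK" || echo "$KEY" >> "$AK"   # second run is a no-op
```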
&lt;br /&gt;
== Letting someone in who has locked themselves out (killed sshd, lost pwd) ==&lt;br /&gt;
&lt;br /&gt;
There are two ways people frequently lock themselves out - either they forget a password, or they kill off sshd somehow.&lt;br /&gt;
&lt;br /&gt;
These are actually both fairly easy to solve.  First, let&#039;s say someone kills off their sshd, or somehow mangles /etc/ssh/sshd_config such that it no longer lets them in.&lt;br /&gt;
&lt;br /&gt;
Their email may be very short, or it may have all sorts of details about how you should fix sshd_config to let them in ... just ignore all of this. They can fix their own mangled sshd.  Fixing this is very simple.  First, edit the /etc/inetd.conf on their system and uncomment the telnet line:&lt;br /&gt;
&lt;br /&gt;
 telnet stream  tcp     nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
 #telnet stream  tcp6    nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
&lt;br /&gt;
(just leave the tcp6 version of telnet commented)&lt;br /&gt;
&lt;br /&gt;
Then, use jailps to list the processes on their system, and find their inetd process.  Then simply:&lt;br /&gt;
&lt;br /&gt;
 kill -HUP (pid)&lt;br /&gt;
&lt;br /&gt;
where (pid) is the PID of their inetd process.  Now they have telnet running on their system and they can log in and do whatever they need to do.&lt;br /&gt;
&lt;br /&gt;
The only complications that could occur are:&lt;br /&gt;
&lt;br /&gt;
a) their firewall config on our firewall has port 23 blocked, in which case you will need to open that - will be covered in a different lesson.&lt;br /&gt;
&lt;br /&gt;
b) they are not running inetd, so you can&#039;t HUP it.  If this happens, edit their /etc/rc.conf, add the inetd_enable=&amp;quot;YES&amp;quot; line, and then kill&lt;br /&gt;
their jail with /tmp/jailkill.pl - then restart their jail with the jail line from their quad/safe file.  Easy.&lt;br /&gt;
&lt;br /&gt;
If they have forgotten a password,&lt;br /&gt;
&lt;br /&gt;
On 6.x+ you can reset their password with:&lt;br /&gt;
 jexec &amp;lt;jailID from jls&amp;gt; passwd root&lt;br /&gt;
&lt;br /&gt;
Note: the default password for 6.x jails is 8ico2987, for 4.x it is p455agfa&lt;br /&gt;
&lt;br /&gt;
On 4.x, you need to cd to their etc directory&lt;br /&gt;
... for instance:&lt;br /&gt;
&lt;br /&gt;
 cd /mnt/data2/198.78.65.136-col00261-DIR/etc&lt;br /&gt;
&lt;br /&gt;
and run:&lt;br /&gt;
&lt;br /&gt;
 vipw -d .&lt;br /&gt;
&lt;br /&gt;
Then paste in these two lines (there’s a paste with these):&lt;br /&gt;
&lt;br /&gt;
 root:$1$krszPxhk$xkCepSnz3mIikT3vCtJCt0:0:0::0:0:Charlie &amp;amp;:/root:/bin/csh&lt;br /&gt;
 user:$1$Mx9p5Npk$QdMU6c8YQqp2FW2M3irEh/:1001:1001::0:0:User &amp;amp;:/home/user:/bin/sh&lt;br /&gt;
&lt;br /&gt;
overwriting the lines they already have for &amp;quot;user&amp;quot; and &amp;quot;root&amp;quot; - then just tell them that both user and root have been reset to the default password of p455agfa.&lt;br /&gt;
&lt;br /&gt;
For linux, just run passwd inside the shell, or: &lt;br /&gt;
 vzctl set &amp;lt;veid&amp;gt; --userpasswd root:p455agfa --save&lt;br /&gt;
&lt;br /&gt;
Starting in 2009 we began giving out randomized passwords as the default password for FreeBSD and Linux. That password is stored with each system in Mgmt. When doing a reset, look up that stored password, reset the account to it, and refer the customer to the original password from their welcome email- this way we don’t have to send the password again via email (in clear text).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== sendmail can’t be contacted from ext ip (only locally) ==&lt;br /&gt;
&lt;br /&gt;
By default redhat puts this line in sendmail.mc:&lt;br /&gt;
&lt;br /&gt;
 DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
which makes it only answer on localhost.  Comment it out like:&lt;br /&gt;
&lt;br /&gt;
 dnl DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
and then rebuild sendmail.cf with:&lt;br /&gt;
&lt;br /&gt;
 m4 /etc/mail/sendmail.mc &amp;gt; /etc/sendmail.cf&lt;br /&gt;
&lt;br /&gt;
== virt doesn’t properly let go of ve’s ip(s) when moved to another system ==&lt;br /&gt;
&lt;br /&gt;
On virtuozzo 2.6 systems, it&#039;s been observed that when moving ips from one virt to another that sometimes the routing table will not get updated to reflect the removal of the ip addresses.&lt;br /&gt;
&lt;br /&gt;
A recent example was a customer that was moving to a new ve on a new virt and the ip addresses were traded between the two ve&#039;s.  After the trade the two systems were not able to talk to each other.  When looking at the routing table for the old system all the ip addresses were still in the routing table as being local, like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;netstat -rn | grep 69.55.225.149&lt;br /&gt;
69.55.225.149   0.0.0.0         255.255.255.255 UH       40 0          0 venet0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was preventing traffic to the other system from being routed properly.&lt;br /&gt;
The solution is to manually delete the route:&lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.225.149 gw 0.0.0.0&lt;br /&gt;
&lt;br /&gt;
Supposedly, this was fixed in 2.6.1&lt;br /&gt;
&lt;br /&gt;
== sshd on FreeBSD 6.2 segfaults ==&lt;br /&gt;
&lt;br /&gt;
First try to reinstall ssh&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/secure&lt;br /&gt;
cd lib/libssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.sbin/sshd&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.bin/ssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Failing that, find the library that’s messed up:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;ldd /usr/sbin/sshd&lt;br /&gt;
         libssh.so.3 =&amp;gt; /usr/lib/libssh.so.3 (0x280a3000) &lt;br /&gt;
         libutil.so.5 =&amp;gt; /lib/libutil.so.5 (0x280d8000) &lt;br /&gt;
         libz.so.3 =&amp;gt; /lib/libz.so.3 (0x280e4000) &lt;br /&gt;
         libwrap.so.4 =&amp;gt; /usr/lib/libwrap.so.4 (0x280f5000) &lt;br /&gt;
         libpam.so.3 =&amp;gt; /usr/lib/libpam.so.3 (0x280fc000) &lt;br /&gt;
         libbsm.so.1 =&amp;gt; /usr/lib/libbsm.so.1 (0x28103000) &lt;br /&gt;
         libgssapi.so.8 =&amp;gt; /usr/lib/libgssapi.so.8 (0x28112000) &lt;br /&gt;
         libkrb5.so.8 =&amp;gt; /usr/lib/libkrb5.so.8 (0x28120000) &lt;br /&gt;
         libasn1.so.8 =&amp;gt; /usr/lib/libasn1.so.8 (0x28154000) &lt;br /&gt;
         libcom_err.so.3 =&amp;gt; /usr/lib/libcom_err.so.3 (0x28175000) &lt;br /&gt;
         libroken.so.8 =&amp;gt; /usr/lib/libroken.so.8 (0x28177000) &lt;br /&gt;
         libcrypto.so.4 =&amp;gt; /lib/libcrypto.so.4 (0x28183000) &lt;br /&gt;
         libcrypt.so.3 =&amp;gt; /lib/libcrypt.so.3 (0x28276000) &lt;br /&gt;
         libc.so.6 =&amp;gt; /lib/libc.so.6 (0x2828e000) &lt;br /&gt;
         libmd.so.3 =&amp;gt; /lib/libmd.so.3 (0x28373000)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
md5 them and compare to other jail hosts or jails running on host&lt;br /&gt;
&lt;br /&gt;
for libcrypto reinstall:&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/src/crypto&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Moving a server to a different account in the management system ==&lt;br /&gt;
&lt;br /&gt;
This is a tough one. You gotta touch a bunch of tables in the mysql database.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  systems&lt;br /&gt;
  subs&lt;br /&gt;
  ipmap&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
basically, get the sysid and go off that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
update systems set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update subs set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update ipmap set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You then need to rename the server disk space to change the old ID to the new one.&lt;br /&gt;
&lt;br /&gt;
For a jail -&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stopjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;     69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;-DIR 69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;-DIR&lt;br /&gt;
vi /usr/local/jail/bin/quad1       [Update the ID in the quad file]&lt;br /&gt;
startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a virt -&lt;br /&gt;
You will have to &lt;br /&gt;
* stop the virt &lt;br /&gt;
* rename the directories &lt;br /&gt;
* modify the config file /vzconf/&amp;lt;veid&amp;gt;.conf &lt;br /&gt;
* rename the virt conf file &lt;br /&gt;
* restart the new virt.&lt;br /&gt;
&lt;br /&gt;
that ought to do it..&lt;br /&gt;
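The virt steps above can be sketched as a script. This is a hedged outline (veids are hypothetical, paths follow this wiki's conventions) with a DRYRUN guard, since the commands only make sense on an actual virt host:

```shell
#!/bin/sh
# Sketch of renaming a ve from OLD to NEW, per the virt steps above.
# DRYRUN=1 (the default here) just prints each command instead of running it.
OLD=1234; NEW=5678
run() { if [ "${DRYRUN:-1}" = "1" ]; then echo "$*"; else "$@"; fi; }

run vzctl stop "$OLD"                            # stop the virt
run mv "/vz/private/$OLD" "/vz/private/$NEW"     # rename the directories
run mv "/vz/root/$OLD" "/vz/root/$NEW"
run sed -i "s/$OLD/$NEW/g" "/vzconf/$OLD.conf"   # modify the config file
run mv "/vzconf/$OLD.conf" "/vzconf/$NEW.conf"   # rename the virt conf file
run vzctl start "$NEW"                           # restart the new virt
```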
&lt;br /&gt;
&lt;br /&gt;
== Re-enable a customer account in the management system ==&lt;br /&gt;
&lt;br /&gt;
For example, to reinstate col03438, in mysql:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
show tables ;&lt;br /&gt;
select * from customers where cid=&#039;col03438&#039; ;&lt;br /&gt;
update customers set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from  systems where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set cancel_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from subs where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set status=1 where cid=&#039;col03438&#039; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ganeti VPS =&lt;br /&gt;
&lt;br /&gt;
This is how to create a VPS on the Lamphost ganeti cluster.&lt;br /&gt;
I have created scripts for creating a FreeBSD VPS and one for&lt;br /&gt;
creating a Linux VPS.&lt;br /&gt;
&lt;br /&gt;
== Ganeti Install a VPS ==&lt;br /&gt;
&lt;br /&gt;
To create a linux VPS use&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/new &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
( for example /root/new vps.example.com 30 2 69.55.238.10 centos-8 )&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To create a FreeBSD VPS, we have to do a full install.  This will create a &lt;br /&gt;
new VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/newfreebsd12 &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will then have to do the install via VNC.&lt;br /&gt;
&lt;br /&gt;
You need to customize the install by adding a &amp;quot;jci&amp;quot; account and a customer&lt;br /&gt;
account (col0XXXX) as well.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
You will need to install sudo.&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
pkg install sudo    # install sudo package&lt;br /&gt;
visudo              # to add wheel account to sudoers&lt;br /&gt;
vi /etc/group       # add jci and user acct to wheel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You will also have to set up a console in /etc/ttys.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vi /etc/ttys&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
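A typical console entry for a VNC/serial guest (an illustrative line, not taken from this wiki) enables a login getty on the first serial port:

```
# /etc/ttys: enable a getty on the serial console of the guest
ttyu0  "/usr/libexec/getty 3wire"  vt100  onifconsole secure
```

After editing, `kill -HUP 1` makes init reread /etc/ttys without a reboot.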
&lt;br /&gt;
== Ganeti restart a VPS ==&lt;br /&gt;
&lt;br /&gt;
To restart a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance reboot &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Ganeti shutdown a VPS ==&lt;br /&gt;
&lt;br /&gt;
To shutdown a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance shutdown &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To permanently delete a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance remove &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Ganeti Re-Install a VPS ==&lt;br /&gt;
&lt;br /&gt;
Re-installing the OS will destroy all data on the VPS.&lt;br /&gt;
Login to the Ganeti master node (currently gn1).&lt;br /&gt;
&lt;br /&gt;
To reinstall the OS on a VPS the VPS must be shutdown.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance shutdown &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Do the reinstall.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance reinstall &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance startup &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to do the customization for the VPS.  This is&lt;br /&gt;
basically the same as the Dedicated server customization.&lt;br /&gt;
&lt;br /&gt;
 https://69.55.229.8/index.php/New_Signups#Dedicated_Server_Install_Checklist&lt;br /&gt;
&lt;br /&gt;
= FreeBSD 10.1 VPS (jail5 and jail6) =&lt;br /&gt;
&lt;br /&gt;
This is information on how to manage the FreeBSD 10.1 VPS servers on jail5 and jail6.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 virtmake IP[,IP] &amp;lt;CID&amp;gt; hostname ipfw# email {FSP|FB|FS|FM|FP} [size in GB]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
For now, the last 2 digits of the CID must be unique.&lt;br /&gt;
&lt;br /&gt;
Must do a &amp;quot;vm start &amp;lt;CID&amp;gt;&amp;quot; after creating a VPS.&lt;br /&gt;
&lt;br /&gt;
== Starting VPS ==&lt;br /&gt;
To start all vms on the system&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To start a single VPS &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== List running VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm status&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Attach to a user VPS console ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm attach &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To detach from a console, type tilde control-d (~ ctl-d).&lt;br /&gt;
&lt;br /&gt;
== Stop a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Restart a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm restart &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Modify a VPS ==&lt;br /&gt;
&lt;br /&gt;
To modify a VPS config, stop the VPS&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Mount the VPS data file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm mount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
cd /data/vm/private/&amp;lt;CID&amp;gt;/mount&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Modify the user files necessary.&lt;br /&gt;
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm umount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Notes on a new FBSD 10.x VPS (jail6) ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#! /bin/sh&lt;br /&gt;
#&lt;br /&gt;
#  new  &amp;lt;colo#&amp;gt;  &amp;lt;disk&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
memory=1024&lt;br /&gt;
&lt;br /&gt;
x=&amp;quot;$1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
disk=&amp;quot;$2&amp;quot;&lt;br /&gt;
if [ &amp;quot;y&amp;quot; = &amp;quot;y$disk&amp;quot; ] ; then&lt;br /&gt;
    disk=60&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
disk2=`expr $disk - 4`&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vm image provision 04dcdf96-d843-11e6-b131-001e67fcc64f col0$x&lt;br /&gt;
# vm configure col0$x&lt;br /&gt;
sed -e &amp;quot;s/memory=256M/memory=${memory}M/&amp;quot; /vms/col01114/col01114.conf &amp;gt;/vms/col0$x/col0$x.conf&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
# vm start col0$x&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
exit&lt;br /&gt;
 &lt;br /&gt;
vm console col0$x&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
# to increase the customer disk&lt;br /&gt;
#&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
&lt;br /&gt;
##  on the customer VPS&lt;br /&gt;
&lt;br /&gt;
gpart recover vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapoff /dev/vtbd0p3&lt;br /&gt;
gpart delete -i 3 vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
sysctl kern.geom.debugflags=16&lt;br /&gt;
gpart resize -i 2 -a 4k -s $disk2\G vtbd0&lt;br /&gt;
&lt;br /&gt;
growfs /dev/vtbd0p2&lt;br /&gt;
&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
gpart add -t freebsd-swap -a 4k vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapon /dev/vtbd0p3&lt;br /&gt;
&lt;br /&gt;
df -h&lt;br /&gt;
reboot&lt;br /&gt;
&lt;br /&gt;
vi /etc/rc.conf&lt;br /&gt;
  hostname&lt;br /&gt;
  IP Addr&lt;br /&gt;
passwd root&lt;br /&gt;
passwd user&lt;br /&gt;
clear history&lt;br /&gt;
  cp /dev/null /var/log/utx.log&lt;br /&gt;
  cp /dev/null ~root/.history&lt;br /&gt;
  history -c&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= FreeBSD Jails =&lt;br /&gt;
&lt;br /&gt;
== Starting jails: Quad/Safe Files ==&lt;br /&gt;
&lt;br /&gt;
FreeBSD customer systems do not start up automatically at boot time.  When one of our FreeBSD machines boots, it brings up the base system and does nothing else. To start the jails, we put the commands to start each jail into one or more shell scripts and run them. Jail startup is something that needs to be actively monitored, which is why we don’t just run the scripts automatically. More on monitoring later.&lt;br /&gt;
&lt;br /&gt;
NOTE: on 7.x and later we have moved to a single quad file: &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. Startups are not done by running each quad, but rather with [[#startalljails|startalljails]], which relies on the contents of &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. The specifics are lower in this article. What follows here applies to pre-7.x systems.&lt;br /&gt;
&lt;br /&gt;
There are eight files in &amp;lt;tt&amp;gt;/usr/local/jail/rc.d&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail3# ls /usr/local/jail/rc.d/&lt;br /&gt;
quad1   quad2   quad3   quad4   safe1   safe2   safe3   safe4&lt;br /&gt;
jail3#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Four quad files and four safe files.&lt;br /&gt;
&lt;br /&gt;
Each file contains an even share of the system startup blocks: the total number of jails divided by four.&lt;br /&gt;
 &lt;br /&gt;
The reason for this is that one large script to start all the systems at boot time would take too long - the first system in the script would start right after system boot, which is great, but the last system might not start for another 20 minutes.&lt;br /&gt;
&lt;br /&gt;
Since there is no way to parallelize this during the startup procedure, we simply open four terminals (in screen window 9) and run each script, one in each terminal. This way they all run simultaneously, and the very last system in each startup script gets started in a quarter of the time it would take with one large file.&lt;br /&gt;
&lt;br /&gt;
The files are generally organized so that quad/safe 1&amp;amp;2 have only jails from disk 1, and quad/safe 3&amp;amp;4 have jails from disk 2. This helps ensure that no more than two fscks are running on any one disk at once. Further, they are balanced so that all quad/safes finish executing around the same time. We do this by making sure each quad/safe has a similar number of jails and represents a similar number of inodes (see js).&lt;br /&gt;
&lt;br /&gt;
The other, very important reason we do it this way, and this is the reason there are quad files and safe files, is that in the event of a system crash, every single vn-backed filesystem that was mounted at the time of system crash needs to be fsck&#039;d.  However, fsck&#039;ing takes time, so if we shut the system down gracefully, we don&#039;t want to fsck.&lt;br /&gt;
&lt;br /&gt;
Therefore, we have two sets of scripts - the four quad scripts are identical to the four safe scripts except for the fact that the quad scripts contain fsck commands for each filesystem.&lt;br /&gt;
&lt;br /&gt;
So, if you shut a system down gracefully, start four terminals and run safe1 in window one, and safe2 in window 2, and so on.&lt;br /&gt;
 &lt;br /&gt;
If you crash, start four terminals (or go to screen window 9) and run quad1 in window one, and quad2 in window 2, and so on.&lt;br /&gt;
&lt;br /&gt;
Here is a snip of (a 4.x version) quad2 from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
fsck -y /dev/vn16&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#fsck -y /dev/vn28&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo ‘### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted’&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
fsck -y /dev/vn22&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#fsck -y /dev/vn15&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, two of the systems specified are commented out - presumably those customers cancelled, or were moved to new servers.&lt;br /&gt;
&lt;br /&gt;
Note that the vnconfig line is the simpler form of the command, not the longer one used when the filesystem was first configured.  All that is done is: vnconfig the filesystem, fsck it, then mount it. The fourth command is the `jail` command used to start the system – but that will be covered later.&lt;br /&gt;
&lt;br /&gt;
Here is the safe2 file from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo ‘### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted’&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, it is exactly the same, but it does not have the fsck lines.&lt;br /&gt;
&lt;br /&gt;
Take a look at the last entry - note that the file is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
&lt;br /&gt;
and the mount point is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
&lt;br /&gt;
This is the general format on all the FreeBSD systems.  The file is always named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber&lt;br /&gt;
&lt;br /&gt;
and the directory is named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber-DIR&lt;br /&gt;
&lt;br /&gt;
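The naming rule above can be expressed in plain shell. This is only an illustrative sketch (the path is a sample value from this article; the variable names are ours, not part of any tool):&lt;br /&gt;

```shell
# Given a jail's backing file, the mount point is the same path with "-DIR"
# appended; the basename itself encodes the IP and the customer number.
vnfile="/mnt/data2/69.55.238.5-col00106"

mountdir="${vnfile}-DIR"        # /mnt/data2/69.55.238.5-col00106-DIR
name=${vnfile##*/}              # strip the /mnt/dataN/ prefix
ip=${name%%-*}                  # 69.55.238.5
cust=${name#*-}                 # col00106

echo "$mountdir $ip $cust"
```
&lt;br /&gt;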
If you run a safe file when a fsck is needed, the mount will fail and so will the jail:&lt;br /&gt;
&lt;br /&gt;
 # mount /dev/vn1c /mnt/data2/jails/65.248.2.131-ns1.kozubik.com-DIR&lt;br /&gt;
 mount: /dev/vn1c: Operation not permitted&lt;br /&gt;
&lt;br /&gt;
No reboot is needed; just run the quad script.&lt;br /&gt;
&lt;br /&gt;
Starting with 6.x jails, we added block delimiters to the quad/safe files, the block looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: nuie.solaris.mu&#039;&lt;br /&gt;
fsck -y /dev/concat/v30v31a&lt;br /&gt;
mount /dev/concat/v30v31a /mnt/data1/69.55.228.218-col01441-DIR&lt;br /&gt;
mount_devfs devfs /mnt/data1/69.55.228.218-col01441-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.228.218-col01441-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.228.218-col01441-DIR nuie.solaris.mu 69.55.228.218 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: nuie.solaris.mu&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These are more than just informative when running quad/safes - the echo lines MUST be present for certain tools to work properly. So it’s important that any update to the hostname also be made on the 2 echo lines. For example, if you try to startjail a jail whose hostname is on the jail line but not the echo lines, the command will return host not found.&lt;br /&gt;
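&lt;br /&gt;
A quick consistency check along these lines can be scripted. This is only an illustrative sketch, not one of our tools - it builds a sample block in a temp file and flags any jail line whose hostname is missing from a begin echo line:&lt;br /&gt;

```shell
# Build a sample quad block (format as shown above) in a temp file.
QUAD=$(mktemp)
cat > "$QUAD" <<'EOF'
echo '## begin ##: nuie.solaris.mu'
mount /dev/concat/v30v31a /mnt/data1/69.55.228.218-col01441-DIR
jail /mnt/data1/69.55.228.218-col01441-DIR nuie.solaris.mu 69.55.228.218 /bin/sh /etc/rc
echo '## end ##: nuie.solaris.mu'
EOF

# For every jail line, the hostname (3rd field) must appear on a begin echo line.
awk '/^jail /{print $3}' "$QUAD" | while read -r host; do
    grep -q "## begin ##: $host'" "$QUAD" || echo "no begin echo for $host"
done
```
A mismatch prints a warning; when the lines agree it prints nothing.&lt;br /&gt;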
&lt;br /&gt;
=== FreeBSD 7.x+ notes ===&lt;br /&gt;
&lt;br /&gt;
Starting with the release of FreeBSD 7.x, we are doing jail startups in a slightly different way. First, there is only 1 file: &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/quad1&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are no other quads or corresponding safe files. The reason for this is twofold: 1. we can pass -C to fsck, which tells it to skip the fsck if the filesystem is clean (no more need for safe files); 2. we have a new startup script which can be launched multiple times, running in parallel to start jails, where quad1 is the master jail file. &lt;br /&gt;
Quad1 could still be run as a shell script, but it would take a very long time to run completely, so that’s not advisable; if you must, break it into smaller chunks (like quad1, quad2, quad3, etc.)&lt;br /&gt;
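&lt;br /&gt;
The core idea - several invocations cooperating through lockfiles so that no jail block runs twice - can be sketched as follows. This is a hypothetical illustration of the mechanism, not the actual startalljails script; the paths and the sample quad contents are made up:&lt;br /&gt;

```shell
# Sample quad1 with the begin/end delimiters the tools key off of.
QUAD=$(mktemp)
LOCKDIR=$(mktemp -d)
cat > "$QUAD" <<'EOF'
echo '## begin ##: projects.tw.com'
echo '## end ##: projects.tw.com'
echo '## begin ##: mail1.phimail.com'
echo '## end ##: mail1.phimail.com'
EOF

# Each running copy walks the jail list; mkdir is atomic, so only one copy
# claims (and starts) any given jail, and a re-run skips finished jails.
grep '## begin ##' "$QUAD" | sed "s/.*: //; s/'//" | while read -r host; do
    if mkdir "$LOCKDIR/$host" 2>/dev/null; then
        echo "starting $host"
        # ...here the real script would run that jail's startup block
        #    (mdconfig, fsck -Cy, mount, devfs, jail)...
    fi
done
```
Running the same loop again against the same LOCKDIR starts nothing a second time, which is why the lockfiles must be cleared (preboot) before a fresh boot.&lt;br /&gt;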
&lt;br /&gt;
Here is a snip of (a 7.x version) quad1 from jail2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: projects.tw.com&#039;&lt;br /&gt;
mdconfig -a -t vnode -f /mnt/data1/69.55.230.46-col01213 -u 50&lt;br /&gt;
fsck -Cy /dev/md50c&lt;br /&gt;
mount /dev/md50c /mnt/data1/69.55.230.46-col01213-DIR&lt;br /&gt;
mount -t devfs devfs /mnt/data1/69.55.230.46-col01213-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.230.46-col01213-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.230.46-col01213-DIR projects.tw.com 69.55.230.46 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: projects.tw.com&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cancelled jails are no longer commented out and kept in quad1; rather, they’re moved to &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/deprecated&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
To start these jails, start the 4 ssh sessions as you would for a normal crash, then instead of running quad1-4, run startalljails in each window. IMPORTANT: before running startalljails, make sure you have run preboot once, as it clears out all the lockfiles and enables startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== Problems with the quad/safe files ==&lt;br /&gt;
&lt;br /&gt;
When you run the quad/safe files, there are two problems that can occur - either a particular system will hang during initialization, or a system will spit output to the screen, impeding your ability to do anything.  Or both.&lt;br /&gt;
&lt;br /&gt;
First off, when you start a jail, you see output like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;Skipping disk checks ...&lt;br /&gt;
adjkerntz[25285]: sysctl(put_wallclock): Operation not permitted&lt;br /&gt;
Doing initial network setup:.&lt;br /&gt;
ifconfig: ioctl (SIOCDIFADDR): permission denied&lt;br /&gt;
lo0: flags=8049&amp;lt;UP,LOOPBACK,RUNNING,MULTICAST&amp;gt; mtu 16384&lt;br /&gt;
Additional routing options: TCP keepalive=YESsysctl:&lt;br /&gt;
net.inet.tcp.always_keepalive: Operation not permitted.&lt;br /&gt;
Routing daemons:.&lt;br /&gt;
Additional daemons: syslogd.&lt;br /&gt;
Doing additional network setup:.&lt;br /&gt;
Starting final network daemons:.&lt;br /&gt;
ELF ldconfig path: /usr/lib /usr/lib/compat /usr/X11R6/lib /usr/local/lib&lt;br /&gt;
a.out ldconfig path: /usr/lib/aout /usr/lib/compat/aout /usr/X11R6/lib/aout&lt;br /&gt;
Starting standard daemons: inetd cron sshd sendmail sendmail-clientmqueue.&lt;br /&gt;
Initial rc.i386 initialization:.&lt;br /&gt;
Configuring syscons: blanktime.&lt;br /&gt;
Additional ABI support:.&lt;br /&gt;
Local package initialization:.&lt;br /&gt;
Additional TCP options:.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, let&#039;s look at this line, near the end:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization:.&lt;br /&gt;
&lt;br /&gt;
This is where the list of daemons that are set to start at boot time will show up.  You might see something like:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache sendmail sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
Or something like this:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: postgres postfix apache&lt;br /&gt;
&lt;br /&gt;
The problem is that many systems (about 4-5 per machine) will hang on that line.  Basically it will get part of the way through the daemons to be started:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache&lt;br /&gt;
&lt;br /&gt;
and will just sit there.  Forever.&lt;br /&gt;
&lt;br /&gt;
Fortunately, pressing ctrl-c will break out of it.  Not only will it break out of it, but it will also continue on that same line and start the other daemons:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache ^c sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
and then continue on to finish the startup, and then move to the next system to be started.&lt;br /&gt;
&lt;br /&gt;
So what does this mean?  It means that if a machine crashes, and you start four screen windows to run four quads or four safes, you need to periodically cycle between them and see if any system is stuck at that point, causing its quad/safe file to hang.  A good rule of thumb is: if you see a system at that point in the startup, give it another 100 seconds - if it is still at the exact same spot, hit ctrl-c. It’s also a good idea to go back into the quad file (just before the first command in the jail startup block) and note that this jail tends to need a control-c or more time, as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;### NOTE ### slow sendmail&#039;&lt;br /&gt;
echo &#039;### NOTE ###: ^C @ Starting sendmail.&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NEVER&#039;&#039;&#039; hit ctrl-c repeatedly if you don&#039;t get an immediate response - that will cause the following jail’s startup commands to be aborted.&lt;br /&gt;
&lt;br /&gt;
A second problem that can occur is that a jail - maybe the first one in that particular quad/safe, maybe the last one, or maybe one in the middle, will start spitting out status or error messages from one of its init scripts.  This is not a problem - basically, hit enter a few times and see if you get a prompt - if you do get a prompt, that means that the quad/safe script has already completed.  Therefore it is safe to log out (and log out of the user that you su&#039;d from) and then log back in (if necessary).&lt;br /&gt;
&lt;br /&gt;
The tricky thing is when a system in the middle starts flooding with messages, and you hit enter a few times and don&#039;t get a prompt.  Are you not getting a prompt because some subsequent system is hanging at initialization, as discussed above?  Or because that quad file is currently running an fsck?  Usually you can tell by scrolling back in screen’s history to see what it was doing before the messages started.&lt;br /&gt;
&lt;br /&gt;
If you don’t get clues from the history, you have to use your judgement - instead of giving it 100 seconds to respond, give it 2-3 mins ... if you still get no response (no prompt) when you hit enter, hit ctrl-c.  However, be aware that you might still be hitting ctrl-c in the middle of an fsck.  If so, you will get an error like &amp;quot;filesystem still marked dirty&amp;quot;, the vnconfig for it will fail and so will the jail command, and the next system in the quad file will then start up.&lt;br /&gt;
&lt;br /&gt;
If this happens, just wait until all the quad files have finished, and start that system manually.&lt;br /&gt;
&lt;br /&gt;
If things really get weird - a screen flooded with errors, no prompt, and ctrl-c does nothing - then eventually (give it ten mins or so) kill that window with ctrl-p, then k, then log in again and manually check which systems are running and which aren&#039;t, and manually start up any that are not.&lt;br /&gt;
&lt;br /&gt;
Don&#039;t EVER risk running a particular quad/safe file a second time.&lt;br /&gt;
If the quad/safe script gets executed twice, reboot the machine immediately.&lt;br /&gt;
&lt;br /&gt;
So, for all the above reasons, anytime a machine crashes and you run all the quads or all the safes, &#039;&#039;&#039;always&#039;&#039;&#039; check every jail afterwards to make sure it is running - even if you have no hangs or complications at all.&lt;br /&gt;
Run this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailpsall|jailpsall]]&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Note: [[#postboot|postboot]] also populates ipfw counts, so it &#039;&#039;&#039;should not be run multiple times&#039;&#039;&#039;; use &amp;lt;tt&amp;gt;jailpsall&amp;lt;/tt&amp;gt; for subsequent extensive ps’ing.&lt;br /&gt;
&lt;br /&gt;
And make sure they all show as running.  If one does not show as running, check its /etc/rc.conf file to see if maybe it is using a different hostname before starting it manually.&lt;br /&gt;
&lt;br /&gt;
One thing we have implemented to alleviate these startup hangs and noisy jails is to put jail start blocks that are slow or prone to hanging at the bottom of the safe/quad file. Further, for each such jail we note in each quad/safe, just before the start block, something like:&lt;br /&gt;
&lt;br /&gt;
 echo ‘### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted’&lt;br /&gt;
&lt;br /&gt;
That way we’ll be prepared to ^C when we see that message appear during the quad/safe startup process. If you observe a new, undocumented hang, &#039;&#039;&#039;after&#039;&#039;&#039; the quad/safe has finished, place a line similar to the above in the quad file, move the jail start block to the end of the file, then run [[#buildsafe|buildsafe]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 4.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which partition, IP to put it on, find an unused vn, and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vnX hostname shorthost quad/safe# ipfw# email [gb disk, default 4]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail14# /tmp/jailmake 69.55.226.152 /mnt/data2/69.55.226.152-col00182 vn23 negev.cerastes.org negev 4 182 cerastes@cerastes.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 6.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which gvinum vols are free (and which mnt (disk) each gvinum goes with/to), IP to put it on and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vN[,vN] hostname shorthost quad/safe# ipfw# email&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail19# jailmake 69.55.236.17 /mnt/data1/69.55.236.17-col01574 v10 uncle-leo.commadev.com uncle-leo 1 1574 lee@commadev.com,lee@gmail.com&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Changing an ip for freebsd VPS ==&lt;br /&gt;
&lt;br /&gt;
*stopjail (hostname)&lt;br /&gt;
*on systems using vnfile or mdfile: rename vnfile/mdfile, using new ip&lt;br /&gt;
*rename dir using new ip&lt;br /&gt;
*Edit quad (make sure to update all lines: directory, vn/mdfile)&lt;br /&gt;
*buildsafe (not necessary on systems having only quad1)&lt;br /&gt;
*if you&#039;re adding an ip not already on the host machine: &amp;lt;tt&amp;gt;ipadd x.x.x.x&amp;lt;/tt&amp;gt;&lt;br /&gt;
*startjail (hostname)&lt;br /&gt;
*if backups: &amp;lt;tt&amp;gt;mvbackups&amp;lt;/tt&amp;gt; ...&lt;br /&gt;
*Edit dir/ip in mgmt&lt;br /&gt;
*if any rules, update the firewall rules to use the new ip (be careful - make sure there aren&#039;t multiple rules per ipfw#; search by colo#)&lt;br /&gt;
*(if customer asks and has domains) update their domains on ns1c&lt;br /&gt;
*optional: update ipfw counters on the host server&lt;br /&gt;
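&lt;br /&gt;
The rename steps in the middle of that list look like the following sketch. The IPs, customer number, and base path are placeholders, and the internal tools named in the list (stopjail, buildsafe, ipadd, mvbackups) are not reproduced here; a temp directory stands in for /mnt/dataN so the example is self-contained:&lt;br /&gt;

```shell
oldip=69.55.236.17
newip=69.55.230.46
cust=col01574
base=$(mktemp -d)    # stands in for /mnt/data1

# simulate the existing layout: backing file plus -DIR mount point
touch "$base/$oldip-$cust"
mkdir "$base/$oldip-$cust-DIR"

# rename the vn/md backing file and the directory to use the new IP
mv "$base/$oldip-$cust"     "$base/$newip-$cust"
mv "$base/$oldip-$cust-DIR" "$base/$newip-$cust-DIR"

# then update every reference in the quad file (directory, mdfile, jail
# line, and both echo lines), e.g. with BSD sed:
#   sed -i '' "s/$oldip/$newip/g" /usr/local/jail/rc.d/quad1
```
&lt;br /&gt;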
&lt;br /&gt;
&lt;br /&gt;
== Rename a gconcat vol ==&lt;br /&gt;
&lt;br /&gt;
*stop jail, unmount everything&lt;br /&gt;
*gconcat stop vol&lt;br /&gt;
*gconcat clear /dev/gvinum/vN&lt;br /&gt;
*gconcat clear /dev/gvinum/vN+1&lt;br /&gt;
*gconcat label -v newname /dev/gvinum/vN /dev/gvinum/vN+1&lt;br /&gt;
*bsdlabel /dev/concat/newname&lt;br /&gt;
(make sure a partition is not fstype unused, if so change to: 4.2BSD)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Remaking a VPS (on same jail) ==&lt;br /&gt;
&lt;br /&gt;
# take him down (stopjail)&lt;br /&gt;
# make note of host, vn/gvin/md, ip&lt;br /&gt;
# remove from quad/safe&lt;br /&gt;
# move vnfile (rename)&lt;br /&gt;
# remove dir (jailmake will make a new one)&lt;br /&gt;
# remove ipfw counts&lt;br /&gt;
# jailmake&lt;br /&gt;
# update backup script&lt;br /&gt;
# update db with new dir/ip (use the “move” link, mark old as stopped today)&lt;br /&gt;
# update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (FreeBSD) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all jails back up as soon as possible. Note the time; you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages output. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log. If there are no messages, the machine may just be really busy - wait a bit (5-10 min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. Note: if you see messages about swap space exhausted, the server is obviously out of memory; however, it may recover briefly enough for you to get a jtop in to see who&#039;s launched a ton of procs (most likely) and then issue a quick jailkill to get it back under control.&lt;br /&gt;
&lt;br /&gt;
If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] - i.e. if you can’t ssh into the DRAC card (as root, using the standard root pass) and issue &lt;br /&gt;
 racadm serveraction hardreset&lt;br /&gt;
- then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console:&lt;br /&gt;
 tip jailX&lt;br /&gt;
immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts, look out for the RAID card output - does everything look healthy? The output may be scrambled; look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple of times during the boot up. The reason you want to quickly re-attach is twofold: 1. if you don’t reattach quickly then you won’t get any console output; 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup - no output, no response. &lt;br /&gt;
&lt;br /&gt;
IMPORTANT NOTE: on some older FreeBSD systems, there will be no output to the video (KVM) console as it boots up. The console output is redirected to the serial port ... so if a jail crashes, and you attach a kvm, the output during the bootup procedure will not be shown on the screen. However, when the bootup is done, you will get a login prompt on the screen and will be able to log in as normal.  &amp;lt;tt&amp;gt;/boot/loader.conf&amp;lt;/tt&amp;gt; is where serial console redirect output lives, so comment that if you want to catch output on kvm.&lt;br /&gt;
On newer systems it sends most output to both locations. &lt;br /&gt;
&lt;br /&gt;
=== Assess the health of the server ===&lt;br /&gt;
Once the server boots up fully, you should be able to ssh in. Look around- make sure all the mounts are there and reporting the correct size/usage (i.e. /mnt/data1 /mnt/data2 /mnt/data3 - look in /etc/fstab to determine which mount points should be there), check to see if RAID mirrors are healthy. See [[RAID_Cards#Common_CLI_commands_.28megacli.29|megacli]], [[#aaccheck|aaccheck]]&lt;br /&gt;
&lt;br /&gt;
Before you start the jails, you need to run [[#preboot|preboot]]. This will do some assurance checks to make sure things are prepped to start the jails. Any issues that come out of preboot need to be addressed before starting jails.&lt;br /&gt;
&lt;br /&gt;
=== Start jails ===&lt;br /&gt;
[[#Starting_jails:_Quad.2FSafe_Files|More on starting jails]]&lt;br /&gt;
Customer jails (the VPSs) do not start up automatically at boot time. When a FreeBSD machine boots, it brings up the base system and does nothing else. To start jails, we put the commands to start each jail into a shell script(s) and run the script(s). Jail startup is something that needs to be actively monitored, which is why we don’t just run the script automatically. &lt;br /&gt;
&lt;br /&gt;
In order to start jails, we run the quad files: quad1, quad2, quad3 and quad4 (on new systems there is only quad1). If the machine was cleanly rebooted - which wouldn&#039;t be the case if this was a crash - you may run the safe files (safe1 safe2 safe3 safe4) in lieu of the quads. &lt;br /&gt;
&lt;br /&gt;
Open up 4 logins to the server (use the windows in [[Screen#Screen_Organization|a9]])&lt;br /&gt;
In each of the 4 windows, do one of the following:&lt;br /&gt;
&lt;br /&gt;
If there is a [[#startalljails|startalljails]] script (and only quad1), run that command in each of the 4 windows. It will parse through the quad1 file and start each jail. Follow the instructions [[#Problems_with_the_quad.2Fsafe_files|here]] for monitoring startup. Note that you can be a little more lenient with jails that take a while to start - startalljails will work around the slow jails and start the rest. As long as there aren&#039;t 4 jails which are &amp;quot;hung&amp;quot; during startup, the rest will get started eventually.&lt;br /&gt;
	-or-&lt;br /&gt;
If there is no startalljails script, there will be multiple quad files. In each of the 4 windows, start one of the quads, i.e. start quad1 in window 1, quad2 in window 2 and so on. DO NOT start any quad twice - it will crash the server. If you accidentally do this, just jailkill all the jails which are in the quad and run the quad again. Follow the instructions here for monitoring quad startup.&lt;br /&gt;
&lt;br /&gt;
Note the time the last jail boots- this is what you will enter in the crash log.&lt;br /&gt;
&lt;br /&gt;
Save the crash log.&lt;br /&gt;
&lt;br /&gt;
=== Check to make sure all jails have started ===&lt;br /&gt;
There&#039;s a simple script which will make sure all jails have started, and enter the ipfw counter rules: [[#postboot|postboot]] &lt;br /&gt;
Run postboot, which will do a jailps on each jail it finds (excluding commented out jails) in the quad file(s). We&#039;re looking for 2 things:&lt;br /&gt;
# systems spawning out of control or too many procs&lt;br /&gt;
# jails which haven&#039;t started&lt;br /&gt;
On 7.x and newer systems it will print out the problems (which jails haven&#039;t started) at the conclusion of postboot. &lt;br /&gt;
On older systems you will need to watch closely to see if/when there&#039;s a problem, namely:&lt;br /&gt;
 &lt;br /&gt;
 [hostname] doesnt exist on this server&lt;br /&gt;
&lt;br /&gt;
When you get this message, it means one of 2 things:&lt;br /&gt;
1. the jail really didn&#039;t start:&lt;br /&gt;
When a jail doesn&#039;t start it usually boils down to a problem in the quad file. Perhaps the path name is wrong (data1 vs data2) or the name of the vn/mdfile is wrong. Once this is corrected, you will need to run the commands from the quad file manually, or you may use &amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. the customer has changed their hostname (and not told us) so their jail &#039;&#039;is&#039;&#039; running, just under a different hostname:&lt;br /&gt;
On systems with jls, this is easy to rectify. First, get the customer info: &amp;lt;tt&amp;gt;g &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
Then look for the customer in jls: &amp;lt;tt&amp;gt;jls | grep &amp;lt;col0XXXX&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
From there you will see their new hostname- you should update that hostname in the quad file: don&#039;t forget to edit it on the &amp;lt;tt&amp;gt;## begin ##&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;## end ##&amp;lt;/tt&amp;gt; lines, and in mgmt. &lt;br /&gt;
On older systems without jls, this is harder: you will need to dig further to find their new hostname- perhaps it&#039;s in their /etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once all jails are started, do some spot checks- try to ssh or browse to some customers, just to make sure things are really ok.&lt;br /&gt;
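The jail-presence check described above can be sketched roughly as follows. This is illustrative only: postboot is a local script, and the exact quad-file layout is an assumption here (jails delimited by "## begin hostname ##" lines, as referenced elsewhere in these notes).

```shell
# Report quad-file hostnames that are not in a list of running jail hostnames.
# Assumed quad format: one "## begin hostname ##" line per jail; the list of
# running hostnames would come from jls (or jailps) on a real system.
missing_jails() {
    quad="$1"       # quad file listing jails
    running="$2"    # file with one running hostname per line
    grep '^## begin ' "$quad" | awk '{ print $3 }' | while read -r host; do
        grep -qx "$host" "$running" || echo "$host doesnt exist on this server"
    done
}
```

On a real host the running-hostname list would be produced by something like `jls | awk '{print $3}'`; here it is a plain file so the sketch stays self-contained.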
&lt;br /&gt;
== Adding disk to a 7.x/8.x jail ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (md) ==&lt;br /&gt;
&lt;br /&gt;
NOTE: this doesn’t apply to mx2 which uses gvinum. Use same procedure as 6.x&lt;br /&gt;
NOTE: if you mdconfig -u (detach) before unmounting, re-mdconfig (attach), then unmount, then mdconfig -u again &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
(parts to change/customize are &amp;lt;tt&amp;gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from &amp;lt;tt&amp;gt;js&amp;lt;/tt&amp;gt;. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new md on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. Unmount it (including their devfs) but leave the md config’d (so if you use stopjail, you will have to re-mdconfig it)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mdfile and mount name, and to see the current md device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. When there&#039;s enough room to place the new system on an alternate, or the same, drive:&lt;br /&gt;
USE CAUTION not to overwrite (touch, mdconfig) existing md!!&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s 10g -f /mnt/data3/69.55.234.66-col01334 -u 97&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md97&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new space is on a different drive, move the mount point directory AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
md-unconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
archive old mdfile. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/old-col00241-mdfile-noarchive-20091211&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (vq1) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
(run &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, &amp;lt;tt&amp;gt;du&amp;lt;/tt&amp;gt; the dump file to confirm its size roughly matches the customer&#039;s usage)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure and remove old mdfile&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
rm /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(there should now be enough space to recreate your bigger system. If not, run sync a couple times)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new system (ok to reuse old mdfile and md#):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;10&amp;lt;/span&amp;gt;g -f /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt; -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
umount nfs:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbu&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If md# changed (or mount point), edit quad (&amp;lt;tt&amp;gt;vq1&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. Optional: archive old mdfile&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;br&amp;gt;&lt;br /&gt;
gzip -c old-col01588-mdfile-noarchive-20120329 &amp;gt; /deprecated/old-col01588-mdfile-noarchive-20120329.gz&amp;lt;br&amp;gt;&lt;br /&gt;
mbu&amp;lt;br&amp;gt;&lt;br /&gt;
rm  old-col01588-mdfile-noarchive-20120329&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 6.x jail (gvinum/gconcat) ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (gvinum/gconcat) ==&lt;br /&gt;
&lt;br /&gt;
(parts to change are &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;highlighted&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new volume on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;[[#stopjail|stopjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;[[#g|g]] &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mount name and existing volume/device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5a. When there&#039;s enough room to place the new system on an alternate, or the same, drive (using only UNUSED gvinum volumes- a volume counts as unused if it&#039;s only in use by the system in question):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure the new device:&amp;lt;br&amp;gt;&lt;br /&gt;
A. for a 2G system (single gvinum volume):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;bsdlabel -r -w /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
-or- &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
B. for a &amp;gt;2G system (create a gconcat volume):&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Other valid gconcat examples:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label v82-v84v109v112 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v109 /dev/gvinum/v112&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat label v82v83 /dev/gvinum/v82 /dev/gvinum/v83&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Note, long names will truncate: v144v145v148-v115 will truncate to v144v145v148-v1 (so you will refer to it as v144v145v148-v1 thereafter)&lt;br /&gt;
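The truncation observed above is consistent with a 15-character limit on the label name (an assumption inferred from the example, not verified against the gconcat source); printf can preview what a long label will become:

```shell
# Preview how a long gconcat label name would truncate at 15 characters.
# The 15-character limit is an assumption based on the example in the notes.
name="v144v145v148-v115"
printf '%.15s\n' "$name"   # prints v144v145v148-v1
```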
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; OR &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
A. &amp;lt;tt&amp;gt;mount /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
-or-&amp;lt;br&amp;gt;&lt;br /&gt;
B. &amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;) , run &amp;lt;tt&amp;gt;buildsafe&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space (i.e. if you want/need to reuse the existing gvinum volumes and add on more):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
(run df to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;gconcat/v106-v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, du the dump file to confirm its size roughly matches the customer&#039;s usage)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure the old gconcat volume&amp;lt;br&amp;gt;&lt;br /&gt;
list member gvinum volumes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat list &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Output will resemble:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;Geom name: v106v107&lt;br /&gt;
State: UP&lt;br /&gt;
Status: Total=2, Online=2&lt;br /&gt;
Type: AUTOMATIC&lt;br /&gt;
ID: 3530663882&lt;br /&gt;
Providers:&lt;br /&gt;
1. Name: concat/v106v107&lt;br /&gt;
   Mediasize: 4294966272 (4.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e2&lt;br /&gt;
Consumers:&lt;br /&gt;
1. Name: gvinum/sd/v106.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 0&lt;br /&gt;
   End: 2147483136&lt;br /&gt;
2. Name: gvinum/sd/v107.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 2147483136&lt;br /&gt;
   End: 4294966272&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
stop volume and clear members&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat stop &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat clear &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;gvinum/sd/v106.p0.s0 gvinum/sd/v107.p0.s0&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create new device- and its ok to reuse old/former members&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v106 /dev/gvinum/v107&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;), run buildsafe&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
TODO: clean up/clear old gvin/gconcat vol&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - steps to tack on a new gvin to existing gconcat- leads to corrupted fs&lt;br /&gt;
bsdlabel -e /dev/concat/v82-v84&lt;br /&gt;
&lt;br /&gt;
To figure out the new size of the c partition, multiply 4194304 by the # of 2G gvinum volumes and subtract the # of 2G volumes:&lt;br /&gt;
10G: 4194304 * 5 - 5 = 20971515&lt;br /&gt;
8G: 4194304 * 4 - 4 = 16777212&lt;br /&gt;
6G: 4194304 * 3 - 3 = 12582909&lt;br /&gt;
4G: 4194304 * 2 - 2 = 8388606&lt;br /&gt;
&lt;br /&gt;
To figure out the new size of the a partition, subtract 16 from the c partition:&lt;br /&gt;
10G: 20971515 - 16 = 20971499&lt;br /&gt;
8G: 16777212 - 16 = 16777196&lt;br /&gt;
6G: 12582909 - 16 = 12582893&lt;br /&gt;
4G: 8388606 - 16 = 8388590&lt;br /&gt;
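The partition-size arithmetic above (for this deprecated grow procedure) can be expressed as shell arithmetic, which makes it easy to check a size not in the table:

```shell
# c partition: 4194304 sectors per 2G gvinum volume, minus 1 sector per volume.
# a partition: 16 sectors smaller than c. Matches the table in the notes.
c_size() { echo $((4194304 * $1 - $1)); }
a_size() { echo $(($(c_size "$1") - 16)); }
echo "$(c_size 5) $(a_size 5)"   # 10G (5 volumes): prints 20971515 20971499
```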
&lt;br /&gt;
Orig:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a:  8388590       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c:  8388606        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
New:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a: 12582893       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c: 12582909        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
sync; sync&lt;br /&gt;
&lt;br /&gt;
growfs /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
sync&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
(keep running fsck’s till NO errors)&lt;br /&gt;
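The "keep running fsck till NO errors" step can be sketched as a retry loop. Treating fsck's exit status as the clean/dirty signal is a simplification (the notes above go by fsck's printed output), so this is only a shape for the procedure:

```shell
# Repeat a check command (with sync between passes) until it exits clean.
# Real invocation on these systems would be, e.g.:
#   fsck_until_clean fsck -fy /dev/concat/v82-v84a
fsck_until_clean() {
    until "$@"; do
        sync
    done
}
```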
&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 4.x jail ==&lt;br /&gt;
&lt;br /&gt;
NOTE: if you vnconfig -u (detach) before unmounting, re-vnconfig (attach), then unmount, then vnconfig -u again&lt;br /&gt;
 &lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them.&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to dump to a different partition and create the new vn on the current partition. If there&#039;s no space to make the dump, then move old vnfiles to backup or, worse, dump to backup.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. make vnfile, newfs and mount. USE CAUTION not to overwrite existing vn. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;vnconfig -T -S 9g -s labels -c /dev/vn30 /mnt/data2/65.214.160.117-col00496&amp;lt;br&amp;gt;&lt;br /&gt;
disklabel -r -w vn30 auto&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/vn30c&amp;lt;br&amp;gt;&lt;br /&gt;
mkdir /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/vn30c /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;dump -0a -f - /dev/vn1 | restore -r -f -&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. &amp;lt;tt&amp;gt;rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. unmount and unvnconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;umount /dev/vnNNc &amp;lt;br&amp;gt;&lt;br /&gt;
vnconfig -u /dev/vnNN&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. move vnfile. Ex: &amp;lt;tt&amp;gt;mv /mnt/data1/69.55.237.26-col00241 /mnt/data1/old-col00241-vnfile-20110312-noarchive&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove old dir: &amp;lt;tt&amp;gt;rmdir /mnt/data1/69.55.237.26-col00241-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. edit quad/safe to point to new location, run &amp;lt;tt&amp;gt;[[#buildsafe|buildsafe]]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start jail: &amp;lt;tt&amp;gt;[[#startjail|startjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
13. mv backups if necessary&lt;br /&gt;
 &lt;br /&gt;
WARNING: if you restore to an unmounted vn then you’re actually restoring onto /mnt/data1 itself&lt;br /&gt;
if that happens, you can tar up the misplaced files and repeat the steps above to recover&lt;br /&gt;
 &lt;br /&gt;
NOTE: you can’t move a customer to a system that has a base sys that is diff from the one they came from (BSD 4.8 to 4.5 = won’t work)&lt;br /&gt;
 &lt;br /&gt;
NOTE: we don’t mount procfs&#039;s to anyone’s system by default.&lt;br /&gt;
&lt;br /&gt;
NOTE: mount_nulls: &lt;br /&gt;
goofy mount_nulls may also be seen in df -k on jail2&amp;lt;br&amp;gt;&lt;br /&gt;
null mounts also need to be added in both the safe and quad files&amp;lt;br&amp;gt;&lt;br /&gt;
if a customer wants another, use the existing safe/quad entries as an example and add the new one to the file. Also exec the mount command at the prompt to add the mount (no reboot, remount or rejail necessary)&lt;br /&gt;
&lt;br /&gt;
== Moving customer to another jail machine ==&lt;br /&gt;
&lt;br /&gt;
Systems should only be moved between similarly-versioned jails. If you cannot get a match, you should go to freebsd.org and see what the differences were between the version you&#039;re moving from and the new version. Many of the updates will be to drivers and kernels - these don&#039;t affect the customer. Perhaps they updated traceroute or a library...you should copy the new/changed files into the VPS/jail on the new host. An alternate method for moving to a different version would be to buildworld to bring their full distribution up to date. Optional steps for this kind of upgrade are included below.&lt;br /&gt;
&lt;br /&gt;
1. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;[[#jailkill|jailkill]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. create new device on target system&amp;lt;br&amp;gt;&lt;br /&gt;
4.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig -T -S 4g -s labels -c /dev/vn1 /mnt/data1/69.55.22x.x-col00XXX&lt;br /&gt;
disklabel -r -w vn1 auto&lt;br /&gt;
newfs /dev/vn1c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;bsdlabel -r -w /dev/gvinum/v1&lt;br /&gt;
newfs /dev/gvinum/v1a&lt;br /&gt;
 or &lt;br /&gt;
gconcat label v1-v3 /dev/gvinum/v1 /dev/gvinum/v2 /dev/gvinum/v3&lt;br /&gt;
bsdlabel -r -w /dev/concat/v1-v3&lt;br /&gt;
newfs /dev/concat/v1-v3a&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7.x+:&amp;lt;br&amp;gt;&lt;br /&gt;
Run jailmakeempty, then skip steps 4 &amp;amp; 7 below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. make and mount the dir on new system. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;pre&amp;gt;mkdir /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
chmod 755 /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
mount [device] /mnt/data2/69.55.230.3-col00123-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. [[#stopjail|stopjail]] &amp;lt;hostname&amp;gt; 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. dump fs to new system. ex: &amp;lt;tt&amp;gt;[[#dumpremoterestore|dumpremoterestore]] /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&amp;lt;/tt&amp;gt; (make sure you can ssh as root on remote machine) &lt;br /&gt;
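The real dumpremoterestore script is local to these systems, but a dump-over-ssh pipeline of the following shape is the usual equivalent. This is a hypothetical sketch, so the helper only prints the command it would run:

```shell
# Build (but do not run) a dump-over-ssh-to-restore pipeline, the assumed
# shape of what dumpremoterestore does. Arguments: source device, target
# host, target mounted directory.
build_remote_restore() {
    src_dev="$1"; host="$2"; dir="$3"
    echo "dump -0a -f - $src_dev | ssh root@$host 'cd $dir; restore -r -f -'"
}
build_remote_restore /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR
```

As the notes say, this only works if root ssh from the source to the target machine is already set up.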
&lt;br /&gt;
&lt;br /&gt;
6a. OPTIONAL BUILDWORLD:&amp;lt;br&amp;gt;&lt;br /&gt;
 cd /usr/src&lt;br /&gt;
 make world DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
 cd etc&lt;br /&gt;
 make distribution DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
(you may have to rm an openssh file and re-make dist)&lt;br /&gt;
&lt;br /&gt;
 rm -rf /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/periodic/daily/400.status-disks&lt;br /&gt;
 vi /etc/periodic/security/100.chksetuid&lt;br /&gt;
replace: &amp;lt;tt&amp;gt;MP=`mount -t ufs | grep -v &amp;quot; nosuid&amp;quot; | awk &#039;{ print $3 }&#039; | sort`&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
with: &amp;lt;tt&amp;gt;MP=&#039;/&#039; (use single quotes)&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ask user if they want ports overwritten with current, if yes:&lt;br /&gt;
 cp -r /usr/ports /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/usr&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d .&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d . &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cat /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target:&lt;br /&gt;
 cat &amp;gt; /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. edit quad on source system copy over entries for the jail to the target system &lt;br /&gt;
take care that the vn/gvinum/gconcat/md devices on the target system aren’t in use and that the /mnt/dataN path doesn’t need changing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. run [[#buildsafe|buildsafe]] (if this is &amp;lt;=6.x) on target system (copies changes made to quad into safe file)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove ip from source system. Ex: &amp;lt;tt&amp;gt;ipdel 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. add ip to target system. Ex: &amp;lt;tt&amp;gt;ipadd 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start new system, with [[#startjail|startjail]] or manually, pasting entries found from running &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; on the new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. run &amp;lt;tt&amp;gt;[[#canceljail|canceljail]] col0xxxx&amp;lt;/tt&amp;gt; on source system. This should prompt you to remove backups if any existed. Do not let it do this, rather:&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
13. if backups existed move them to the new host via the &amp;lt;tt&amp;gt;[[#mvbackups_.28freebsd.29|mvbackups]] script&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
14. edit quad on source system, edit comment to reflect a move rather than a cancel, ex: &amp;lt;tt&amp;gt;# moved to jail2 col00241&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
15. edit mgmt to reflect new host and dir for new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
16. optional add ipfw rules on new system. Ex:&lt;br /&gt;
&amp;lt;tt&amp;gt;ipfw add 01231 count ip from 69.55.230.2 to any&lt;br /&gt;
ipfw add 01232 count ip from any to 69.55.230.2&amp;lt;/tt&amp;gt;&lt;br /&gt;
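The rule-number pattern above (an adjacent in/out pair, zero-padded to five digits) can be generated rather than typed by hand. A small sketch; the &amp;lt;tt&amp;gt;make_count_rules&amp;lt;/tt&amp;gt; helper and the base-number convention are assumptions, not an existing tool:&lt;br /&gt;

```shell
# Hypothetical helper: emit the in/out pair of ipfw count rules for an IP.
make_count_rules() {
  base=$1; ip=$2
  printf 'ipfw add %05d count ip from %s to any\n' "$base" "$ip"
  printf 'ipfw add %05d count ip from any to %s\n' $((base + 1)) "$ip"
}
make_count_rules 1231 69.55.230.2
```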
&lt;br /&gt;
== Increasing inodes for a VPS ==&lt;br /&gt;
&lt;br /&gt;
when doing the newfs: &lt;br /&gt;
 newfs -i  4096 ...&lt;br /&gt;
&lt;br /&gt;
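&amp;lt;tt&amp;gt;-i&amp;lt;/tt&amp;gt; sets the bytes-per-inode density, so halving it roughly doubles the inode count. A quick check of what &amp;lt;tt&amp;gt;-i 4096&amp;lt;/tt&amp;gt; buys on a 4 GB filesystem, assuming the historical 8192-byte default:&lt;br /&gt;

```shell
# Inodes = filesystem bytes / bytes-per-inode (approximate; ignores metadata overhead).
fs_bytes=$((4 * 1024 * 1024 * 1024))
echo $((fs_bytes / 8192))   # default density
echo $((fs_bytes / 4096))   # with newfs -i 4096: double the inodes
```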
&lt;br /&gt;
== Jail services unable to contact services within same jail ==&lt;br /&gt;
&lt;br /&gt;
This is due to a messed up routing table (only seen it on jail2 before). &lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.228.65/32&lt;br /&gt;
 route add -net 69.55.228.65/32 -iface fxp0 -nostatic -cloning&lt;br /&gt;
 route add 69.55.228.65 -iface lo0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== /dev/null permission resets == &lt;br /&gt;
&lt;br /&gt;
Applies to 4.x jails only&lt;br /&gt;
&lt;br /&gt;
For some reason that I do not understand at all, the /dev/null node in customer jails, after the system is restarted, often reverts permissions to 0600 ... which is bad, because non-root processes cannot redirect to /dev/null anymore.  In fact, a fair number of server daemons will complain and/or fail if this is the case.&lt;br /&gt;
&lt;br /&gt;
So, after a restart, once all systems have come back up, run postboot, which does something like:&lt;br /&gt;
 &lt;br /&gt;
 for f in `df -k | grep /dev/vn | awk &#039;{print $6}&#039;` ; do chmod 0666 $f/dev/null ; done&lt;br /&gt;
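The loop above keys off live &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; output, so it only works on the jail host itself. The same fix can be sketched self-contained, with temp dirs standing in for the /dev/vn mount points (the col0xxxx names here are placeholders):&lt;br /&gt;

```shell
# Stand-in demo of the postboot fix: reset each jail's /dev/null to 0666.
tmp=$(mktemp -d)
for j in col00001 col00002; do
  mkdir -p "$tmp/$j/dev"
  touch "$tmp/$j/dev/null"
  chmod 0600 "$tmp/$j/dev/null"    # simulate the post-reboot reversion
done
for f in "$tmp"/*; do chmod 0666 "$f/dev/null"; done
stat -c '%a' "$tmp/col00001/dev/null"    # prints 666 (GNU stat syntax)
```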
&lt;br /&gt;
Later, after the system is up, if anyone ever complains about their /dev/null permissions for any reason ... really if anyone mentions&lt;br /&gt;
/dev/null in any way, tell them to run:&lt;br /&gt;
&lt;br /&gt;
 chmod 0666 /dev/null&lt;br /&gt;
&lt;br /&gt;
We have ultimately fixed this by adding the chmods to the quad/safe files&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== mknod in a jail == &lt;br /&gt;
&lt;br /&gt;
For a customer who wants to run a chroot name server (named) in a jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /mnt/data1/&amp;lt;user directory&amp;gt;/var/named&lt;br /&gt;
mknod null c 1 1&lt;br /&gt;
mknod random c 2 3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Postfix Problems ==&lt;br /&gt;
&lt;br /&gt;
This hasn&#039;t happened in a long time, probably no longer valid with modern OS&#039;s&lt;br /&gt;
&lt;br /&gt;
Postfix is an alternate MTA - people replace sendmail with it in much the same way that many people also replace sendmail with qmail.  The problem is that if you install postfix inside of a jail, by default it will not work properly - the user will see error messages like this:&lt;br /&gt;
&lt;br /&gt;
incoming mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 mail_queue_enter: create file incoming/298266.55902: File too large&lt;br /&gt;
&lt;br /&gt;
and further, sending mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 postdrop: warning: mail_queue_enter: create file maildrop/627930.56676: File too large&lt;br /&gt;
&lt;br /&gt;
This is very easy to solve.  In fact, the FreeBSD welcome email that is sent out by `jailmake` now contains this block of text:&lt;br /&gt;
&lt;br /&gt;
- if you plan on installing postfix, email us first and ask for a necessary patch.&lt;br /&gt;
&lt;br /&gt;
So, anytime you see a support email that says anything about postfix not working, paste that line from the welcome email into your response to them, and right away, forward the postfix patch to them in an email.&lt;br /&gt;
&lt;br /&gt;
Here are the instructions:&lt;br /&gt;
&lt;br /&gt;
Simply copy the patch into /usr/ports/mail/postfix/files, rename the file to ‘patch-file_limit.c’ and then cd to /usr/ports/mail/postfix and run `make install`.&lt;br /&gt;
&lt;br /&gt;
The postfix patch is the very first email in the support email box, and we never delete or save it because we always want it there to forward to people.  The instructions in the email are very clear, and it solves their problem 100% of the time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems with `find` ==&lt;br /&gt;
&lt;br /&gt;
There was a bug in the FreeBSD vn-filesystem code.  The bug causes the system to crash or hang when someone runs the `find` command inside their vn-backed filesystem (and even though the bug may have been fixed, due to the i/o overhead, we still don&#039;t like find).&lt;br /&gt;
&lt;br /&gt;
Now, this does not always happen - however, any jail machine with more than 20 systems on it will crash every night if every system on it runs the daily periodic script out of their crontab, and updates their locate database with the find command.&lt;br /&gt;
&lt;br /&gt;
(the locate database is a small, FreeBSD specific database that is populated nightly from a cron job, and is then used to provide fast answers to the `locate` command)&lt;br /&gt;
&lt;br /&gt;
In normal systems (32-40 jails on them), all that has to be done is make sure nobody runs the daily periodic.  This means that the file /etc/crontab, instead of looking like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# do daily/weekly/monthly maintenance&lt;br /&gt;
1      3       *       *       *       root    periodic daily&lt;br /&gt;
15     4       *       *       6       root    periodic weekly&lt;br /&gt;
30     5       1       *       *       root    periodic monthly&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Needs to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# DO NOT UNCOMMENT THESE - contact support@johncompanies.com for details&lt;br /&gt;
#1      3       *       *       *       root    periodic daily&lt;br /&gt;
#15     4       *       *       6       root    periodic weekly&lt;br /&gt;
#30     5       1       *       *       root    periodic monthly&lt;br /&gt;
## DO NOT UNCOMMENT THESE ^^^^^^^^&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the problem will generally not occur at all.  New filesystem images are always altered in this way, so all systems are set like this.&lt;br /&gt;
&lt;br /&gt;
However, if you have enough systems running, even casual uses of find, outside of the periodic scripts, can cause the system to crash.  We are seeing this on jail13, which crashes about once per week.&lt;br /&gt;
&lt;br /&gt;
There is not yet any resolution to this problem.&lt;br /&gt;
&lt;br /&gt;
So, if you have a freebsd system crash, see if anyone has uncommented their daily periodic line - you can see them for everyone by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;for f in `df -k | grep vn | awk &#039;{print $6}&#039;` ; do echo `cat $f/etc/crontab | grep &amp;quot;periodic daily&amp;quot;` $f ; done | more&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(all one line, one command)&lt;br /&gt;
&lt;br /&gt;
And then edit those /etc/crontabs and comment them out again.&lt;br /&gt;
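The scanner can be tried out away from a live jail host; here temp files stand in for the per-jail /etc/crontab copies, and only the uncommented entry should be flagged:&lt;br /&gt;

```shell
# Stand-in demo: flag jail crontabs whose "periodic daily" line is uncommented.
tmp=$(mktemp -d)
mkdir -p "$tmp/a/etc" "$tmp/b/etc"
printf '1 3 * * * root periodic daily\n'  > "$tmp/a/etc/crontab"   # uncommented: bad
printf '#1 3 * * * root periodic daily\n' > "$tmp/b/etc/crontab"   # commented: ok
for f in "$tmp"/*; do
  grep -H 'periodic daily' "$f/etc/crontab" | grep -v '^[^:]*:#' || true
done
```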
&lt;br /&gt;
&lt;br /&gt;
== Problems un-mounting - and with mount_null’s ==&lt;br /&gt;
&lt;br /&gt;
If you cannot unmount a filesystem because it says the filesystem is busy, it is usually due to one of the following:&lt;br /&gt;
&lt;br /&gt;
a) the jail is still running&lt;br /&gt;
&lt;br /&gt;
b) you are actually in that directory, even though the jail is stopped&lt;br /&gt;
&lt;br /&gt;
c) there are still dev, null_mount or linprocfs mount points mounted inside that directory.&lt;br /&gt;
&lt;br /&gt;
d) you are trying to umount a null_mount whose path is very long and you get an error like “No such file or directory”; this is an OS bug where the dir name gets truncated. No known fix&lt;br /&gt;
&lt;br /&gt;
e) there are still files open somewhere inside the dir. Use &amp;lt;tt&amp;gt;fstat | grep &amp;lt;cid&amp;gt;&amp;lt;/tt&amp;gt; to find the process that has files open&lt;br /&gt;
&lt;br /&gt;
f) Starting with 6.x, the jail mechanism does a poor job of keeping track of processes running in a jail, and if it thinks there are still procs running, it will refuse to umount the disk. If this is happening you should see a low number in the #REF column when you run jls. In this case you &#039;&#039;can&#039;&#039; safely &amp;lt;tt&amp;gt;umount -f&amp;lt;/tt&amp;gt; the mount. &lt;br /&gt;
&lt;br /&gt;
Please note: if you forcibly unmount a (4.x) filesystem that has null_mounts&lt;br /&gt;
still mounted in it, the system &#039;&#039;&#039;will crash&#039;&#039;&#039; within 10-15 mins.&lt;br /&gt;
&lt;br /&gt;
== Misc jail Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space on jail2, jail8, jail9, a couple jails on jail17, jail4, jail12 and jail18.&lt;br /&gt;
Even though the vn file shows 4G size, it doesn’t actually occupy that amount of space on the disk. So be careful not to fill up drives where we’re overselling – use oversellcheck to confirm you’re not oversold by more than 10G.&lt;br /&gt;
There are other truncated jails; they are generally noted in the file /root/truncated on the root system&lt;br /&gt;
&lt;br /&gt;
The act of moving a truncated vn to another system un-does the truncating: the truncated vn is filled with 0’s and occupies the physical disk space for which it’s configured. So, you should use dumpremote to preserve the truncation.&lt;br /&gt;
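The reason a copy un-does the truncation is that a truncated vn file is sparse: the apparent size is all there, but unwritten blocks take no disk space until something (like a naive copy) writes them out. A minimal illustration using GNU coreutils (truncate/stat/du) on a 1 GB temp file:&lt;br /&gt;

```shell
# A sparse file reports its full apparent size but occupies almost no disk blocks.
f=$(mktemp)
truncate -s 1G "$f"          # extend to 1 GB without writing any data
stat -c '%s' "$f"            # apparent size in bytes: 1073741824
du -k "$f" | cut -f1         # blocks actually allocated: ~0 KB
```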
&lt;br /&gt;
* if you are getting disk full messages for a BSD customer, it&#039;s fairly safe to clear out their /usr/ports/distfiles dir&lt;br /&gt;
* 4.x: ps and top can only be run by root in these jails.  Done on purpose:&lt;br /&gt;
As for `ps` and `top` - non-root users can run them, just not successfully because we have locked the permissions on /dev/mem and /dev/kmem to be root-readable only.  That is why non-root users cannot successfully run `ps` and `top`.&lt;br /&gt;
* user quotas do not work on freebsd jails - you cannot set up quotas at all, and that&#039;s that.&lt;br /&gt;
* You cannot inject a process into a 4.x jail, only HUP running processes. 6.x and onward you can with jexec&lt;br /&gt;
* jails see base machine’s uptime/load when running top/w&lt;br /&gt;
* if someone is unable to get in (can’t ping, etc.), see if they were blocked by castle (DoS), and see if their IP is on the system (post-reboot it may have been lost because it wasn’t in rc.conf). preboot should catch that.&lt;br /&gt;
* in FreeBSD you can’t su to root unless you belong to the wheel group – so if you remove your acct and set up a new one, we have to add it to the wheel group (add to /etc/group)&lt;br /&gt;
* dmesg from the underlying sys is seen in the customer’s dmesg in a jail&lt;br /&gt;
* a popper process going crazy @ 40% for 10 min = someone who leaves mail on the server&lt;br /&gt;
* Don’t force umounts on 4.x jails – it crashes the machine, generally ok on newer machines (running md)&lt;br /&gt;
* Good book for admin http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?userid=t824VyRAYz&amp;amp;isbn=0596005164&amp;amp;itm=2&lt;br /&gt;
* Self-sign ssl cert http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#selfcert&lt;br /&gt;
* conversation with Glenn about semaphores and pgsql&lt;br /&gt;
&amp;lt;pre&amp;gt;SDBoody: hey, these are valid amounts/figures/increments right:&lt;br /&gt;
 kern.ipc.semmni=1280&lt;br /&gt;
 kern.ipc.semmns=1280&lt;br /&gt;
 &lt;br /&gt;
gr8feen: probably... I always forget exactly what those are, so I usually have to look them up&lt;br /&gt;
SDBoody: semaphores&lt;br /&gt;
SDBoody: i took the current 1024 and added 256 to them&lt;br /&gt;
SDBoody: need more for pgsql&lt;br /&gt;
gr8feen: I meant the mni and mns parts...&lt;br /&gt;
gr8feen: some of those are not ones you just want to add to...   hang on a sec and I&#039;ll look them up..&lt;br /&gt;
gr8feen: what&#039;s semmsl set to?&lt;br /&gt;
SDBoody: kern.ipc.semmsl: 1024&lt;br /&gt;
 &lt;br /&gt;
SDBoody: kern.ipc.msgseg: 2048 &lt;br /&gt;
 kern.ipc.msgssz: 8 &lt;br /&gt;
 kern.ipc.msgtql: 40 &lt;br /&gt;
 kern.ipc.msgmnb: 2048 &lt;br /&gt;
 kern.ipc.msgmni: 40 &lt;br /&gt;
 kern.ipc.msgmax: 16384 &lt;br /&gt;
 kern.ipc.semaem: 16384 &lt;br /&gt;
 kern.ipc.semvmx: 65534 &lt;br /&gt;
 kern.ipc.semusz: 152 &lt;br /&gt;
 kern.ipc.semume: 10 &lt;br /&gt;
 kern.ipc.semopm: 100 &lt;br /&gt;
 kern.ipc.semmsl: 1024 &lt;br /&gt;
 kern.ipc.semmnu: 512 &lt;br /&gt;
 kern.ipc.semmns: 1024 &lt;br /&gt;
 kern.ipc.semmni: 1024 &lt;br /&gt;
 kern.ipc.semmap: 768 &lt;br /&gt;
 kern.ipc.shm_allow_removed: 0 &lt;br /&gt;
 kern.ipc.shm_use_phys: 1 &lt;br /&gt;
 kern.ipc.shmall: 262144 &lt;br /&gt;
 kern.ipc.shmseg: 256 &lt;br /&gt;
 kern.ipc.shmmni: 784 &lt;br /&gt;
 kern.ipc.shmmin: 1 &lt;br /&gt;
 kern.ipc.shmmax: 536870912 &lt;br /&gt;
 kern.ipc.maxsockets: 25600 &lt;br /&gt;
 &lt;br /&gt;
gr8feen: ok...msl is max per id, mni is max ids, mns is max number of semaphores... so you probably want something like mns = mni * msl&lt;br /&gt;
gr8feen: which one did you run out of?&lt;br /&gt;
SDBoody: not sure how to tell- ipcs shows the sems in use add up to 1024&lt;br /&gt;
SDBoody: there are 59 entries&lt;br /&gt;
gr8feen: I&#039;m assuming you tried to start postgres and it failed?&lt;br /&gt;
SDBoody: yes&lt;br /&gt;
gr8feen: it should have logged why, somewhere..&lt;br /&gt;
gr8feen: if I recall, it&#039;ll tell you which one it ran out of&lt;br /&gt;
SDBoody: &amp;gt; DETAIL:  Failed system call was semget(1, 17, 03600).&lt;br /&gt;
 &lt;br /&gt;
gr8feen: so it wanted an id with 17 semaphores...I&#039;d start by making mns = 17*mni and leave mni and mnl set to what they are now and see what it does&lt;br /&gt;
SDBoody: i think mni is plenty high&lt;br /&gt;
SDBoody: ok, more reasonable 17408&lt;br /&gt;
gr8feen: yeah...just change that one and see how it goes..&lt;br /&gt;
SDBoody: and leave mni alone at 1024?&lt;br /&gt;
gr8feen: yeah...mni id the max number of ids...but if it&#039;s trying to get something like 17 per id, your going to hit mns before you hit anything else&lt;br /&gt;
SDBoody: right, but doesn&#039;t hurt to have it that high (assuming)&lt;br /&gt;
&lt;br /&gt;
gr8feen: not really...  I think those get allocated out of ram that you cant page out, but it&#039;s still such a small amount that it really doesn&#039;t matter&lt;br /&gt;
SDBoody: looks like that worked, thx!&lt;br /&gt;
gr8feen: cool&lt;br /&gt;
&lt;br /&gt;
see semaphores: ipcs -a -s&amp;lt;/pre&amp;gt;&lt;br /&gt;
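The sizing rule Glenn lands on (semmns = semaphores-per-id × semmni) is simple arithmetic; the 17 comes from the failed &amp;lt;tt&amp;gt;semget(1, 17, 03600)&amp;lt;/tt&amp;gt; call above:&lt;br /&gt;

```shell
# semmns should cover (semaphores per id) * (max ids); semmni stays at 1024.
semmni=1024
per_id=17
echo $((per_id * semmni))    # prints 17408, the value settled on above
```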
&lt;br /&gt;
= FreeBSD VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
These files are located in /usr/local/jail/rc.d and /usr/local/jail/bin&lt;br /&gt;
&lt;br /&gt;
== jailmake ==&lt;br /&gt;
&lt;br /&gt;
Applies to 7.x+ &lt;br /&gt;
On older systems syntax differs, run jailmake once to see.&lt;br /&gt;
&lt;br /&gt;
Note: this procedure differs on mx2 which is 7.x but still uses gvinum&lt;br /&gt;
&lt;br /&gt;
#	run js to figure out which md’s are in use, which disk has enough space, IP to put it on&lt;br /&gt;
#	use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
#	copy over dir, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;Usage: jailmake IP[,IP] CID disk[1|2|3] md# hostname shorthost ipfw# email [size in GB]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex: &lt;br /&gt;
&lt;br /&gt;
 Jail2# jailmake 69.55.234.66 col01334 3 97 vps.bsd.it vps 1334 fb@bsd.it&lt;br /&gt;
&lt;br /&gt;
== jailps ==&lt;br /&gt;
 jailps [hostname]&lt;br /&gt;
DEPRECATED FOR jps: displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname of the jail you wish to query. If you don’t &lt;br /&gt;
supply an argument, all processes on the machine are listed and grouped by jail. &lt;br /&gt;
&lt;br /&gt;
== jps ==&lt;br /&gt;
 jps [hostname]&lt;br /&gt;
displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname or ID of the jail you wish to query. &lt;br /&gt;
&lt;br /&gt;
== jailkill ==&lt;br /&gt;
 jailkill &amp;lt;hostname&amp;gt;&lt;br /&gt;
stops all process running in a jail.&lt;br /&gt;
&lt;br /&gt;
You can also run:&lt;br /&gt;
 jailkill &amp;lt;JID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== problems ===&lt;br /&gt;
Occasionally you will hit an issue where a jail will not kill off:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailkill www.domain.com&lt;br /&gt;
www.domain.com .. killed: none&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
This is because no processes are running under that hostname.  You cannot use jailps.pl either:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailps www.domain.com&lt;br /&gt;
www.domain.com doesn’t exist on this server&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reasons for this are usually:&lt;br /&gt;
* the jail is no longer running&lt;br /&gt;
&lt;br /&gt;
* the jail&#039;s hostname has changed&lt;br /&gt;
In this case, &lt;br /&gt;
&lt;br /&gt;
&amp;gt;=6.x: run a &amp;lt;tt&amp;gt;jls|grep &amp;lt;jail&#039;s IP&amp;gt;&amp;lt;/tt&amp;gt; to find the correct hostname, then update the quad file, then kill the jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;6.x: the first step is to cat their /etc/rc.conf file to see if you can tell what they set the new hostname to.  This very often works.  For example:&lt;br /&gt;
&lt;br /&gt;
 cat /mnt/data2/198.78.65.136-col00261-DIR/etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
But maybe they set the hostname with the hostname command, and the original hostname is still in /etc/rc.conf.&lt;br /&gt;
&lt;br /&gt;
The welcome email clearly states that they should tell us if they change their hostname, so there is no problem in just emailing them and asking them what they set the new hostname to.&lt;br /&gt;
&lt;br /&gt;
Once you know the new hostname OR if a customer simply emails to inform you that they have set the hostname to something different, you need to edit the quad and safe files that their system is in to input the new hostname.&lt;br /&gt;
&lt;br /&gt;
However, if push comes to shove and you cannot find out the hostname from them or from their system, then you need to start doing some detective work.&lt;br /&gt;
&lt;br /&gt;
The easiest thing to do is run jailps looking for a hostname similar to their original hostname. Or you could get into the /bin/sh shell by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then looking at every hostname of every process:&lt;br /&gt;
&lt;br /&gt;
 for f in `ls /proc` ; do cat /proc/$f/status ; done&lt;br /&gt;
&lt;br /&gt;
and scanning for a hostname that is either similar to their original hostname, or that you don&#039;t see in any of the quad safe files.&lt;br /&gt;
&lt;br /&gt;
This is very brute force though, and it is possible that catting every file in /proc is dangerous - I don&#039;t recommend it.  A better thing would be to identify any processes that you know belong to this system – perhaps the reason you are trying to find this system is because they are running something bad - and just catting the status from only that PID.&lt;br /&gt;
&lt;br /&gt;
Somewhere there’s a jail where there may be 2 systems named www.  Look at /etc/rc.conf and make sure they’re both really www. If they are, jailkill www, then jailps www to make sure it’s not running.  Then immediately restart the other one under its fqdn (as found from a reverse nslookup)&lt;br /&gt;
&lt;br /&gt;
* on &amp;gt;=6.x the hostname may not yet be hashed:&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9 /# jls&lt;br /&gt;
 JID Hostname                    Path                                  IP Address(es)&lt;br /&gt;
   1 bitnet.dgate.org            /mnt/data1/69.55.232.50-col02094-DIR  69.55.232.50&lt;br /&gt;
   2 ns3.hctc.net                /mnt/data1/69.55.234.52-col01925-DIR  69.55.234.52&lt;br /&gt;
   3 bsd1                        /mnt/data1/69.55.232.44-col00155-DIR  69.55.232.44&lt;br /&gt;
   4 let2.bbag.org               /mnt/data1/69.55.230.92-col00202-DIR  69.55.230.92&lt;br /&gt;
   5 post.org                    /mnt/data2/69.55.232.51-col02095-DIR  69.55.232.51 ...&lt;br /&gt;
   6 ns2                         /mnt/data1/69.55.232.47-col01506-DIR  69.55.232.47 ...&lt;br /&gt;
   7 arlen.server.net            /mnt/data1/69.55.232.52-col01171-DIR  69.55.232.52&lt;br /&gt;
   8 deskfood.com                /mnt/data1/69.55.232.71-col00419-DIR  69.55.232.71&lt;br /&gt;
   9 mirage.confluentforms.com   /mnt/data1/69.55.232.54-col02105-DIR  69.55.232.54 ...&lt;br /&gt;
  10 beachmember.com             /mnt/data1/69.55.232.59-col02107-DIR  69.55.232.59&lt;br /&gt;
  11 www.agottem.com             /mnt/data1/69.55.232.60-col02109-DIR  69.55.232.60&lt;br /&gt;
  12 sdhobbit.myglance.org       /mnt/data1/69.55.236.82-col01708-DIR  69.55.236.82&lt;br /&gt;
  13 ns1.jnielsen.net            /mnt/data1/69.55.234.48-col00204-DIR  69.55.234.48 ...&lt;br /&gt;
  14 ymt.rollingegg.net          /mnt/data2/69.55.236.71-col01678-DIR  69.55.236.71&lt;br /&gt;
  15 verse.unixlore.net          /mnt/data1/69.55.232.58-col02131-DIR  69.55.232.58&lt;br /&gt;
  16 smcc-mail.org               /mnt/data2/69.55.232.68-col02144-DIR  69.55.232.68&lt;br /&gt;
  17 kasoutsuki.w4jdh.net        /mnt/data2/69.55.232.46-col02147-DIR  69.55.232.46&lt;br /&gt;
  18 dili.thium.net              /mnt/data2/69.55.232.80-col01901-DIR  69.55.232.80&lt;br /&gt;
  20 www.tekmarsis.com           /mnt/data2/69.55.232.66-col02155-DIR  69.55.232.66&lt;br /&gt;
  21 vps.yoxel.net               /mnt/data2/69.55.236.67-col01673-DIR  69.55.236.67&lt;br /&gt;
  22 smitty.twitalertz.com       /mnt/data2/69.55.232.84-col02153-DIR  69.55.232.84&lt;br /&gt;
  23 deliver4.klatha.com         /mnt/data2/69.55.232.67-col02160-DIR  69.55.232.67&lt;br /&gt;
  24 nideffer.com                /mnt/data2/69.55.232.65-col00412-DIR  69.55.232.65&lt;br /&gt;
  25 usa.hanyuan.com             /mnt/data2/69.55.232.57-col02163-DIR  69.55.232.57&lt;br /&gt;
  26 daifuku.ppbh.com            /mnt/data2/69.55.236.91-col01720-DIR  69.55.236.91&lt;br /&gt;
  27 collins.greencape.net       /mnt/data2/69.55.232.83-col01294-DIR  69.55.232.83&lt;br /&gt;
  28 ragebox.com                 /mnt/data2/69.55.230.104-col01278-DIR 69.55.230.104&lt;br /&gt;
  29 outside.mt.net              /mnt/data2/69.55.232.72-col02166-DIR  69.55.232.72&lt;br /&gt;
  30 vps.payneful.ca             /mnt/data2/69.55.234.98-col01999-DIR  69.55.234.98&lt;br /&gt;
  31 higgins                     /mnt/data2/69.55.232.87-col02165-DIR  69.55.232.87 ...&lt;br /&gt;
  32 ozymandius                  /mnt/data2/69.55.228.96-col01233-DIR  69.55.228.96&lt;br /&gt;
  33 trusted.realtors.org        /mnt/data2/69.55.238.72-col02170-DIR  69.55.238.72&lt;br /&gt;
  34 jc1.flanderous.com          /mnt/data2/69.55.239.22-col01504-DIR  69.55.239.22&lt;br /&gt;
  36 guppylog.com                /mnt/data2/69.55.238.73-col00036-DIR  69.55.238.73&lt;br /&gt;
  40 haliohost.com               /mnt/data2/69.55.234.41-col01916-DIR  69.55.234.41 ...&lt;br /&gt;
  41 satyr.jorge.cc              /mnt/data1/69.55.232.70-col01963-DIR  69.55.232.70&lt;br /&gt;
jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
ERROR: jail_: jail &amp;quot;satyr,jorge,cc&amp;quot; not found&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note how it&#039;s saying &amp;lt;tt&amp;gt;satyr,jorge,cc&amp;lt;/tt&amp;gt; is not found, and not &amp;lt;tt&amp;gt;satyr.jorge.cc&amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The jail subsystem tracks jails using comma-delimited hostnames. That mapping is rebuilt every few hours by cron:&lt;br /&gt;
&lt;br /&gt;
 jail9 /# crontab -l&lt;br /&gt;
 0 0,6,12,18 * * * /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
So if we run this manually:&lt;br /&gt;
 jail9 /# /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
Then kill the jail:&lt;br /&gt;
 jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
 successfully killed: satyr,jorge,cc&lt;br /&gt;
&lt;br /&gt;
It worked.&lt;br /&gt;
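When you need to predict the comma form jailkill will report, it appears to be nothing more than the hostname with dots replaced by commas (an assumption based on the output above; &amp;lt;tt&amp;gt;jail_name&amp;lt;/tt&amp;gt; is a hypothetical helper, not an existing script):&lt;br /&gt;

```shell
# Translate a jail hostname to the comma-delimited form the jail subsystem uses.
jail_name() { printf '%s\n' "$1" | tr '.' ','; }
jail_name satyr.jorge.cc    # prints satyr,jorge,cc
```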
&lt;br /&gt;
&lt;br /&gt;
If you ever see this when trying to kill a jail:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# jailkill e-scribe.com&lt;br /&gt;
killing JID: 6 hostname: e-scribe.com&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailkill|jailkill]]&amp;lt;/tt&amp;gt; probably got lost trying to kill off the jail. Just ctrl-c the jailkill process, then run a jailps on the hostname, and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; any process which is still running. Keep running jailps and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; till all processes are gone.&lt;br /&gt;
&lt;br /&gt;
== jailpsall ==&lt;br /&gt;
 jailpsall&lt;br /&gt;
will run a jailps on all jails configured in the quad files (this is different from&lt;br /&gt;
jailps with no arguments as it won’t help you find a “hidden” system)&lt;br /&gt;
&lt;br /&gt;
== jailpsw ==&lt;br /&gt;
 jailpsw&lt;br /&gt;
will run a jailps with an extra -w to provide wider output&lt;br /&gt;
&lt;br /&gt;
== jt (&amp;gt;=7.x) ==&lt;br /&gt;
 jt&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;gt;=7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
a wrapper for top displaying processes on the server and which jail owns them. Constantly updates, like top. &lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;lt;7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== stopjail ==&lt;br /&gt;
 stopjail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will jailkill, umount and vnconfig -u a jail. If passed an optional 2nd&lt;br /&gt;
argument, it will not exit before umounting and un-vnconfig’ing in the event&lt;br /&gt;
jailkill returns no processes killed. This is useful if you just want to umount&lt;br /&gt;
and vnconfig -u a jail you’ve already killed. It is intelligent in that it won’t &lt;br /&gt;
try to umount or vnconfig -u if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== startjail ==&lt;br /&gt;
 startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
this will start vnconfig, mount (including linprocfs and null-mounts), and start a jail.&lt;br /&gt;
Essentially, it reads the jail’s relevant block from the right quad file and executes it.&lt;br /&gt;
It is intelligent in that it won’t try to mount or vnconfig if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== jpid ==&lt;br /&gt;
 jpid &amp;lt;pid&amp;gt;&lt;br /&gt;
displays information about a process – including which jail owns it.&lt;br /&gt;
It’s the equivalent of running cat /proc/&amp;lt;pid&amp;gt;/status&lt;br /&gt;
&lt;br /&gt;
== canceljail ==&lt;br /&gt;
 canceljail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will stop a jail (the equivalent of stopjail), check for backups (offer to remove them &lt;br /&gt;
from the backup server and the backup.config), rename the vnfile, remove the dir, and &lt;br /&gt;
edit quad/safe. If passed an optional 2nd argument, it will not exit upon failing to kill&lt;br /&gt;
any processes owned by the jail. This is useful if you just want to cancel a jail which &lt;br /&gt;
is already stopped.&lt;br /&gt;
&lt;br /&gt;
== jls ==&lt;br /&gt;
 jls [-v]&lt;br /&gt;
Lists all jails running:&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF IP Address      Hostname                     Path&lt;br /&gt;
 101  135 69.55.224.148   mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#REF is the number of references or procs(?) running&lt;br /&gt;
&lt;br /&gt;
Running with -v will give you all IPs assigned to each jail (7.2 up)&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF Hostname                     Path                                  IP Address(es)&lt;br /&gt;
 101  139 mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR 69.55.224.148 69.55.234.85&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== startalljails ==&lt;br /&gt;
 startalljails&lt;br /&gt;
7.2+ only. This will parse through quad1 and start all jails. It utilizes lockfiles so it won’t try to start a jail more than once- therefore multiple instances can be running in parallel without fear of starting a jail twice. If a jail startup gets stuck, you can ^C without fear of killing the script. IMPORTANT- before running startalljails you should make sure you ran preboot once as it will clear out all the lockfiles and enable startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== aaccheck.sh ==&lt;br /&gt;
 aaccheck.sh&lt;br /&gt;
displays the output of container list and task list from aaccli&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update jail scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== buildsafe ==&lt;br /&gt;
 buildsafe&lt;br /&gt;
creates safe files based on quads (automatically removing the fsck’s). This will destructively overwrite safe files&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be setup as a cronjob to watch processes on a jail when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== checkprio.pl ==&lt;br /&gt;
 checkprio.pl&lt;br /&gt;
will look for any process (other than the current shell’s csh, sh, sshd procs) with a non-normal priority and normalize it&lt;br /&gt;
&lt;br /&gt;
== diskusagemon == &lt;br /&gt;
 diskusagemon &amp;lt;mount point&amp;gt; &amp;lt;1k blocks&amp;gt;&lt;br /&gt;
watches a mount point’s disk use, when it reaches the level specified in the 2nd argument,&lt;br /&gt;
it exits. This is useful when doing a restore and you want to be paged as it’s nearing completion.&lt;br /&gt;
Best used as: &amp;lt;tt&amp;gt;diskusagemon /asd/asd 1234; pagexxx&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dumprestore ==&lt;br /&gt;
 dumprestore &amp;lt;dumpfile&amp;gt;&lt;br /&gt;
this is a perl expect script which automatically enters ‘1’ and ‘y’. It seems to cause restore to fail&lt;br /&gt;
to set owner permissions on large restores.&lt;br /&gt;
&lt;br /&gt;
== g ==&lt;br /&gt;
 g &amp;lt;search&amp;gt;&lt;br /&gt;
greps the quad/safe files for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gather.pl ==&lt;br /&gt;
 gather.pl&lt;br /&gt;
gathers up data about jails configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== ipfwbackup ==&lt;br /&gt;
 ipfwbackup&lt;br /&gt;
writes ipfw traffic count data to a logfile&lt;br /&gt;
&lt;br /&gt;
== ipfwreset ==&lt;br /&gt;
 ipfwreset&lt;br /&gt;
writes ipfw traffic count data to a logfile and resets counters to 0&lt;br /&gt;
&lt;br /&gt;
== js ==&lt;br /&gt;
 js&lt;br /&gt;
output varies by OS version, but generally provides information about the base jail:&lt;br /&gt;
- which vn’s are in use&lt;br /&gt;
- disk usage&lt;br /&gt;
- info about the contents of quads&lt;br /&gt;
- the # of inodes represented by the jails contained in the group (133.2 in the example below), and how many jails per data mount, as well as subtotals&lt;br /&gt;
- ips bound to the base machine but not in use by a jail&lt;br /&gt;
- free gvinum volumes, or unused vn’s or used md’s&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/local/jail/rc.d/quad1:&lt;br /&gt;
        /mnt/data1 133.2 (1)&lt;br /&gt;
        /mnt/data2 1040.5 (7)&lt;br /&gt;
        total 1173.7 (8)&lt;br /&gt;
/usr/local/jail/rc.d/quad2:&lt;br /&gt;
        /mnt/data1 983.4 (6)&lt;br /&gt;
        total 983.4 (6)&lt;br /&gt;
/usr/local/jail/rc.d/quad3:&lt;br /&gt;
        /mnt/data1 693.4 (4)&lt;br /&gt;
        /mnt/data2 371.6 (3)&lt;br /&gt;
        total 1065 (7)&lt;br /&gt;
/usr/local/jail/rc.d/quad4:&lt;br /&gt;
        /mnt/data1 466.6 (3)&lt;br /&gt;
        /mnt/data2 882.2 (5)&lt;br /&gt;
        total 1348.8 (8)&lt;br /&gt;
/mnt/data1: 2276.6 (14)&lt;br /&gt;
/mnt/data2: 2294.3 (15)&lt;br /&gt;
&lt;br /&gt;
Available IPs:&lt;br /&gt;
69.55.230.11 69.55.230.13 69.55.228.200&lt;br /&gt;
&lt;br /&gt;
Available volumes:&lt;br /&gt;
v78 /mnt/data2 2G&lt;br /&gt;
v79 /mnt/data2 2G&lt;br /&gt;
v80 /mnt/data2 2G&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg  - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== makevirginjail ==&lt;br /&gt;
 makevirginjail&lt;br /&gt;
Only on some systems, makes an empty jail (doesn&#039;t do restore step)&lt;br /&gt;
&lt;br /&gt;
== mb == &lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== notify.sh ==&lt;br /&gt;
 notify.sh&lt;br /&gt;
emails reboot@johncompanies.com – intended to be called at boot time to alert us to a machine which panics and reboots and isn’t caught by bb or castle.&lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatch ==&lt;br /&gt;
 orphanedbackupwatch&lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to delete them&lt;br /&gt;
&lt;br /&gt;
== postboot ==&lt;br /&gt;
 postboot&lt;br /&gt;
to be run after a machine reboot and quad/safe’s are done executing. It will:&lt;br /&gt;
* do chmod 666 on each jail’s /dev/null&lt;br /&gt;
* add ipfw counts&lt;br /&gt;
* run jailpsall (so you can see if a configured jail isn’t running)&lt;br /&gt;
&lt;br /&gt;
== preboot ==&lt;br /&gt;
 preboot&lt;br /&gt;
to be run before running quad/safe – checks for misconfigurations: &lt;br /&gt;
* a jail configured in a quad but not a safe&lt;br /&gt;
* a jail is listed more than once in a quad&lt;br /&gt;
* the ip assigned to a jail isn’t configured on the machine&lt;br /&gt;
* alias numbering skips in the rc.conf (resulting in the above)&lt;br /&gt;
* orphaned vnfile&#039;s that aren&#039;t mentioned in a quad/safe&lt;br /&gt;
* ip mismatches between dir/vnfile name and the jail’s ip&lt;br /&gt;
* dir/vnfiles&#039;s in quad/safe that don’t exist &lt;br /&gt;
&lt;br /&gt;
== quadanalyze.pl ==&lt;br /&gt;
 quadanalyze.pl&lt;br /&gt;
called by js, produces the info (seen above with js explanation) about the contents of quad (inode count, # of jails, etc.)&lt;br /&gt;
&lt;br /&gt;
== rsync.backup ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== taskdone ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was executed as the subject&lt;br /&gt;
&lt;br /&gt;
== topten ==&lt;br /&gt;
 topten&lt;br /&gt;
summarizes the top 10 traffic users (called by ipfwreset)&lt;br /&gt;
&lt;br /&gt;
== trafficgather.pl ==&lt;br /&gt;
 trafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by jail to support@johncompanies.com and payments@johncompanies.com. The optional argument is a year and month (which must be in the past); if not passed, it assumes last month. Relies on traffic logs created by ipfwreset and ipfwbackup&lt;br /&gt;
&lt;br /&gt;
== trafficwatch.pl ==&lt;br /&gt;
 trafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a jail reaches the warning level (35G) and the limit (40G). We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== trafstats ==&lt;br /&gt;
 trafstats&lt;br /&gt;
writes ipfw traffic usage info by jail to a file called jc_traffic_dump in each jail’s / dir&lt;br /&gt;
&lt;br /&gt;
== truncate_jailmake ==&lt;br /&gt;
 truncate_jailmake&lt;br /&gt;
a version of jailmake which creates truncated vnfiles.&lt;br /&gt;
&lt;br /&gt;
== vb ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/bin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs (freebsd) ==&lt;br /&gt;
 vs&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/safe&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vq (freebsd) ==&lt;br /&gt;
 vq&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/quad&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dumpremote ==&lt;br /&gt;
 dumpremote &amp;lt;user@machine&amp;gt; &amp;lt;/remote/location/file-dump&amp;gt; &amp;lt;vnX&amp;gt;&lt;br /&gt;
ex: dumpremote user@10.1.4.117 /mnt/data3/remote.echoditto.com-dump 7&lt;br /&gt;
this will dump a vn filesystem to a remote machine and location&lt;br /&gt;
&lt;br /&gt;
== oversellcheck ==&lt;br /&gt;
 oversellcheck&lt;br /&gt;
displays how much a disk is oversold or undersold taking into account truncated vn files. Only for use on 4.x systems&lt;br /&gt;
&lt;br /&gt;
== mvbackups (freebsd) ==&lt;br /&gt;
 mvbackups &amp;lt;dir&amp;gt; (1.1.1.1-col00001-DIR) &amp;lt;target_machine&amp;gt; (jail1) &amp;lt;target_dir&amp;gt; (data1)&lt;br /&gt;
moves backups from one location to another on the backup server, provides the option to remove entries from the current backup.config, and gives you a simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== jailnice ==&lt;br /&gt;
 jailnice &amp;lt;hostname&amp;gt;&lt;br /&gt;
applies &amp;lt;tt&amp;gt;renice 19 [PID]&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;rtprio 31 -[PID]&amp;lt;/tt&amp;gt; to each process in the given jail&lt;br /&gt;
&lt;br /&gt;
== dumpremoterestore ==&lt;br /&gt;
 dumpremoterestore &amp;lt;device&amp;gt; &amp;lt;ip of target machine&amp;gt; &amp;lt;dir on target machine&amp;gt;&lt;br /&gt;
ex: dumpremoterestore /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&lt;br /&gt;
dumps a device and restores it to a directory on a remote machine. Requires that you enable root ssh on the &lt;br /&gt;
remote machine.&lt;br /&gt;
&lt;br /&gt;
== psj ==&lt;br /&gt;
 psj&lt;br /&gt;
shows just the procs running on the base system – a ps auxw but without jail’d procs present&lt;br /&gt;
&lt;br /&gt;
== perc5iraidchk ==&lt;br /&gt;
 perc5iraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2950 systems with Perc5/6 controllers&lt;br /&gt;
&lt;br /&gt;
== perc4eraidchk ==&lt;br /&gt;
 perc4eraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2850 systems with Perc4e/Di controllers&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS =&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vm) ==&lt;br /&gt;
&lt;br /&gt;
This applies only to new virts &amp;gt;= 4.x&lt;br /&gt;
&lt;br /&gt;
grab ip from ipmap (if opened from the pending cust screen it should take you to the right block). You can also run vzlist -a to see what block is in use, generally. Try to find an IP that&#039;s in the same block of class C IP&#039;s already on the box.&lt;br /&gt;
&lt;br /&gt;
1. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
  also, ping the IP address.&lt;br /&gt;
&lt;br /&gt;
2. put CT on whichever partition has more space&lt;br /&gt;
  df -h&lt;br /&gt;
&lt;br /&gt;
3.  vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; ) &lt;br /&gt;
 vm col00009 69.55.230.238 centos.testdave.com /vz1 dsmith@johncompanies.com centos-6-x86_64 LM&lt;br /&gt;
&lt;br /&gt;
4. copy veid, dir, ip and password to pending customer screen. activate customer&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vemakexxx) ==&lt;br /&gt;
&lt;br /&gt;
This applies to older virts with old templates. This should probably not be used at all anymore.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. look thru hist for ip&lt;br /&gt;
&lt;br /&gt;
2. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
&lt;br /&gt;
3. put ve on whichever partition has more space&lt;br /&gt;
 vemakerh9 &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; &amp;lt;hostname&amp;gt; &amp;lt;mount&amp;gt; &amp;lt;email&amp;gt; [gb disk]; &amp;lt;256|384|512&amp;gt; &amp;lt;veid&amp;gt;&lt;br /&gt;
 vemakerh9 866 69.55.226.109 ngentu.com /vz1 ayo@ngantu.com,asd@asd.com 5; 256 866&lt;br /&gt;
&lt;br /&gt;
4. copy (veid), dir, and ip to pending customer screen (pass set to p455agfa)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: We use VEID (Virtual Environment ID) and CTID (Container ID) interchangeably. Similarly, VE and CT. They mean the same thing.&lt;br /&gt;
VZPP = VirtuoZzo Power Panel (the control panel for each CT)&lt;br /&gt;
&lt;br /&gt;
All linux systems exist in /vz, /vz1 or /vz2 - since each linux machine holds roughly 60-90 customers, there will be roughly 30-45 in each partition.&lt;br /&gt;
&lt;br /&gt;
The actual filesystem of the system in question is in:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/private/(VEID)&lt;br /&gt;
&lt;br /&gt;
Where VEID is the identifier for that system - an all-numeric string larger than 100.&lt;br /&gt;
&lt;br /&gt;
The actual mounted and running systems are in the corresponding:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/root/(VEID)&lt;br /&gt;
&lt;br /&gt;
But we rarely interact with any system from this mount point.&lt;br /&gt;
&lt;br /&gt;
You should never need to touch the root portion of their system – however you can traverse their filesystem by going to &amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/root&amp;lt;/tt&amp;gt; (&amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/fs/root&amp;lt;/tt&amp;gt; on 4.x systems). The root of their filesystem is that directory, and their entire system is underneath it.&lt;br /&gt;
&lt;br /&gt;
Every VE has a startup script in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts&amp;lt;/tt&amp;gt;  (which is symlinked as &amp;lt;tt&amp;gt;/vzconf&amp;lt;/tt&amp;gt; on all systems) - the VE startup script is simply named &amp;lt;tt&amp;gt;(VEID).conf&amp;lt;/tt&amp;gt; - it contains all the system parameters for that VE:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Configuration file generated by vzsplit for 60 VE&lt;br /&gt;
# on HN with total amount of physical mem 2011 Mb&lt;br /&gt;
&lt;br /&gt;
VERSION=&amp;quot;2&amp;quot;&lt;br /&gt;
CLASSID=&amp;quot;2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ONBOOT=&amp;quot;yes&amp;quot;&lt;br /&gt;
&lt;br /&gt;
KMEMSIZE=&amp;quot;8100000:8200000&amp;quot;&lt;br /&gt;
LOCKEDPAGES=&amp;quot;322:322&amp;quot;&lt;br /&gt;
PRIVVMPAGES=&amp;quot;610000:615000&amp;quot;&lt;br /&gt;
SHMPAGES=&amp;quot;33000:34500&amp;quot;&lt;br /&gt;
NUMPROC=&amp;quot;410:415&amp;quot;&lt;br /&gt;
PHYSPAGES=&amp;quot;0:2147483647&amp;quot;&lt;br /&gt;
VMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
OOMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
NUMTCPSOCK=&amp;quot;1210:1215&amp;quot;&lt;br /&gt;
NUMFLOCK=&amp;quot;107:117&amp;quot;&lt;br /&gt;
NUMPTY=&amp;quot;19:19&amp;quot;&lt;br /&gt;
NUMSIGINFO=&amp;quot;274:274&amp;quot;&lt;br /&gt;
TCPSNDBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
TCPRCVBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
OTHERSOCKBUF=&amp;quot;900000:950000&amp;quot;&lt;br /&gt;
DGRAMRCVBUF=&amp;quot;200000:200000&amp;quot;&lt;br /&gt;
NUMOTHERSOCK=&amp;quot;650:660&amp;quot;&lt;br /&gt;
DCACHE=&amp;quot;786432:818029&amp;quot;&lt;br /&gt;
NUMFILE=&amp;quot;7500:7600&amp;quot;&lt;br /&gt;
AVNUMPROC=&amp;quot;51:51&amp;quot;&lt;br /&gt;
IPTENTRIES=&amp;quot;155:155&amp;quot;&lt;br /&gt;
DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
DISKINODES=&amp;quot;400000:420000&amp;quot;&lt;br /&gt;
CPUUNITS=&amp;quot;1412&amp;quot;&lt;br /&gt;
QUOTAUGIDLIMIT=&amp;quot;2000&amp;quot;&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/636&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/636&amp;quot;&lt;br /&gt;
NAMESERVER=&amp;quot;69.55.225.225 69.55.230.3&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;vzredhat-7.3/20030305&amp;quot;&lt;br /&gt;
VE_TYPE=&amp;quot;regular&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.225.229&amp;quot;&lt;br /&gt;
HOSTNAME=&amp;quot;textengine.net&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the hostname is set here, the disk space is set here, the number of inodes, the number of files that can be open, the number of tcp sockets, etc. - all are set here.&lt;br /&gt;
&lt;br /&gt;
In fact, everything that can be set on this customer system is set in this conf file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All interaction with the customer system is done with the VEID.  You start the system by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl start 999&lt;br /&gt;
&lt;br /&gt;
You stop it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl stop 999&lt;br /&gt;
&lt;br /&gt;
You execute commands in it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl exec 999 df -k&lt;br /&gt;
&lt;br /&gt;
You enter into it, via a root-shell backdoor with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and you set parameters for the system, while it is still running, with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; is the most commonly used command - we have aliased &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; since we use it so often. We’ll continue to use &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; in our examples, but feel free to use just &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say the user wants more diskspace.  You can cat their conf file and see:&lt;br /&gt;
&lt;br /&gt;
 DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So right now they have 4gigs of space.  You can then change it to 6 with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
IMPORTANT:  all issuances of the vzctl set command need to end with &amp;lt;tt&amp;gt;--save&amp;lt;/tt&amp;gt; - if they don&#039;t, the setting will be set, but it will not be saved to the conf file, and they will not have those settings next time they boot.&lt;br /&gt;
&lt;br /&gt;
All of the tunables in the conf file can be set with the vzctl set command.  Note that in the conf file, and on the vzctl set command line, we always issue two numbers separated by a colon - that is because we are setting the soft and hard limits.  Always set the hard limit slightly above the soft limit, as you see it is in the conf file for all those settings.&lt;br /&gt;
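Since --diskspace takes 1K blocks, the conversion is easy to fumble (4194304 blocks = 4 GB in the conf file above). A minimal shell sketch of that arithmetic, with the hard limit padded 10% above the soft limit to match the sample conf; the helper name is ours, not a real script on the virts:

```shell
# Convert a desired size in GB to the soft:hard DISKSPACE pair (1K blocks).
# The 10% headroom mirrors the 4194304:4613734 example conf above.
gb_to_diskspace() {
    gb=$1
    soft=$((gb * 1024 * 1024))     # 1K blocks
    hard=$((soft + soft / 10))     # hard limit 10% above the soft limit
    echo "${soft}:${hard}"
}

# e.g. the pair you would hand to: vzctl set 999 --diskspace ... --save
gb_to_diskspace 6
```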
&lt;br /&gt;
There are also things you can set with &amp;lt;tt&amp;gt;vzctl set&amp;lt;/tt&amp;gt; that are not in the conf file as settings, per se.  For instance, you can add IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --save&lt;br /&gt;
&lt;br /&gt;
or multiple IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --ipadd 10.10.20.30 --save&lt;br /&gt;
&lt;br /&gt;
or change the hostname:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --hostname www.example.com --save&lt;br /&gt;
&lt;br /&gt;
You can even set the nameservers:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --nameserver 198.78.66.4 --nameserver 198.78.70.180 --save&lt;br /&gt;
&lt;br /&gt;
Although you probably will never do that.&lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --disabled yes --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (&amp;lt;=3.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --onboot=no --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from using his control panel:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --offline_management=no --save &lt;br /&gt;
&lt;br /&gt;
You can suspend a VPS, so it can be resumed in the same state it was in when it was stopped (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl suspend 999&lt;br /&gt;
&lt;br /&gt;
and to resume it:&lt;br /&gt;
&lt;br /&gt;
 vzctl resume 999&lt;br /&gt;
&lt;br /&gt;
to see which ve owns a process:&lt;br /&gt;
 vzpid &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to mount up an unmounted ve:&lt;br /&gt;
 vzctl mount 827&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see network stats for CT&#039;s:&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
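If you want a quick topten-style answer from that table (which CT is the big talker), something like this works. The sample rows are canned, and it assumes all values share the M unit as in the output above; in real use you would feed it the data rows of vznetstat:

```shell
# Pick the heaviest outbound CT from vznetstat-style rows: VEID class out in.
vzns='24218 1 484M 39M
771 1 4234M 437M
472 1 1219M 315M'

# Numeric sort on the Output column, highest first, report the winner.
echo "$vzns" | sort -k3 -rn | head -1 | awk '{print "VEID "$1" out "$3}'
```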
&lt;br /&gt;
&lt;br /&gt;
One thing that sometimes comes up on older systems we created with smaller defaults is running out of inodes.  The user will email and say they cannot create any more files or grow existing files larger, but they will also say that they are not out of diskspace ... they are running:&lt;br /&gt;
&lt;br /&gt;
 df -k&lt;br /&gt;
&lt;br /&gt;
and seeing how much space is free - and they are not out of space.  They are most likely out of inodes - which they would see by running:&lt;br /&gt;
&lt;br /&gt;
 df -i&lt;br /&gt;
&lt;br /&gt;
So, the first thing you should do is enter their system with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and run:  &amp;lt;tt&amp;gt;df -i&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to confirm your theory.  Then exit their system.  Then simply cat their conf file and see what their inodes are set to (probably 200000:200000, since that was the old default on the older systems) and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskinodes 400000:400000 --save&lt;br /&gt;
&lt;br /&gt;
If they are not out of inodes, then a good possibility is that they have maxed out their numfile configuration variable, which controls how many files they can have in their system.  The current default is 7500 (which nobody has ever hit), but the old default was as low as 2000, so you would run something like:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --numfile 7500:7500 --save&lt;br /&gt;
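The df -i check above can be scripted if you want a yes/no answer at a glance. A sketch against canned df -i style output (the vzfs numbers here are invented for illustration):

```shell
# Flag any filesystem at 100% inode usage from `df -i` style output.
df_i='Filesystem      Inodes  IUsed  IFree IUse% Mounted on
/dev/vzfs       200000 200000      0  100% /'

# Skip the header, strip the % sign, report mounts that have hit the wall.
echo "$df_i" | awk 'NR>1 { sub(/%/,"",$5); if ($5+0 >= 100) print $6 " is out of inodes" }'
```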
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You cannot start or stop a VE if your pwd is inside its private (/vz/private/999) or root (/vz/root/999) directory, or anywhere below them.&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (linux) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all ve’s back up as soon as possible. Note the time; you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages being output. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log – these will also likely need to be sent to virtuozzo for interpretation. If the messages are spewing too fast, hit ^O + H to start a screen log dump which you can observe after the machine is rebooted. Additionally, if the machine is responsive, you can get a trace to send to virtuozzo by hooking up a kvm and entering these 3 sequences:&lt;br /&gt;
&amp;lt;pre&amp;gt;alt+print screen+m&lt;br /&gt;
alt+print screen+p&lt;br /&gt;
alt+print screen+t&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If there are no messages, the machine may just be really busy - wait a bit (5-10min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. if you can’t ssh into the DRAC card and issue &amp;lt;tt&amp;gt;racadm serveraction hardreset&amp;lt;/tt&amp;gt;), then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console (&amp;lt;tt&amp;gt;tip virtxx&amp;lt;/tt&amp;gt;) immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts- look out for the RAID card output- does everything look healthy? The output may be scrambled, look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple times during the boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly then you won’t get any console output, 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
=== Start containers/VE&#039;s/VPSs ===&lt;br /&gt;
When the machine begins to start VE’s, it’s safe to leave the console and login via ssh. All virts should be set to auto-start all VEs after a crash. Further, most (newer) virts are set to “fastboot” their VE’s (to find out, run:&lt;br /&gt;
 grep -i fast /etc/sysconfig/vz &lt;br /&gt;
and look for &amp;lt;tt&amp;gt;VZFASTBOOT=yes&amp;lt;/tt&amp;gt;). If this was set prior to the machine’s crash (setting it after the machine boots will not have any effect until the vz service is restarted), it will start each ve as fast as possible, in serial, then go thru each VE (serially), shutting it down, running a vzquota (disk usage) check, and bringing it back up. The benefit is that all VE’s are brought up quickly (within 15min or so depending on the #); the downside is that a customer watching closely will notice 2 outages – 1st the machine crash, 2nd their quota check (which will be a much shorter downtime - on the order of a few minutes). &lt;br /&gt;
&lt;br /&gt;
Where “fastboot” is not set to yes (i.e. on quar1), vz will start them consecutively, checking the quotas one at a time, and the 60th VE may not start until an hour or two later - this is not acceptable.&lt;br /&gt;
&lt;br /&gt;
The good news is, if you run vzctl start for a VE that is already started, you will simply get an error: &amp;lt;tt&amp;gt;VE is already started&amp;lt;/tt&amp;gt;.  Further, if you attempt to vzctl start a VE that is in the process of being started, you will simply get an error: &amp;lt;tt&amp;gt;unable to lock VE&amp;lt;/tt&amp;gt;.  So, there is no danger in simply running scripts to start smaller sets of VEs.  If the system is not autostarting, then there is no issue, and even if it is, when it conflicts, one process (yours or the autostart) will lose and just move on to the next one.&lt;br /&gt;
&lt;br /&gt;
A script has been written to assist with ve starts: [[#startvirt.pl|startvirt.pl]], which will start 6 ve’s at once until there are none left.  If startvirt.pl is used on a system where “fastboot” was on, it will circumvent the fastboot for ve’s started by startvirt.pl – they will go through the complete quota check before starting, so this is not advisable when a system has crashed. When a system is booted cleanly and there&#039;s no need for vzquota checks, startvirt.pl is safe and advisable to run.&lt;br /&gt;
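The core of such a starter is just a loop over the down VEs. A stripped-down sketch (startvirt.pl itself runs 6 at once; this sequential version only echoes the commands - drop the echo to really start them, and the function name is ours, not the real script):

```shell
# Start each listed VE in turn. Dry-run: prints the vzctl commands instead
# of running them; remove the `echo` to execute for real.
start_ves() {
    for ve in "$@"; do
        echo vzctl start "$ve"
    done
}

start_ves 101 102 103
```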
&lt;br /&gt;
=== Make sure all containers are running ===&lt;br /&gt;
You can quickly get a feel for how many ve’s are started by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt4 log]# vs&lt;br /&gt;
VEID 16066 exist mounted running&lt;br /&gt;
VEID 16067 exist mounted running&lt;br /&gt;
VEID 4102 exist mounted running&lt;br /&gt;
VEID 4112 exist mounted running&lt;br /&gt;
VEID 4116 exist mounted running&lt;br /&gt;
VEID 4122 exist mounted running&lt;br /&gt;
VEID 4123 exist mounted running&lt;br /&gt;
VEID 4124 exist mounted running&lt;br /&gt;
VEID 4132 exist mounted running&lt;br /&gt;
VEID 4148 exist mounted running&lt;br /&gt;
VEID 4151 exist mounted running&lt;br /&gt;
VEID 4155 exist mounted running&lt;br /&gt;
VEID 42 exist mounted running&lt;br /&gt;
VEID 432 exist mounted running&lt;br /&gt;
VEID 434 exist mounted running&lt;br /&gt;
VEID 442 exist mounted running&lt;br /&gt;
VEID 450 exist mounted running&lt;br /&gt;
VEID 452 exist mounted running&lt;br /&gt;
VEID 453 exist mounted running&lt;br /&gt;
VEID 454 exist mounted running&lt;br /&gt;
VEID 462 exist mounted running&lt;br /&gt;
VEID 463 exist mounted running&lt;br /&gt;
VEID 464 exist mounted running&lt;br /&gt;
VEID 465 exist mounted running&lt;br /&gt;
VEID 477 exist mounted running&lt;br /&gt;
VEID 484 exist mounted running&lt;br /&gt;
VEID 486 exist mounted running&lt;br /&gt;
VEID 490 exist mounted running&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So to see how many ve’s have started:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep running | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And to see how many haven’t:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep down | wc -l&lt;br /&gt;
     0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And how many we should have running:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
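The three greps above can be collapsed into a single pass over the vs output if you want all three counts at once; a sketch run against canned vs lines:

```shell
# Summarize vs output: running, down, and total VE counts in one awk pass.
vs_out='VEID 42 exist mounted running
VEID 432 exist mounted down
VEID 434 exist mounted running'

echo "$vs_out" | awk '/running/{r++} /down/{d++} {t++} END{print r+0" running, "d+0" down, "t+0" total"}'
```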
&lt;br /&gt;
Another tool you can use to see which ve’s have started, among other things, is [[#vzstat|vzstat]]. It will give you CPU, memory, and other stats on each ve and the overall system. It’s a good thing to watch as ve’s are starting (note the VENum parameter; it tells you how many have started):&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;4:37pm, up 3 days,  5:31,  1 user, load average: 1.57, 1.68, 1.79&lt;br /&gt;
VENum 40, procs 1705: running 2, sleeping 1694, unint 0, zombie 9, stopped 0&lt;br /&gt;
CPU [ OK ]: VEs  57%, VE0   0%, user   8%, sys   7%, idle  85%, lat(ms) 412/2&lt;br /&gt;
Mem [ OK ]: total 6057MB, free 9MB/54MB (low/high), lat(ms) 0/0&lt;br /&gt;
Swap [ OK ]: tot 6142MB, free 4953MB, in 0.000MB/s, out 0.000MB/s&lt;br /&gt;
Net [ OK ]: tot: in  0.043MB/s  402pkt/s, out  0.382MB/s 4116pkt/s&lt;br /&gt;
Disks [ OK ]: in 0.002MB/s, out 0.000MB/s&lt;br /&gt;
&lt;br /&gt;
  VEID ST    %VM     %KM         PROC    CPU     SOCK FCNT MLAT IP&lt;br /&gt;
     1 OK 1.0/17  0.0/0.4    0/32/256 0.0/0.5 39/1256    0    9 69.55.227.152&lt;br /&gt;
    21 OK 1.3/39  0.1/0.2    0/46/410 0.2/2.8 23/1860    0    6 69.55.239.60&lt;br /&gt;
   133 OK 3.1/39  0.1/0.3    1/34/410 6.3/2.8 98/1860    0    0 69.55.227.147&lt;br /&gt;
   263 OK 2.3/39  0.1/0.2    0/56/410 0.3/2.8 34/1860    0    1 69.55.237.74&lt;br /&gt;
   456 OK  17/39  0.1/0.2   0/100/410 0.1/2.8 48/1860    0   11 69.55.236.65&lt;br /&gt;
   476 OK 0.6/39  0.0/0.2    0/33/410 0.1/2.8 96/1860    0   10 69.55.227.151&lt;br /&gt;
   524 OK 1.8/39  0.1/0.2    0/33/410 0.0/2.8 28/1860    0    0 69.55.227.153&lt;br /&gt;
   594 OK 3.1/39  0.1/0.2    0/45/410 0.0/2.8 87/1860    0    1 69.55.239.40&lt;br /&gt;
   670 OK 7.7/39  0.2/0.3    0/98/410 0.0/2.8 64/1860    0  216 69.55.225.136&lt;br /&gt;
   691 OK 2.0/39  0.1/0.2    0/31/410 0.0/0.7 25/1860    0    1 69.55.234.96&lt;br /&gt;
   744 OK 0.1/17  0.0/0.5    0/10/410 0.0/0.7  7/1860    0    6 69.55.224.253&lt;br /&gt;
   755 OK 1.1/39  0.0/0.2    0/27/410 0.0/2.8 33/1860    0    0 192.168.1.4&lt;br /&gt;
   835 OK 1.1/39  0.0/0.2    0/19/410 0.0/2.8  5/1860    0    0 69.55.227.134&lt;br /&gt;
   856 OK 0.3/39  0.0/0.2    0/13/410 0.0/2.8 16/1860    0    0 69.55.227.137&lt;br /&gt;
   936 OK 3.2/52  0.2/0.4    0/75/410 0.2/0.7 69/1910    0    8 69.55.224.181&lt;br /&gt;
  1020 OK 3.9/39  0.1/0.2    0/60/410 0.1/0.7 55/1860    0    8 69.55.227.52&lt;br /&gt;
  1027 OK 0.3/39  0.0/0.2    0/14/410 0.0/2.8 17/1860    0    0 69.55.227.83&lt;br /&gt;
  1029 OK 1.9/39  0.1/0.2    0/48/410 0.2/2.8 25/1860    0    5 69.55.227.85&lt;br /&gt;
  1032 OK  12/39  0.1/0.4    0/80/410 0.0/2.8 41/1860    0    8 69.55.227.90&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you are all done, you will want to make sure that all the VEs really did get started, run vs one more time.&lt;br /&gt;
&lt;br /&gt;
Note the time all ve’s are back up and enter that into and save the crash log entry.&lt;br /&gt;
&lt;br /&gt;
Occasionally, a ve will not start automatically. The most common reason for a ve not to come up normally is that it was at its disk limit before the crash, and will not start since it’s over the limit. To overcome this, set the disk space to the current usage level (the system will give this to you when it fails to start), start the ve, then re-set the disk space back to the prior level. Lastly, contact the customer to let them know they’re out of disk (or allocate more disk if they&#039;re entitled to more).&lt;br /&gt;
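That bump-start-restore dance is easy to fumble at 3am, so here is a hedged dry-run sketch of the sequence. The helper name and numbers are examples, not a real script on the virts; it echoes the vzctl commands unless you point VZCTL at the real binary:

```shell
# Temporarily raise --diskspace to current usage, start the VE, then restore
# the prior limits. VZCTL defaults to a dry-run echo; set VZCTL=vzctl for real.
VZCTL="echo vzctl"
recover_over_limit() {
    veid=$1; cur=$2; orig=$3
    $VZCTL set "$veid" --diskspace "${cur}:${cur}" --save
    $VZCTL start "$veid"
    $VZCTL set "$veid" --diskspace "$orig" --save
}

recover_over_limit 999 4650000 "4194304:4613734"
```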
&lt;br /&gt;
== Hitting performance barriers and fixing them ==&lt;br /&gt;
&lt;br /&gt;
There are multiple modes virtuozzo offers to allocate resources to a ve. We utilize 2: SLM and UBC parameters.&lt;br /&gt;
On our 4.x systems, we use all SLM – it’s simpler to manage and understand. There are a few systems on virt19/18 that may also use SLM. Everything else uses UBC. &lt;br /&gt;
You can tell an SLM ve by:&lt;br /&gt;
&lt;br /&gt;
 SLMMODE=&amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in their conf file. &lt;br /&gt;
&lt;br /&gt;
TODO: detail SLM modes and parameters.&lt;br /&gt;
&lt;br /&gt;
If someone is in SLM mode and they hit memory resource limits, they simply need to upgrade to more memory.&lt;br /&gt;
&lt;br /&gt;
The following applies to everyone else (UBC).&lt;br /&gt;
&lt;br /&gt;
Customers will often email and say that they are getting out of memory errors - a common one is &amp;quot;cannot fork&amp;quot; ... basically, anytime you see something odd like this, it means they are hitting one of their limits that is in place in their conf file.&lt;br /&gt;
&lt;br /&gt;
The conf file, however, simply shows their limits - how do we know what they are currently at ?&lt;br /&gt;
&lt;br /&gt;
The answer is in the beancounters file, &amp;lt;tt&amp;gt;/proc/user_beancounters&amp;lt;/tt&amp;gt; - it contains the current status (and peaks) of their performance settings, and also counts how many times they have hit the barrier.  The output looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;764: kmemsize         384113     898185    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          0        322        322          0&lt;br /&gt;
     privvmpages        1292       7108     610000     615000          0&lt;br /&gt;
     shmpages            270        528      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc               8         23        410        415          0&lt;br /&gt;
     physpages            48       5624          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages        641       6389      13019 2147483647          0&lt;br /&gt;
     numtcpsock            3         21       1210       1215          0&lt;br /&gt;
     numflock              1          3        107        117          0&lt;br /&gt;
     numpty                0          2         19         19          0&lt;br /&gt;
     numsiginfo            0          4        274        274          0&lt;br /&gt;
     tcpsndbuf             0      80928    1800000    1900000          0 &lt;br /&gt;
     tcprcvbuf             0     108976    1800000    1900000          0&lt;br /&gt;
     othersockbuf       2224      37568     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4272     200000     200000          0&lt;br /&gt;
     numothersock          3          9        650        660          0&lt;br /&gt;
     dcachesize        53922     100320     786432     818029          0&lt;br /&gt;
     numfile             161        382       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first column is the name of the counter in question - the same names we saw in the system&#039;s conf file. The second column is the _current_ value of that counter, the third column is the maximum that counter has ever reached, the fourth column is the soft limit, and the fifth column is the hard limit (the same numbers as in that system&#039;s conf file).&lt;br /&gt;
&lt;br /&gt;
The sixth column is the failcount - how many times current usage has risen to hit the barrier. It increments as soon as current usage hits the soft limit.&lt;br /&gt;
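As a quick triage aid, you can list only the counters whose failcount is nonzero. This is a sketch, not an existing tool - the helper name failcounts and its optional file argument are inventions for illustration, and the assumed field layout is the one in the sample above:&lt;br /&gt;

```shell
# failcounts: print counters whose failcnt (last column) is nonzero.
# Reads /proc/user_beancounters by default; an optional file argument
# lets you run it against saved output.
failcounts() {
    awk '$1 ~ /:$/  { ve = $1; sub(":", "", ve); $1 = "" }
         NF >= 6 && $NF + 0 > 0 { print ve, $0 }' "${1:-/proc/user_beancounters}"
}
```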
&lt;br /&gt;
The problem with /proc/user_beancounters is that it contains that set of data for every running VE - you can&#039;t just cat it, since the output is long and includes every other running system.&lt;br /&gt;
&lt;br /&gt;
You can vzctl enter the system and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 9999&lt;br /&gt;
 cat /proc/user_beancounters&lt;br /&gt;
&lt;br /&gt;
inside their system, and you will see the stats for their system alone, but entering their system every time you want to check is cumbersome.&lt;br /&gt;
&lt;br /&gt;
So, I wrote a simple script called &amp;quot;vzs&amp;quot; which greps for the VEID and spits out the two dozen or so counter lines that follow it.  For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzs 765:&lt;br /&gt;
765: kmemsize        2007936    2562780    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          8        322        322          0&lt;br /&gt;
     privvmpages       26925      71126     610000     615000          0&lt;br /&gt;
     shmpages          16654      16750      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc              41         57        410        415          0&lt;br /&gt;
     physpages          1794      49160          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages       4780      51270      13019 2147483647          0&lt;br /&gt;
     numtcpsock           23         37       1210       1215          0&lt;br /&gt;
     numflock             17         39        107        117          0&lt;br /&gt;
     numpty                1          3         19         19          0&lt;br /&gt;
     numsiginfo            0          6        274        274          0&lt;br /&gt;
     tcpsndbuf         22240     333600    1800000    1900000          0&lt;br /&gt;
     tcprcvbuf             0     222656    1800000    1900000          0&lt;br /&gt;
     othersockbuf     104528     414944     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4448     200000     200000          0&lt;br /&gt;
     numothersock         73        105        650        660          0&lt;br /&gt;
     dcachesize       247038     309111     786432     818029          0&lt;br /&gt;
     numfile             904       1231       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That showed us just the portion of /proc/user_beancounters for system 765.&lt;br /&gt;
&lt;br /&gt;
When you run the vzs command, always add a : after the VEID.&lt;br /&gt;
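A minimal stand-in for vzs might look like the following (an assumption - the real script may differ). The trailing colon in the pattern is what keeps a search for 765: from also matching, say, 7650:&lt;br /&gt;

```shell
# vzs: print the beancounter block for one VEID.
# Takes an optional second argument (the beancounters file) so it can be
# exercised against saved output; on a virt it defaults to the real file.
vzs() {
    grep -A 23 "^[[:space:]]*$1" "${2:-/proc/user_beancounters}"
}
```

Usage on a virt: vzs 765:&lt;br /&gt;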
&lt;br /&gt;
So, if a customer complains about out-of-memory errors, no more files, no more ptys, or just has a vague complaint about processes dying, the very first thing to do is check their beancounters with vzs.  Usually you will spot an item with a high failcount that needs to be raised.&lt;br /&gt;
&lt;br /&gt;
At that point you could simply up the counter with `vzctl set`. Generally pick a number 10-20% higher than the old one, and make the hard limit slightly larger than the soft limit. However, our systems now come in several levels, and those levels have different memory allocations. If someone is hitting something other than a memory limit (pty, numiptent, numflock), it’s generally safe to increase it, at least to the level in the /vzconf/4unlimited file on the newest virt. If someone is hitting a memory limit, first make sure they are getting what they are entitled to:&lt;br /&gt;
&lt;br /&gt;
(refer to mgmt -&amp;gt; payments -&amp;gt; packages)&lt;br /&gt;
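For a non-memory counter, the 10-20% bump described above can be sketched like this (the VEID and values are hypothetical, and the vzctl command is echoed rather than executed):&lt;br /&gt;

```shell
old_soft=7500                        # current numfile soft limit
new_soft=$((old_soft * 115 / 100))   # ~15% higher
new_hard=$((new_soft + 100))         # hard limit slightly above soft
echo "vzctl set 765 --numfile ${new_soft}:${new_hard} --save"
```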
&lt;br /&gt;
To set those levels, you use the [[#setmem|setmem]] command. &lt;br /&gt;
&lt;br /&gt;
The alternate (DEPRECATED) method would be to use one of these commands:&lt;br /&gt;
256 &amp;lt;veid&amp;gt;&lt;br /&gt;
300 &amp;lt;veid&amp;gt;&lt;br /&gt;
384 &amp;lt;veid&amp;gt;&lt;br /&gt;
512 &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the levels were not right (run vzs &amp;lt;veid&amp;gt; before and after to see the effect), tell the customer they’ve been adjusted and be done with it. If the levels were right, tell the customer they must upgrade to a higher package, tell them how to see their level (control panel), and that they can reboot their system to escape this lockup condition.&lt;br /&gt;
&lt;br /&gt;
Customers may also complain that their site is totally unreachable or down... if the underlying machine is up and all seems well, you may notice in the beancounters that network-specific counters are failing - such as numtcpsock, tcpsndbuf or tcprcvbuf.  This keeps them from talking on the network and makes it seem like their system is down.  Again, just up the limits and things should be fine.&lt;br /&gt;
&lt;br /&gt;
On virts 1-4, you should first look at the default settings for that item on a later virt, such as virt 8 - we have increased the defaults a lot since the early machines.  So, if you are going to up a counter on virt2, don’t just raise it by 10-20% - raise it to the new default you see on virt8.&lt;br /&gt;
&lt;br /&gt;
== Moving a VE to another virt (migrate/migrateonline) ==&lt;br /&gt;
&lt;br /&gt;
This will take a while to complete - and it is best to do this at night when the load is light on both machines.&lt;br /&gt;
&lt;br /&gt;
There are different methods for this, depending on which version of virtuozzo is installed on the src. and dst. virt. &lt;br /&gt;
To check which version is running: &lt;br /&gt;
 [root@virt12 private]# cat /etc/virtuozzo-release&lt;br /&gt;
 Virtuozzo release 2.6.0&lt;br /&gt;
&lt;br /&gt;
Ok, let&#039;s say that the VE is 1212, and its vital stats are:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1212&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/1212&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/1212&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;fedora-core-2/20040903&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.229.84&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;devel-fc2/20040903 php-fc2/20040813 mysql-fc2/20040812 postgresql-fc2/20040813 mod_perl-fc2/20040812 mod_ssl-fc2/20040811 jre-fc2/20040823 jdk-fc2/20040823 mailman-fc2/20040823 analog-fc2/20040824 proftpd-fc2/20040818 tomcat-fc2/20040823 usermin-fc2/20040909 webmin-fc2/20040909 uw-imap-fc2/20040830 phpBB-fc2/20040831 spamassassin-fc2/20040910 PostNuke-fc2/20040824 sl-webalizer-fc2/20040&lt;br /&gt;
818&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# vzctl exec 1212 df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  405M  3.7G  10% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From this you can see that he’s using (and will minimally need free on the dst server) ~400MB, and he’s running on a Fedora 2 template, version 20040903. He’s also got a bunch of other templates installed. It is &#039;&#039;&#039;vital&#039;&#039;&#039; that &#039;&#039;&#039;all&#039;&#039;&#039; these templates exist on the dst system. To confirm that, on the dst system run:&lt;br /&gt;
&lt;br /&gt;
For &amp;lt; 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt14 private]# vzpkgls | grep fc2&lt;br /&gt;
devel-fc2 20040903&lt;br /&gt;
PostNuke-fc2 20040824&lt;br /&gt;
analog-fc2 20040824&lt;br /&gt;
awstats-fc2 20040824&lt;br /&gt;
bbClone-fc2 20040824&lt;br /&gt;
jdk-fc2 20040823&lt;br /&gt;
jre-fc2 20040823&lt;br /&gt;
mailman-fc2 20040823&lt;br /&gt;
mod_frontpage-fc2 20040816&lt;br /&gt;
mod_perl-fc2 20040812&lt;br /&gt;
mod_ssl-fc2 20040811&lt;br /&gt;
mysql-fc2 20040812&lt;br /&gt;
openwebmail-fc2 20040817&lt;br /&gt;
php-fc2 20040813&lt;br /&gt;
phpBB-fc2 20040831&lt;br /&gt;
postgresql-fc2 20040813&lt;br /&gt;
proftpd-fc2 20040818&lt;br /&gt;
sl-webalizer-fc2 20040818&lt;br /&gt;
spamassassin-fc2 20040910&lt;br /&gt;
tomcat-fc2 20040823&lt;br /&gt;
usermin-fc2 20040909&lt;br /&gt;
uw-imap-fc2 20040830&lt;br /&gt;
webmin-fc2 20040909&lt;br /&gt;
[root@virt14 private]# vzpkgls | grep fedora&lt;br /&gt;
fedora-core-1 20040121 20040818&lt;br /&gt;
fedora-core-devel-1 20040121 20040818&lt;br /&gt;
fedora-core-2 20040903&lt;br /&gt;
[root@virt14 private]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For these older systems, you can simply match up the date on the template. &lt;br /&gt;
&lt;br /&gt;
For &amp;gt;= 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt19 /vz2/private]# vzpkg list&lt;br /&gt;
centos-5-x86                    2008-01-07 22:05:57&lt;br /&gt;
centos-5-x86    devel&lt;br /&gt;
centos-5-x86    jre&lt;br /&gt;
centos-5-x86    jsdk&lt;br /&gt;
centos-5-x86    mod_perl&lt;br /&gt;
centos-5-x86    mod_ssl&lt;br /&gt;
centos-5-x86    mysql&lt;br /&gt;
centos-5-x86    php&lt;br /&gt;
centos-5-x86    plesk9&lt;br /&gt;
centos-5-x86    plesk9-antivirus&lt;br /&gt;
centos-5-x86    plesk9-api&lt;br /&gt;
centos-5-x86    plesk9-atmail&lt;br /&gt;
centos-5-x86    plesk9-backup&lt;br /&gt;
centos-5-x86    plesk9-horde&lt;br /&gt;
centos-5-x86    plesk9-mailman&lt;br /&gt;
centos-5-x86    plesk9-mod-bw&lt;br /&gt;
centos-5-x86    plesk9-postfix&lt;br /&gt;
centos-5-x86    plesk9-ppwse&lt;br /&gt;
centos-5-x86    plesk9-psa-firewall&lt;br /&gt;
centos-5-x86    plesk9-psa-vpn&lt;br /&gt;
centos-5-x86    plesk9-psa-fileserver&lt;br /&gt;
centos-5-x86    plesk9-qmail&lt;br /&gt;
centos-5-x86    plesk9-sb-publish&lt;br /&gt;
centos-5-x86    plesk9-vault&lt;br /&gt;
centos-5-x86    plesk9-vault-most-popular&lt;br /&gt;
centos-5-x86    plesk9-watchdog&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On these newer systems, it&#039;s difficult to tell whether the template on the dst exactly matches the src. Just because centos-5-x86 is listed on both servers doesn&#039;t mean all the same packages are present on the dst. To know for sure, perform a dry-run rsync:&lt;br /&gt;
&lt;br /&gt;
 rsync -avn /vz/template/centos/5/x86/ root@10.1.4.61:/vz/template/centos/5/x86/&lt;br /&gt;
&lt;br /&gt;
If you see a ton of output from the dry run, then clearly there are differences. You may opt to let the rsync complete (without dry-run mode); the only downside is that you&#039;ve now used up more space on the dst, and the centos template will be a mix of old and new data - difficult if not impossible to undo (if someday we wanted to reclaim the space).&lt;br /&gt;
&lt;br /&gt;
If you choose to merge templates, you should closely inspect the dry run output. You should also take care to exclude anything in the /config directory. For example:&lt;br /&gt;
&lt;br /&gt;
 rsync -av -e ssh --stats --exclude=x86/config  /vz/template/ubuntu/10.04/ root@10.1.4.62:/vz/template/ubuntu/10.04/&lt;br /&gt;
&lt;br /&gt;
Which will avoid this directory and contents:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 /vz2/private]# ls /vz/template/ubuntu/10.04/x86/config*&lt;br /&gt;
app  os&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is important to avoid since the config may differ on the destination, and we are really only interested in making sure the packages are there, not in overwriting a newer config with an older one.&lt;br /&gt;
&lt;br /&gt;
If the dst system was missing a template, you have 2 choices: &lt;br /&gt;
# put the missing template on the dst system. 2 choices here: &lt;br /&gt;
## Install the template from rpm (found under backup2: /mnt/data4/vzrpms/distro/) or &lt;br /&gt;
## rsync over the template (found under /vz/template) - see above&lt;br /&gt;
# put the ve on a system which has all the proper templates&lt;br /&gt;
&lt;br /&gt;
=== pre-seeding a migration ===&lt;br /&gt;
&lt;br /&gt;
Migrating a customer (or many) can take a long time, depending on how much data has to be transferred. Further, it can be difficult to gauge when a migration will complete or how long it will take. To speed up the process and get a better idea of how long it will take, you can pre-transfer a customer&#039;s data to the destination server. Done correctly, vzmigrate will see the pre-transferred data and pick up where you left off, with much less to transfer (just changed/new files). &lt;br /&gt;
&lt;br /&gt;
We believe vzmigrate uses rsync to do its transfer. Therefore, not only can you use rsync to pre-seed, you can also run rsync to see what is causing a repeatedly-failing vzmigrate to fail. &lt;br /&gt;
&lt;br /&gt;
There&#039;s no magic to a pre-seed; you just need to make sure the destination directory is named correctly.&lt;br /&gt;
&lt;br /&gt;
Given:&lt;br /&gt;
&lt;br /&gt;
source: /vz1/private/1234&lt;br /&gt;
&lt;br /&gt;
and you want to migrate to /vz2 on the target system, your rsync would look like:&lt;br /&gt;
&lt;br /&gt;
 rsync -av /vz1/private/1234/ root@x.x.x.x:/vz2/private/1234.migrated/&lt;br /&gt;
&lt;br /&gt;
After that rsync completes, the ensuing migrateonline (or migrate) will take much less time - depending on the number of files to be analyzed and the number of changed files. In any case, it&#039;ll be much faster than starting the migration from scratch.&lt;br /&gt;
&lt;br /&gt;
Further, as discussed elsewhere in this topic, if you want to restart a failed migration you can move &amp;lt;tt&amp;gt;/vz/private/1234&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;/vz/private/1234.migrated&amp;lt;/tt&amp;gt; on the destination. Do this &#039;&#039;&#039;only&#039;&#039;&#039; if the migration failed and the CT is not running on the destination HN.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline instructions: src &amp;gt;=3.x -&amp;gt; dst&amp;gt;=3.x ===&lt;br /&gt;
&lt;br /&gt;
A script called [[#migrateonline|migrateonline]] was written to handle this kind of move. It is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; – a util to seamlessly (no reboot of the ve necessary) move a ve from one host to another. The wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from the arp/route tables, causing problems when the ve was started up on the dst system; [[#migrate|migrate]] mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrateonline emails ve owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as on the src virt. Note: &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; is equivalent to &amp;lt;tt&amp;gt;migrateonline&amp;lt;/tt&amp;gt;, but will migrate a ve AND restart it in the process.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrateonline&lt;br /&gt;
usage: /usr/local/sbin/migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrateonline 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine.&lt;br /&gt;
&lt;br /&gt;
If they had backups, use the mvbackups command to move their backups to the new server:&lt;br /&gt;
&lt;br /&gt;
 mvbackups 1212 virt14 vz&lt;br /&gt;
&lt;br /&gt;
Rename the ve&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/migrated-1212&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/old-1212-migrated-20120404-noarchive&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems in mgmt to reflect the new path and server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If migrateonline does not work, you can try again using plain migrate - this will result in a brief reboot of the ve.&lt;br /&gt;
Before you try again, make sure of a few things:&lt;br /&gt;
&lt;br /&gt;
Depending on where the migration died, there may be partial data on the dst system in 1 of 2 places:&lt;br /&gt;
(given the example above)&lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
Before you run migrate again, you&#039;ll want to rename the directory so that all data is in 1212.migrated:&lt;br /&gt;
&lt;br /&gt;
 mv /vz/private/1212 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
this way, it will pick up where it left off and transfer only new files.&lt;br /&gt;
&lt;br /&gt;
Likewise, if you want to speed up a migration, you can pre-seed the dst as follows:&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# rsync -avSH /vz/private/1212/ root@10.1.4.64:/vz/private/1212.migrated/&lt;br /&gt;
&lt;br /&gt;
Then when you run migrate or migrateonline, it only needs to move the changed files - the migration will complete much more quickly.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline/migrate failures (migrate manually) ===&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say for whatever reason the migration fails. If it fails with [[#migrateonline|migrateonline]], you should try [[#migrate|migrate]] (which will reboot the customer&#039;s ve, so notify them ahead of time).&lt;br /&gt;
&lt;br /&gt;
You may want to run a [[#pre-seeding_a_migration|pre-seed]] rsync to see if you can find the problem. On older virts, we&#039;ve seen this problem due to a large logfile (which you can find and encourage the customer to remove/compress):&lt;br /&gt;
 find / -size +1048576k -exec ls -lh {} \;&lt;br /&gt;
&lt;br /&gt;
You may also see migration failing due to quota issues.&lt;br /&gt;
&lt;br /&gt;
You can try to resolve this by copying an existing quota file over the one you need:&lt;br /&gt;
&lt;br /&gt;
 cp /var/vzquota/quota.1 /var/vzquota/quota.xxx&lt;br /&gt;
&lt;br /&gt;
If it complains that quota is running, you should be able to stop it:&lt;br /&gt;
&lt;br /&gt;
 vzquota off xxxx&lt;br /&gt;
&lt;br /&gt;
If all else fails, migrate to a new VEID&lt;br /&gt;
i.e. 1234 becomes 12341&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the rsync or [[#migrate|migrate]] fails, you can always move someone manually:&lt;br /&gt;
&lt;br /&gt;
1. stop ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v stop 1234&lt;br /&gt;
&lt;br /&gt;
2. copy over data&amp;lt;br&amp;gt;&lt;br /&gt;
 rsync -avSH /vz/private/1234/ root@1.1.1.1:/vzX/private/1234/&lt;br /&gt;
&lt;br /&gt;
NOTE: if you&#039;ve previously seeded the data (run rsync while the VE was up/running) and this is a subsequent rsync, make sure the last rsync you do (while the VE is not running) includes the --delete option.&lt;br /&gt;
&lt;br /&gt;
3. copy over conf&amp;lt;br&amp;gt;&lt;br /&gt;
 scp /vzconf/1234.conf root@1.1.1.1:/vzconf&lt;br /&gt;
&lt;br /&gt;
4. on dst, edit the conf to reflect the right vzX dir&amp;lt;br&amp;gt;&lt;br /&gt;
 vi /vzconf/1234.conf&lt;br /&gt;
&lt;br /&gt;
5. on src remove the IPs&amp;lt;br&amp;gt;&lt;br /&gt;
 ipdel 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
6. on dst add IPs &amp;lt;br&amp;gt;&lt;br /&gt;
 ipadd 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
7. on dst, start ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v start 1234&lt;br /&gt;
&lt;br /&gt;
8. cancel, then archive the ve on the src per the instructions above.&lt;br /&gt;
&lt;br /&gt;
=== migrate src=2.6.0 -&amp;gt; dst&amp;gt;=2.6.0, or mass-migration with customer notify ===&lt;br /&gt;
&lt;br /&gt;
A script called &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; was written to handle this kind of move. It is basically a wrapper for vzmigrate – a util to seamlessly move a ve from one host to another. The wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from the arp/route tables, causing problems when the ve was started up on the dst system; migrate mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrate emails ve owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as on the src virt. Note: migrateonline is equivalent to migrate, but will migrate a ve from one 2.6 &#039;&#039;&#039;kernel&#039;&#039;&#039; machine to another 2.6 kernel machine without restarting the ve.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrate&lt;br /&gt;
usage: /usr/local/sbin/migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrate 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which migrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
&lt;br /&gt;
On 2.6.1 you’ll also have to move the private area:&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, [[#cancelve|cancelve]] would offer to remove them. You want to say &#039;&#039;&#039;no&#039;&#039;&#039; to this option – removing them would mean the backups have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across a filesystem as in the example above will create actual files and consume lots more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, then update the system, the template (it should be pre-selected as the same), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
=== vzmigrate: src=2.6.1 -&amp;gt; dst&amp;gt;=2.6.0 ===&lt;br /&gt;
&lt;br /&gt;
This version of vzmigrate works properly with regard to handling ips. It will not notify ve owners of moves as in the above example. Other than that it’s essentially the same.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]#  vzmigrate 10.1.4.64 -r no 1212:1212:/vz/private/1212:/vz/root/1212&lt;br /&gt;
migrating on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which vzmigrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
[root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, &amp;lt;tt&amp;gt;cancelve&amp;lt;/tt&amp;gt; would offer to remove them. You want to say no to this option – removing them would mean the backups have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across a filesystem as in the example above will create actual files and consume lots more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, then update the system, the template (it should be pre-selected as the same), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
=== src=2.5.x ===&lt;br /&gt;
&lt;br /&gt;
First, go to the private dir:&lt;br /&gt;
&lt;br /&gt;
 cd /vz1/private/&lt;br /&gt;
&lt;br /&gt;
Stop the VE - make sure it stops totally cleanly.&lt;br /&gt;
 &lt;br /&gt;
 vzctl stop 1212&lt;br /&gt;
&lt;br /&gt;
Then you’d use vemove - a script written to copy over the config, create tarballs of the ve’s data on the destination virt, and cancel the ve on the source system (in this example we’re going to put a ve that was in /vz1/private on the src virt, in /vz/private on the dst virt):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vemove&lt;br /&gt;
ERROR: Usage: vemove veid target_ip target_path_dir&lt;br /&gt;
[root@virt12 sbin]# vemove 1212 10.1.4.64 /vz/private/1212&lt;br /&gt;
tar cfpP - 1212 --ignore-failed-read | (ssh -2 -c arcfour 10.1.4.64 &amp;quot;split - -b 1024m /vz/private/1212.tar&amp;quot; )&lt;br /&gt;
scp /vzconf/1212.conf 10.1.4.64:/vzconf&lt;br /&gt;
cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, cancelve will offer to remove them. Say no to this option; removing them would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems as in the example above will create actual copies and consume far more space on the target drive.&lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
When you are done, go to /vz/private on the dst virt; you will have files like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;1212.taraa&lt;br /&gt;
1212.tarab&lt;br /&gt;
1212.tarac&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each one 1024m (or less, for the last one) in size.&lt;br /&gt;
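The split/reassemble step can be sketched in miniature (tiny chunk sizes and temp files here; the real transfer uses 1024m chunks piped over ssh):

```shell
# vemove pipes a tar stream into `split`, producing .taraa/.tarab/... on
# the destination; `cat prefix??` glues them back together byte-for-byte.
tmp=$(mktemp -d)
head -c 5000 /dev/urandom > "$tmp/orig"
split -b 2048 "$tmp/orig" "$tmp/orig.tar"   # -> orig.taraa .tarab .tarac
cat "$tmp"/orig.tar?? > "$tmp/rebuilt"
cmp -s "$tmp/orig" "$tmp/rebuilt" && ok=yes || ok=no
rm -rf "$tmp"
```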
&lt;br /&gt;
cd to /vz/private on the dst server and run:&lt;br /&gt;
&lt;br /&gt;
 cat 1212.tar?? | tar xpPBf -&lt;br /&gt;
&lt;br /&gt;
and after 20 mins or so it will be totally untarred.  Now since the conf&lt;br /&gt;
file is already there, you can go ahead and start the system.&lt;br /&gt;
&lt;br /&gt;
 vzctl start 1212&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, then update the system, the template (should be pre-selected as the same), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
NOTE: you MUST tar the system up using the virtuozzo version of tar that&lt;br /&gt;
is on all the virt systems, and further you MUST untar the tarball with&lt;br /&gt;
the virtuozzo tar, using these options:  `&amp;lt;tt&amp;gt;tar xpPBf -&amp;lt;/tt&amp;gt;`&lt;br /&gt;
&lt;br /&gt;
If you tar up an entire VE and move it to a non-virtuozzo machine, that is&lt;br /&gt;
ok, and you can untar it there with normal tar commands, but do not untar&lt;br /&gt;
it and then repack it with a normal tar and expect it to work - you need&lt;br /&gt;
to use virtuozzo tar commands on virtuozzo tarballs to make it work.&lt;br /&gt;
&lt;br /&gt;
The backups are sort of an exception, since we are just (usually)&lt;br /&gt;
restoring user data that was created after we gave them the system, and&lt;br /&gt;
therefore has nothing to do with magic symlinks or vz-rpms, etc.&lt;br /&gt;
&lt;br /&gt;
== Moving a VE on the same virt ==&lt;br /&gt;
&lt;br /&gt;
Easy way:&amp;lt;br&amp;gt;&lt;br /&gt;
Scenario 1: ve 123 is to be renamed 1231 and moved from vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:1231:/vz/private/1231:/vz/root/1231&lt;br /&gt;
&lt;br /&gt;
Scenario 2: ve 123 is to be moved vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:123:/vz/private/123:/vz/root/123&lt;br /&gt;
&lt;br /&gt;
vzmlocal will reboot the ve at the end of the move&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; don&#039;t forget to fix /usr/local/sbin/backup.config for new disk location&lt;br /&gt;
&lt;br /&gt;
Manual/old way:&lt;br /&gt;
&lt;br /&gt;
1) &amp;lt;tt&amp;gt;vzctl stop 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2) &amp;lt;tt&amp;gt;mv /vz1/private/123 /vz/private/.&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(or cp -a if you want to copy)&lt;br /&gt;
3) in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts/123.conf&amp;lt;/tt&amp;gt; change value&amp;lt;br&amp;gt;&lt;br /&gt;
of &#039;&amp;lt;tt&amp;gt;VE_PRIVATE&amp;lt;/tt&amp;gt;&#039; variable to point to a new private area location&lt;br /&gt;
4) &amp;lt;tt&amp;gt;vzctl start 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
5) update backups if needed: &amp;lt;tt&amp;gt;mvbackups 123 virtX virt1 vz&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
6) update management screens&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notes: a) absolute path to private area is stored in quota file &amp;lt;tt&amp;gt;/var/vzquota/quota.123&amp;lt;/tt&amp;gt; - so during first startup quota will be recalculated.&amp;lt;br&amp;gt;&lt;br /&gt;
b) if you&#039;re going to write a script to do the job, you MUST be sure that $VEID won&#039;t be expanded to &#039;&#039; in the ve config file - i.e. you need to escape the &#039;$&#039;. Otherwise you might end up with:&lt;br /&gt;
&lt;br /&gt;
 VE_PRIVATE=&amp;quot;/vz/private/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in config, and &#039;vzctl destroy&#039; for this VE ID &#039;&#039;&#039;will remove everything under /vz/private/ directory&#039;&#039;&#039;.&lt;br /&gt;
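The pitfall is ordinary shell expansion, easy to reproduce without touching a real conf file:

```shell
# With $VEID unset (as it is for a plain script), an unescaped reference
# expands to nothing, so the conf line ends up pointing at /vz/private/
# itself; escaping the $ preserves the literal placeholder.
unset VEID
unescaped="VE_PRIVATE=\"/vz/private/$VEID\""
escaped="VE_PRIVATE=\"/vz/private/\$VEID\""
echo "$unescaped"   # VE_PRIVATE="/vz/private/"      <- the dangerous case
echo "$escaped"     # VE_PRIVATE="/vz/private/$VEID"
```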
&lt;br /&gt;
== Adding a veth device to a VE ==&lt;br /&gt;
&lt;br /&gt;
Not totally sure what this is, but a customer asked for it and here&#039;s what we did (as instructed by vz support):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;v set 99 --netif_add eth99  --save&lt;br /&gt;
ipdel 99 69.55.230.58&lt;br /&gt;
v set 99 --ifname eth99 --ipadd 69.55.230.58 --save&lt;br /&gt;
v set 99 --ifname eth99 --gateway 69.55.230.1 --save&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active      eth0              veth77.77,veth99.99&lt;br /&gt;
veth_net          active&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
br99             bridge     net99&lt;br /&gt;
veth99.99        veth       net99&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        net99            69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br0 eth0&lt;br /&gt;
&lt;br /&gt;
(will remove 99 from orig net and move to veth_net)&lt;br /&gt;
vznetcfg net addif veth_net veth99.99&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
(delete the old crap)&lt;br /&gt;
vznetcfg net del net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then, to add another device in&lt;br /&gt;
&lt;br /&gt;
v set 77 --netif_add eth77  --save&lt;br /&gt;
ipdel 77 69.55.230.78&lt;br /&gt;
v set 77 --ifname eth77 --ipadd 69.55.230.78 --save&lt;br /&gt;
v set 77 --ifname eth77 --gateway 69.55.230.1 --save&lt;br /&gt;
v set 77 --save --ifname eth77 --network veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif veth_net veth77.77&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth       veth_net&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
another example&lt;br /&gt;
&lt;br /&gt;
v set 1182 --netif_add eth1182  --save&lt;br /&gt;
ipdel 1182 69.55.236.217&lt;br /&gt;
v set 1182 --ifname eth1182 --ipadd 69.55.236.217 --save&lt;br /&gt;
v set 1182 --ifname eth1182 --gateway 69.55.236.1 --save&lt;br /&gt;
vznetcfg net addif veth_net veth1182.1182&lt;br /&gt;
v set 1182 --save --ifname eth1182 --network veth_net&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unused/not working commands:&lt;br /&gt;
ifconfig veth99.0 0&lt;br /&gt;
vznetcfg net list&lt;br /&gt;
vznetcfg br new br99 net99&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg br show&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
&lt;br /&gt;
vznetcfg br new br1182 net1182&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif eth0 net1182&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&lt;br /&gt;
--- 8&amp;lt; ---&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new net&lt;br /&gt;
# vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vzctl set 99 --save --netif_add eth0 (at this stage veth99.0 interface have to appear&lt;br /&gt;
on node)&lt;br /&gt;
# vzctl set 99 --save --ifname eth0 --ipadd 69.55.230.58 (and probably few more arguments&lt;br /&gt;
here - see &#039;man vzctl&#039;)&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Assigning/removing an ip from a VE ==&lt;br /&gt;
&lt;br /&gt;
1. Add or remove ips:&lt;br /&gt;
 ipdel 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
 ipadd 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
&lt;br /&gt;
2. update Mgmt screens&lt;br /&gt;
&lt;br /&gt;
3. offer to update any DNS we do for them&lt;br /&gt;
&lt;br /&gt;
4. check to see if we had rules for the old IP in the firewall&lt;br /&gt;
&lt;br /&gt;
== Enabling tun device for a ve ==&lt;br /&gt;
Note, there’s a command for this: [[#addtun|addtun]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
Make sure the tun.o module is already loaded before Virtuozzo is started: &lt;br /&gt;
 lsmod &lt;br /&gt;
Allow the VPS to use the TUN/TAP device: &lt;br /&gt;
 vzctl set 101 --devices c:10:200:rw --save &lt;br /&gt;
Create the corresponding device inside the VPS and set the proper permissions: &lt;br /&gt;
 vzctl exec 101 mkdir -p /dev/net &lt;br /&gt;
 vzctl exec 101 mknod /dev/net/tun c 10 200 &lt;br /&gt;
 vzctl exec 101 chmod 600 /dev/net/tun&lt;br /&gt;
&lt;br /&gt;
== Remaking a system (on same virt) ==&lt;br /&gt;
&lt;br /&gt;
1. [[#cancelve|cancelve]] (or v destroy x - ONLY if you&#039;re POSITIVE no data needs to be saved)&lt;br /&gt;
&lt;br /&gt;
2. [[#vemake|vemake]] using same veid&lt;br /&gt;
&lt;br /&gt;
3. [[#mvbackups|mvbackups]] or [[#vb|vb]] (if new mount point)&lt;br /&gt;
&lt;br /&gt;
4. update mgmt with new dir/ip &lt;br /&gt;
&lt;br /&gt;
5. update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Re-initialize quota for a VE ==&lt;br /&gt;
&lt;br /&gt;
There’s a command for this now: [[#clearquota|clearquota]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
&lt;br /&gt;
vzctl stop 1&lt;br /&gt;
vzquota drop 1&lt;br /&gt;
vzctl start 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Traffic accounting on linux ==&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - all tracking is done via bwdb now. This is how we used to track traffic.&lt;br /&gt;
&lt;br /&gt;
TODO: update for diff versions of vz&lt;br /&gt;
&lt;br /&gt;
Unlike FreeBSD, where we have to add firewall count rules to the system to count the traffic, on Virtuozzo the system counts the traffic for us.  You can see the current traffic stats by running `vznetstat`:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see the VEID is on a line with the in and out bytes.  So, we simply run a cron job:&lt;br /&gt;
&lt;br /&gt;
 4,9,14,19,24,29,34,39,44,49,55,59 * * * * /root/vztrafdump.sh&lt;br /&gt;
&lt;br /&gt;
Just like we do on FreeBSD - this one goes through all the VEs in /vz/private and greps the line from vznetstat that matches them and dumps it in /jc_traffic_dump on their system.  Then it does it again for all the VEs in /vz1/private.  It is important to note that vznetstat runs only once, and the grepping is done from a temporary file that contains that output - we do this because running vznetstat once for each VE that we read out of /vz/private and /vz1/private would take way too long and be too intensive.&lt;br /&gt;
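A minimal reconstruction of the grep-from-one-dump approach (the real vztrafdump.sh isn't reproduced here; two sample vznetstat lines stand in for live output):

```shell
# Dump the stats once to a temp file, then pull each VE's line out of that
# dump - far cheaper than re-running vznetstat once per VE.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
24218     1            484M             39M
2451      1           2224M            265M
EOF
matched=0
for veid in 24218 2451; do
    if grep -q "^$veid " "$tmp"; then
        matched=$((matched + 1))  # real script appends to /jc_traffic_dump
    fi
done
rm -f "$tmp"
```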
&lt;br /&gt;
You do not need to do anything to facilitate this other than make sure that that cron job is running - the vznetstat counters are always running, and any new VEs that are added to the system will be accounted for automatically.&lt;br /&gt;
&lt;br /&gt;
Traffic resetting no longer works with vz 2.6, so we disable the vztrafdump.sh on those virts.&lt;br /&gt;
&lt;br /&gt;
== Watchdog script ==&lt;br /&gt;
&lt;br /&gt;
On some of the older virts, we have a watchdog running that kills procs that are deemed bad per the following:&lt;br /&gt;
&lt;br /&gt;
/root/watchdog from quar1&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [5-9]...&lt;br /&gt;
  then&lt;br /&gt;
# 50-90%&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;...:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;....m&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 1000min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
  if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [1-9]...&lt;br /&gt;
  then&lt;br /&gt;
# running for 10-90 percent&lt;br /&gt;
    if echo $line | awk &#039;{print $NF}&#039; | egrep &#039;cfusion|counter|vchkpw&#039;&lt;br /&gt;
    then&lt;br /&gt;
&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[2-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# between 2-9min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      elif echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[0-9][0-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# up to 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
    fi&lt;br /&gt;
  fi&amp;lt;/pre&amp;gt;&lt;br /&gt;
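The excerpt assumes a surrounding loop that sets $line and $pid from ps output; that loop isn't shown, so here is a hedged reconstruction of the first test (50-90% CPU, running over 99 minutes) that prints instead of kills:

```shell
# Two fake ps-style lines: PID in field 2, %CPU in field 3, elapsed time
# in the next-to-last field. Only the long-running high-CPU process
# should match, since the watchdog requires both conditions.
ps_sample='root  4321 72.3  1.1   0:12 httpd
root  5678 61.0  0.5 123:45 perl'
flagged=$(echo "$ps_sample" | while read -r line; do
    pid=$(echo "$line" | awk '{print $2}')
    cpu=$(echo "$line" | awk '{print int($3)}')
    elapsed=$(echo "$line" | awk '{print $(NF-1)}')
    # same idea as the grep "...:.." test above: >=50% CPU, >99min runtime
    if [ "$cpu" -ge 50 ] && echo "$elapsed" | grep -q '...:..'; then
        echo "$pid"               # the real watchdog does kill -9 here
    fi
done)
echo "would kill: $flagged"
```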
&lt;br /&gt;
&lt;br /&gt;
== Misc Linux Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space ... when you configure a linux system with a certain amount of disk space (the default is 4gigs) you do not actually use up 4gigs of space on the system.  The diskspace setting for a user is simply a cap, and they only use up as much space on the actual disk drive as they are actually using.&lt;br /&gt;
&lt;br /&gt;
When you create a new linux system, even though there are some 300 RPMs or so installed, if you run `df -k` you will see that the entire 4gig partition is empty - no space is being used.  This is because the files in their system are &amp;quot;magic symlinks&amp;quot; to the template for their OS that is in /vz/template - however, any changes to any of those files will &amp;quot;disconnect&amp;quot; them and they will immediately begin using space in their system.  Further, any new files uploaded (even if those new files overwrite existing files) will take up space on the partition.&lt;br /&gt;
&lt;br /&gt;
=== Can&#039;t change file /etc/sysconfig/network ===&lt;br /&gt;
if you see this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt8 root]# vzctl stop 160 ; vzctl start 160&lt;br /&gt;
VE is not running&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is unmounted&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
bash ERROR: Can&#039;t change file /etc/sysconfig/network&lt;br /&gt;
Deleting IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
VE is unmounted&lt;br /&gt;
[root@virt8 root]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
it probably means they no longer have /bin/bash - copy one in for them&lt;br /&gt;
 &lt;br /&gt;
ALSO: another possibility is that they have removed the `ed` RPM from their system - it needs to be reinstalled into their system.  But since their system is down, this is tricky ...&lt;br /&gt;
&lt;br /&gt;
VE startup scripts used by &#039;vzctl&#039; want the &#039;ed&#039; package to be available inside the VE. So if the &#039;ed&#039; package is enabled in the OS template config and in the OS template that VE #827 is based on, this error should be fixed.&lt;br /&gt;
&lt;br /&gt;
Yes, it is possible to add an RPM to a VE while it is not running.&lt;br /&gt;
Try the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# cd /vz/template/&amp;lt;OS_template_with_ed_package&amp;gt;/&lt;br /&gt;
# vzctl mount 827&lt;br /&gt;
# rpm -Uvh --root /vz/root/827 --veid 827 ed-0.2-25.i386.vz.rpm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usually there&#039;s an error, but it&#039;s OK.&lt;br /&gt;
&lt;br /&gt;
Note: replace &#039;ed-0.2-25.i386.vz.rpm&#039; in the last command with the actual&lt;br /&gt;
version of the &#039;ed&#039; package you have.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== What template is user using ===&lt;br /&gt;
So how do I know what template the user has? cat their conf file and it is listed in there.  For example, if the conf file has:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1103&lt;br /&gt;
…snip…&lt;br /&gt;
OSTEMPLATE=&amp;quot;debian-3.0/20030822&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;mod_perl-deb30/20030707 mod_ssl-deb30/20030703 mysql-deb30/20030707 proftpd-deb30/20030703 webmin-deb30/20030823 &amp;quot;&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then they are on debian 3.0, all of their system RPMs are in /vz/template/debian-3.0, and they are using version 20030822 of that debian 3.0 template. They’ve also got additional packages installed (mod_perl, mod_ssl, etc.); those are also found under /vz/template.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Edits needed to run java ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When we first created the VEs, the default setting for privvmpages was 93000:94000 ... which was high enough that most people never had problems ... however, you can&#039;t run java or the jdk or tomcat or anything java-related with that setting.  We have found that java runs just fine with privvmpages set to 610000:615000, so that is now the default setting. It is exceedingly rare that anyone needs it higher than that, although we have seen it once or twice.&lt;br /&gt;
&lt;br /&gt;
Any problems with java at all - the first thing you need to do is check whether the failcnt has risen for privvmpages.&lt;br /&gt;
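Checking is just a matter of reading the failcnt column (the two-line excerpt below is a made-up stand-in for the real /proc/user_beancounters, which has more columns and resources):

```shell
# The last column is failcnt; any nonzero value means the VE ran into
# that resource's limit. privvmpages failures are the classic java case.
bc='privvmpages   610000  610000  610000  615000  0
kmemsize      2100000 2100000 2100000 2200000  4'
fails=$(echo "$bc" | awk '$NF > 0 {print $1}')
echo "limits hit: ${fails:-none}"
```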
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzctl start 160&lt;br /&gt;
Starting VE ...&lt;br /&gt;
vzquota : (error) Quota on syscall for 160: Device or resource busy&lt;br /&gt;
Running vzquota on failed for VE 160 [3]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is because your pwd is _in_ their private directory - you can&#039;t start it until you move out.&lt;br /&gt;
&lt;br /&gt;
People seem to have trouble with php if they are clueless newbies.  Here are two common problems/solutions:&lt;br /&gt;
&lt;br /&gt;
no... but i figured it out myself. problem was the php.ini file that came&lt;br /&gt;
vanilla with the account was not configured to work with apache (the&lt;br /&gt;
ENGINE directive was set to off).&lt;br /&gt;
&lt;br /&gt;
everything else seems fine now.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
the problem was in the php.ini file.  I noticed that it wasn&#039;t showing&lt;br /&gt;
the code when it was in an html file so I looked at the php.ini file&lt;br /&gt;
and had to change it so it recognized &amp;lt;? tags as well as &amp;lt;?php tags.&lt;br /&gt;
&lt;br /&gt;
Also, make sure this is added to httpd.conf:&lt;br /&gt;
    AddType application/x-httpd-php .php&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Set time zone ===&lt;br /&gt;
&lt;br /&gt;
You can change the time zone like this:&lt;br /&gt;
&lt;br /&gt;
 ln -sf /usr/share/zoneinfo/&amp;lt;zone&amp;gt; /etc/localtime&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;zone&amp;gt; is the zone you want in the /usr/share/zoneinfo/ directory.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Failing shm_open calls ===&lt;br /&gt;
&lt;br /&gt;
If shm_open calls are failing, first check whether /dev/shm is mounted inside the VE.&lt;br /&gt;
The &#039;cat /proc/mounts&#039; command should show something like this:&lt;br /&gt;
 tmpfs /dev/shm tmpfs rw 0 0&lt;br /&gt;
&lt;br /&gt;
If /dev/shm is not mounted, there are two ways to solve the issue:&lt;br /&gt;
1. execute the following command inside the VE (doesn&#039;t require a VE reboot):&lt;br /&gt;
 mount -t tmpfs none /dev/shm&lt;br /&gt;
2. add the following line to /etc/fstab inside the VE and reboot it:&lt;br /&gt;
 tmpfs         /dev/shm        tmpfs           defaults        0 0&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You can have a VE that is mounted but not running. Just run:&lt;br /&gt;
 vzctl mount &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Debian can&#039;t get on the network ===&lt;br /&gt;
When a debian sys can’t get on the network, and you try:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set 1046 --ipadd 69.55.227.117&lt;br /&gt;
Adding IP address(es): 69.55.227.117&lt;br /&gt;
Failed to bring up lo.&lt;br /&gt;
Failed to bring up venet0.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They probably removed iproute package, which must be the one from swsoft. To restore:&lt;br /&gt;
&amp;lt;pre&amp;gt;# dpkg -i --veid=1046 --admindir=/vz1/private/1046/root/var/lib/dpkg --instdir=/vz1/private/1046/root/ /vz/template/debian-3.0/iproute_20010824-8_i386.vz.deb&lt;br /&gt;
(Reading database ... 16007 files and directories currently installed.)&lt;br /&gt;
Preparing to replace iproute 20010824-8 (using .../iproute_20010824-8_i386.vz.deb) ...&lt;br /&gt;
Unpacking replacement iproute ...&lt;br /&gt;
Setting up iproute (20010824-8) ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then restart their ve&lt;br /&gt;
&lt;br /&gt;
=== CentOS can&#039;t get on the network ===&lt;br /&gt;
&lt;br /&gt;
When a CentOS system was running but can&#039;t get on the network, they probably updated iproute via yum.&lt;br /&gt;
&lt;br /&gt;
Copy an older version of the iproute rpm to their VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cp -p /root/iproute/iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm /vz2/private/&amp;lt;1527&amp;gt;/fs/root&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enter their ve, install rpm, restart network, and prevent yum from updating iproute.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ve &amp;lt;1527&amp;gt;&lt;br /&gt;
rpm -i --force iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm&lt;br /&gt;
rpm -q iproute&lt;br /&gt;
rpm -e &amp;lt;new iproute rpm&amp;gt;&lt;br /&gt;
service network restart&lt;br /&gt;
echo &amp;quot;exclude=iproute&amp;quot; &amp;gt;&amp;gt;/etc/yum/ip-exclude&lt;br /&gt;
echo &amp;quot;include=/etc/yum/ip-exclude&amp;quot; &amp;gt;&amp;gt;/etc/yum.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
In a ve I do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /&lt;br /&gt;
du -h .&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and get: 483M    .&lt;br /&gt;
&lt;br /&gt;
i do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;bash-2.05a# df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  2.3G  1.7G  56% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
how can this be?&lt;br /&gt;
&lt;br /&gt;
Is it possible that the quota file was corrupted somehow? Please try:&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl stop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota drop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota init &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzctl start &amp;lt;VEID&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
How to stop vz from starting after reboot:&lt;br /&gt;
&lt;br /&gt;
 VIRTUOZZO=no &lt;br /&gt;
in &lt;br /&gt;
 /etc/sysconfig/vz&lt;br /&gt;
&lt;br /&gt;
To start: &lt;br /&gt;
 service vz start&lt;br /&gt;
(after setting VIRTUOZZO=yes in /etc/sysconfig/vz)&lt;br /&gt;
&lt;br /&gt;
service vz restart will do some kind of &#039;soft reboot&#039; -- restart all&lt;br /&gt;
VPSes and reload modules without rebooting the node&lt;br /&gt;
&lt;br /&gt;
if you need to shut down all VPSes really really fast, run killall -9 init&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Postfix tip:&lt;br /&gt;
&lt;br /&gt;
You may want to tweak settings: default_process_limit=10&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* uptime/load that vz users see isn’t what we see - it&#039;s their own load&lt;br /&gt;
* Vz overwrites resolv.conf on every boot with what’s in the ve config file. It also overwrites the hostname&lt;br /&gt;
* Quotas in vz: set the Gid_limit in the ve config file and restart the ve. The errors leading up to this will look like &amp;quot;out of files&amp;quot; errors even though there are no inode or space issues. Set it to 2000 in the conf and restart the ve&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
== vm ==&lt;br /&gt;
&lt;br /&gt;
To create a new Linux VPS&lt;br /&gt;
&lt;br /&gt;
* get an IP address from mgmt screen&lt;br /&gt;
* run df -h to figure out which disk has enough space&lt;br /&gt;
* run vm with no arguments to get syntax and exact template name&lt;br /&gt;
* use col0xxxx for hostname if they don’t give you a hostname&lt;br /&gt;
* copy over dir, veid, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
 vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; )&lt;br /&gt;
&lt;br /&gt;
== disable ==&lt;br /&gt;
To disable a customer&#039;s Virtuozzo VPS server.&lt;br /&gt;
&lt;br /&gt;
* stop the server&lt;br /&gt;
  vzctl stop &amp;lt;veid&amp;gt;&lt;br /&gt;
* prevent restart on server reboot and user restart via Control Panel&lt;br /&gt;
  vzctl set &amp;lt;veid&amp;gt; --disabled=yes --offline_management=no --save&lt;br /&gt;
&lt;br /&gt;
== cancelve ==&lt;br /&gt;
When a customer cancels all service.&lt;br /&gt;
&lt;br /&gt;
 cancelve &amp;lt;veid&amp;gt;&lt;br /&gt;
this will:&lt;br /&gt;
* stop a ve&lt;br /&gt;
* check for backups (offer to remove them from the backup server &lt;br /&gt;
and the backup.config)&lt;br /&gt;
* rename the private dir&lt;br /&gt;
* check for PTR, provide the commands to reset to default&lt;br /&gt;
* and rename the ve’s config&lt;br /&gt;
* remind you to remove firewall rules&lt;br /&gt;
* remind you to remove DNS entries&lt;br /&gt;
&lt;br /&gt;
== ipadd ==&lt;br /&gt;
 ipadd  &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
adds ip(s) to a ve&lt;br /&gt;
&lt;br /&gt;
== ipdel ==&lt;br /&gt;
 ipdel &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
removes ip(s) from a ve&lt;br /&gt;
&lt;br /&gt;
== vc ==&lt;br /&gt;
 vc &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
display the contents of the vz config file for the &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;cat /vzconf/&amp;lt;veid&amp;gt;.conf&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vl ==&lt;br /&gt;
 vl&lt;br /&gt;
&lt;br /&gt;
List all virtual environments running on this machine.&lt;br /&gt;
&lt;br /&gt;
Displays a list of ve #’s, one per line (ostensibly to use in a for loop).&lt;br /&gt;
&lt;br /&gt;
== vp ==&lt;br /&gt;
 vp &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
show processes running under this &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzps auxww -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vpe ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vpe &amp;lt;veid&amp;gt; &lt;br /&gt;
this will allow you to do a vp when a ve is running out of control, the equivalent of (deprecated since vp operates outside the VPS): &lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set &amp;lt;veid&amp;gt; --kmemsize 2100000:2200000&lt;br /&gt;
vzctl exec &amp;lt;veid&amp;gt; ps auxw&lt;br /&gt;
vzctl set &amp;lt;veid&amp;gt; --kmemsize (ve’s orig lvalue):(ve’s orig hvalue)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vt ==&lt;br /&gt;
 vt &amp;lt;veid&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vztop -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vr ==&lt;br /&gt;
 vr &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart a &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl stop &amp;lt;veid&amp;gt;; vzctl start &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
You can run this even if the ve is down - the stop command will just fail&lt;br /&gt;
&lt;br /&gt;
== vs ==&lt;br /&gt;
 vs [veid]&lt;br /&gt;
displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve configured on the system (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;)&lt;br /&gt;
If passed an argument, gives the status for just that ve. &lt;br /&gt;
A running ve looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 16066 exist mounted running&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running (but does exist) looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 9990 exist unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running and doesn’t exist looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 421 deleted unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
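A quick way to act on just the down ve’s is to filter this output. A minimal sketch (the sample status lines from above are hard-coded so it runs anywhere; in practice you would pipe &amp;lt;tt&amp;gt;vs&amp;lt;/tt&amp;gt; itself into the awk):&lt;br /&gt;

```shell
# Hard-coded sample of `vs` output (the three states shown above).
vs_output='VEID 16066 exist mounted running
VEID 9990 exist unmounted down
VEID 421 deleted unmounted down'

# Print the IDs of VEs that exist on disk but are down
# (field 3 is the exist/deleted flag, field 5 is the run state).
echo "$vs_output" | awk '$3 == "exist" { if ($5 == "down") print $2 }'
```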
&lt;br /&gt;
== vs2 ==&lt;br /&gt;
 vs2 [veid]&lt;br /&gt;
this is similar to vs in that it displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve,&lt;br /&gt;
but its list comes from doing an ls on the data dirs. This was meant to catch&lt;br /&gt;
the rare case where a ve exists on disk but is not configured.&lt;br /&gt;
&lt;br /&gt;
== vw ==&lt;br /&gt;
 vw [veid]&lt;br /&gt;
displays the output of ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;’ (the equivalent of &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; w&amp;lt;/tt&amp;gt;) for each configured ve (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;). Useful for determining which ve is contributing to a heavily-loaded system.&lt;br /&gt;
If passed an argument, gives ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;‘ output for just that ve. &lt;br /&gt;
Ex:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt2 etc]# vw&lt;br /&gt;
134&lt;br /&gt;
 10:52pm  up 79 days,  6:14,  0 users,  load average: 0.02, 0.02, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16027&lt;br /&gt;
  2:52pm  up 7 days, 19:54,  0 users,  load average: 0.00, 0.00, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16055&lt;br /&gt;
  2:52pm  up 79 days,  6:38,  0 users,  load average: 0.00, 0.04, 0.07&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vwe ==&lt;br /&gt;
 vwe [constraint]&lt;br /&gt;
just like &amp;lt;tt&amp;gt;vw&amp;lt;/tt&amp;gt;, but takes a constraint as an argument and only shows ve’s with loads &amp;gt;= the constraint provided. If no constraint is provided, 1 is used by default&lt;br /&gt;
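The filtering vwe performs can be sketched like this (illustrative only - the sample &amp;lt;tt&amp;gt;vw&amp;lt;/tt&amp;gt; output from above is hard-coded, and the 0.01 threshold is purely so the demo data produces a match; the real default is 1):&lt;br /&gt;

```shell
# Parse vw-style output: a bare number is a VE ID, and the line after it
# carries the load averages. Print IDs whose 1-minute load meets the
# threshold (0.01 here, just for this demo).
printf '%s\n' \
  '134' \
  ' 10:52pm  up 79 days,  6:14,  0 users,  load average: 0.02, 0.02, 0.00' \
  '16027' \
  '  2:52pm  up 7 days, 19:54,  0 users,  load average: 0.00, 0.00, 0.00' |
awk -v min=0.01 '
  /^[0-9]+$/ { ve = $0; next }
  /load average:/ {
    load = $(NF-2)          # 1-minute value, e.g. "0.02,"
    sub(",", "", load)
    if (load + 0 >= min + 0) print ve
  }'
```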
&lt;br /&gt;
== vzs ==&lt;br /&gt;
 vzs [veid]&lt;br /&gt;
displays the beancounter status for all ve’s, or a particular ve if an argument is passed&lt;br /&gt;
&lt;br /&gt;
== ve ==&lt;br /&gt;
 ve &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
enter a &amp;lt;veid&amp;gt; as root.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl enter &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vx ==&lt;br /&gt;
 vx &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&lt;br /&gt;
&lt;br /&gt;
execute a single command in the &amp;lt;veid&amp;gt; environment.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;/usr/sbin/vzctl exec &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs [count]&lt;br /&gt;
a script which outputs a continuous report (or a certain number of reports if an option is passed) of processes stuck in the D state and which VPS’s those procs belong to.&lt;br /&gt;
&lt;br /&gt;
== setmem ==&lt;br /&gt;
 setmem VEID &amp;lt;256|512|768|1024|1280|1536|2048&amp;gt; [burst GB]&lt;br /&gt;
adjusts the memory resources for the VE. If this is on VZ ver &amp;gt;4, you will also be prompted to provide a burst ram amount. Sample usage:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# setmem 21671 1024 4&lt;br /&gt;
/usr/sbin/vzctl set 21671 --ram 1024m --swap 3072m --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== afacheck.sh ==&lt;br /&gt;
 afacheck.sh&lt;br /&gt;
displays the health/status of containers and mirrors on an Adaptec card (currently quar1, tempvirt1-2, virt9, virt10) - all others are LSI&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update virt scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be set up as a cronjob to watch processes on a virt when the load&lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== findbackuppigs.pl ==&lt;br /&gt;
 findbackuppigs.pl&lt;br /&gt;
looks for files larger than 50MB which customers have asked us to backup. Emails matches&lt;br /&gt;
to linux@johncompanies.com&lt;br /&gt;
&lt;br /&gt;
== gatherlinux.pl ==&lt;br /&gt;
 gatherlinux.pl&lt;br /&gt;
gathers up data about ve’s configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficgather.pl ==&lt;br /&gt;
 linuxtrafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by ve to support@johncompanies.com and payments@johncompanies.com.&lt;br /&gt;
Optional arguments are year and month (must be in the past). If not passed, assumes last month. Relies on &lt;br /&gt;
traffic logs created by netstatreset and netstatbackup&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch.pl ==&lt;br /&gt;
 linuxtrafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo versions &amp;lt;= 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch2.pl ==&lt;br /&gt;
 linuxtrafficwatch2.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo version 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg  - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== mb (linux) ==&lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== migrate ==&lt;br /&gt;
 migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; – vzmigrate is a util to seamlessly move a ve from one host to another. This wrapper was written because virtuozzo version 2.6 had a bug where the ve’s ip(s) on the src system were not properly removed from the arp/route tables. This script mitigates that. Since it makes multiple ssh connections to the target host, it’s a good idea to put the pub key for the src system in the authorized_keys file on the target host. In addition, it emails ve owners when their migration starts and stops (if they place email addresses in a file on their system: /migrate_notify). To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
== migrateonline ==&lt;br /&gt;
 migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is the same as migrate but will migrate a ve in &amp;lt;tt&amp;gt;--online&amp;lt;/tt&amp;gt; mode, which means it won’t be shut down at the end of the migration. This only works when migrating ve’s between 2 machines running a 2.6 kernel (currently tempvirt1-2, virt16-19, virt12). If you get an error that the machine you’re trying to migrate to has a different CPU or features, etc, then you have to edit the file and add the -f switch to the vzmigrate line - you can basically ignore this kind of warning (but never ignore a warning about missing templates on the destination node). NOTE: This edit (if made to migrateonline) will be overwritten by the base script during each night’s backup.&lt;br /&gt;
&lt;br /&gt;
== netstatbackup ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup &lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== netstatbackup2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== netstatreset ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset&lt;br /&gt;
writes traffic count data to a logfile and resets counters to 0. Works on virtuozzo versions 2.5.x &lt;br /&gt;
&lt;br /&gt;
== netstatreset2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatchlinux ==&lt;br /&gt;
 orphanedbackupwatchlinux &lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to &lt;br /&gt;
delete them&lt;br /&gt;
&lt;br /&gt;
== rsync.backup (linux) ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== startvirt.pl ==&lt;br /&gt;
 startvirt.pl&lt;br /&gt;
forks off start ve commands - keeps 6 running at a time. This is not to be used on systems where fastboot is enabled, as it circumvents the benefit of the fastboot. The script occasionally doesn’t exit cleanly and will continue to use up CPU, so it should be watched; you may have to ^C out of it. Don’t exit from the script until you’re sure all ve’s are started - if you do, you may need to start them manually and free up locks. Be careful: ^C’ing out can leave some VE’s in an odd bootup state, so check which ve’s aren’t running and ‘vr’ them manually if needed.&lt;br /&gt;
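The “keep 6 running at a time” behavior is the classic bounded-parallelism pattern; a minimal sketch with xargs (echo stands in for the real &amp;lt;tt&amp;gt;vzctl start&amp;lt;/tt&amp;gt;, and the VE IDs are made up):&lt;br /&gt;

```shell
# Start up to 6 VEs concurrently; echo stands in for `vzctl start`
# so this sketch is runnable anywhere.
printf '%s\n' 101 102 103 104 |
  xargs -P 6 -I {} echo "vzctl start {}"
```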
&lt;br /&gt;
== taskdone (linux) ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was &lt;br /&gt;
executed as the subject&lt;br /&gt;
&lt;br /&gt;
== vb (linux) ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/sbin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vemakeXX ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemakerh9 &lt;br /&gt;
ve create script for RH9 (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian30 &lt;br /&gt;
ve create script for debian 3.0 (Woody) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian31 &lt;br /&gt;
ve create script for debian 3.1 (Sarge) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian40 &lt;br /&gt;
ve create script for debian 4.0 (Etch) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakefedora, vemakefedora2, vemakefedora4, vemakefedora5, vemakefedora6, vemakefedora7&lt;br /&gt;
ve create script for fedora core 1, 2, 4, 5, 6, 7 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakecentos3, vemakecentos4&lt;br /&gt;
ve create script for centos 3, 4 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakesuse, vemakesuse93, vemakesuse100&lt;br /&gt;
ve create script for suse 9.2, 9.3, 10.0 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakeubuntu5, vemakeubuntu606, vemakeubuntu610, vemakeubuntu704&lt;br /&gt;
ve create script for ubuntu 5.10, 6.06, 6.10, 7.04 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
== vemove ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemove &amp;lt;veid&amp;gt; &amp;lt;target_ip&amp;gt; &amp;lt;/vz/private/123&amp;gt;&lt;br /&gt;
this script simplifies the old way of moving ve’s from one system to another - in short moving a ve to or from a virt running virtuozzo &amp;lt; 2.6.x&lt;br /&gt;
It’s the equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;tar cfpP - &amp;lt;veid&amp;gt; --ignore-failed-read | (ssh -2 -c arcfour &amp;lt;target_ip&amp;gt; &amp;quot;split - -b 1024m &amp;lt;/vz/private/123&amp;gt;.tar&amp;quot; )&amp;lt;/tt&amp;gt;&lt;br /&gt;
This should only be used if migrate/vzmigrate can’t be used.&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog ==&lt;br /&gt;
 vim.watchdog &lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu. Works on virtuozzo versions 2.5.x&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog2 ==&lt;br /&gt;
 vim.watchdog2&lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu.&lt;br /&gt;
Works on virtuozzo versions 2.6.x.&lt;br /&gt;
&lt;br /&gt;
== vzmigrate ==&lt;br /&gt;
 vzmigrate &amp;lt;target_ip&amp;gt; -r no &amp;lt;veid&amp;gt;:[dst veid]:[dst /vzX/private/veid]:[dst /vzX/root/veid]&lt;br /&gt;
(this is the raw command “wrapped” by migrate/migrateonline) this will seamlessly move a ve from one host to another. The ve will run for the duration of the migration till the very end when it’s shut down, ip moved and started up on the target system. The filesystem on the src will remain. This should be watched – occasionally the move will timeout and leave the system shut down. If target private and root aren’t specified it just puts it in /vz. Only works when both systems are running virtuozzo 2.6.x&lt;br /&gt;
&lt;br /&gt;
== vztrafdump.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions &amp;lt;= 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== vztrafdump2.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump2.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== addtun ==&lt;br /&gt;
 addtun &amp;lt;veid&amp;gt;&lt;br /&gt;
Adds a tun device to a ve.&lt;br /&gt;
&lt;br /&gt;
== bwcap ==&lt;br /&gt;
 bwcap &amp;lt;veid&amp;gt; &amp;lt;kbps&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;bwcap 1234 512&amp;lt;/tt&amp;gt;&lt;br /&gt;
Caps a VE’s bandwidth to the amount given&lt;br /&gt;
&lt;br /&gt;
== setdisk ==&lt;br /&gt;
 setdisk &amp;lt;veid&amp;gt; &amp;lt;diskspace in GB&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;setdisk 1234 5&amp;lt;/tt&amp;gt;&lt;br /&gt;
Gives a VE a given amount of disk space&lt;br /&gt;
&lt;br /&gt;
== vdf ==&lt;br /&gt;
 vdf &amp;lt;veid&amp;gt; &lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; df -h&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vdff ==&lt;br /&gt;
 vdff&lt;br /&gt;
runs a (condensed) vdf for all ve’s in your pwd (must be run from /vz/privateN)&lt;br /&gt;
&lt;br /&gt;
== mvbackups ==&lt;br /&gt;
 mvbackups &amp;lt;veid&amp;gt; &amp;lt;target_machine&amp;gt; (virt1) &amp;lt;target_dir&amp;gt; (vz1)&lt;br /&gt;
moves backups from one location to another on the backup server, provides you with the option to remove entries from the current backup.config, and gives a simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== checkquota ==&lt;br /&gt;
 checkquota&lt;br /&gt;
for all the ve’s in the cwd (run from /vz/private, /vz1/private, etc) reports what vz quota says they’re using and what the actual usage is (as reported by du)&lt;br /&gt;
&lt;br /&gt;
== clearquota ==&lt;br /&gt;
 clearquota &amp;lt;veid&amp;gt;&lt;br /&gt;
Recalculates a ve’s quota, prints out the usage before and after. The equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;vdf &amp;lt;veid&amp;gt;; v stop &amp;lt;veid&amp;gt;; vzquota drop &amp;lt;veid&amp;gt;; v start &amp;lt;veid&amp;gt;; vdf &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs&lt;br /&gt;
Sometimes the servers have a large number of processes stuck in the D state - this script shows (every 3 secs) which VE’s have D procs, which procs&lt;br /&gt;
are stuck, and a running average of the top “offenders”&lt;br /&gt;
&lt;br /&gt;
== vzstat ==&lt;br /&gt;
 vzstat&lt;br /&gt;
sort of like top for VZ. Sort VEs by CPU usage by pressing the &#039;o&#039; and then &#039;c&#039; keys&lt;br /&gt;
&lt;br /&gt;
== stopvirt ==&lt;br /&gt;
 stopvirt&lt;br /&gt;
will stop VEs as fast as it can, 6 at a time. May not exit when complete so you should watch [[#vzstat|vzstat]] in another window.&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2225</id>
		<title>VPS Management</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2225"/>
		<updated>2020-04-30T20:17:10Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Ganeti Re-Intsall a VPS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Common Problems =&lt;br /&gt;
== Login to any machine without a password ==&lt;br /&gt;
&lt;br /&gt;
This is possible via the use of ssh keys. The process is thus:&lt;br /&gt;
&lt;br /&gt;
1. place the public key for your user (root@mail) in the /root/.ssh/authorized_keys file on the server you wish to login to&lt;br /&gt;
 cat /root/.ssh/id_dsa.pub&lt;br /&gt;
(paste that into authorized_keys on the target server). If the file doesn&#039;t exist, create it.&lt;br /&gt;
&lt;br /&gt;
2. enable root login (usually only applies to FreeBSD). Edit the /etc/ssh/sshd_config on the target server and change:&lt;br /&gt;
&amp;lt;tt&amp;gt;#PermitRootLogin no&amp;lt;/tt&amp;gt;&lt;br /&gt;
to&lt;br /&gt;
&amp;lt;tt&amp;gt;PermitRootLogin yes&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Restart the sshd on the target machine. First, find the sshd process: &lt;br /&gt;
 jailps &amp;lt;hostname&amp;gt; | grep sshd &lt;br /&gt;
or &lt;br /&gt;
 vp &amp;lt;VEID&amp;gt; | grep sshd&lt;br /&gt;
&lt;br /&gt;
Look for the process resembling:&lt;br /&gt;
 root     17296  0.0  0.0  5280 1036 ?        Ss    2011   4:27 /usr/sbin/sshd &lt;br /&gt;
(this is the sshd)&lt;br /&gt;
&lt;br /&gt;
Not:&lt;br /&gt;
 root      6270  0.5  0.0  6808 2536 ?        Ss   14:33   0:00 sshd: root [priv]&lt;br /&gt;
(this is an sshd child- someone already ssh&#039;d in as root)&lt;br /&gt;
&lt;br /&gt;
Restart the sshd: &lt;br /&gt;
 kill -1 &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex:&lt;br /&gt;
 kill -1 17296&lt;br /&gt;
&lt;br /&gt;
You may now ssh in.&lt;br /&gt;
&lt;br /&gt;
Once you&#039;re done, IF you enabled root login, you should repeat steps 2 and 3 to disable root logins.&lt;br /&gt;
&lt;br /&gt;
== Letting someone in who has locked themselves out (killed sshd, lost pwd) ==&lt;br /&gt;
&lt;br /&gt;
There are two ways people frequently lock themselves out - either they forget a password, or they kill off sshd somehow.&lt;br /&gt;
&lt;br /&gt;
These are actually both fairly easy to solve.  First, let&#039;s say someone kills off their sshd, or somehow mangles /etc/ssh/sshd_config such that it no longer lets them in.&lt;br /&gt;
&lt;br /&gt;
Their email may be very short, or it may have all sorts of details about how you should fix sshd_config to let them in ... just ignore all of this. They can fix their own mangled sshd.  Fixing this is very simple.  First, edit the /etc/inetd.conf on their system and uncomment the telnet line:&lt;br /&gt;
&lt;br /&gt;
 telnet stream  tcp     nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
 #telnet stream  tcp6    nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
&lt;br /&gt;
(just leave the tcp6 version of telnet commented)&lt;br /&gt;
&lt;br /&gt;
Then, use jailps to list the processes on their system, and find their inetd process.  Then simply:&lt;br /&gt;
&lt;br /&gt;
 kill -HUP (pid)&lt;br /&gt;
&lt;br /&gt;
where (pid) is the PID of their inetd process.  Now they have telnet running on their system and they can log in and do whatever they need to do.&lt;br /&gt;
&lt;br /&gt;
The only complications that could occur are:&lt;br /&gt;
&lt;br /&gt;
a) their firewall config on our firewall has port 23 blocked, in which case you will need to open that - will be covered in a different lesson.&lt;br /&gt;
&lt;br /&gt;
b) they are not running inetd, so you can&#039;t HUP it.  If this happens, edit their /etc/rc.conf, add the inetd_enable=&amp;quot;YES&amp;quot; line, and then kill&lt;br /&gt;
their jail with /tmp/jailkill.pl - then restart their jail with the jail line from their quad/safe file.  Easy.&lt;br /&gt;
&lt;br /&gt;
If they have forgotten a password,&lt;br /&gt;
&lt;br /&gt;
On 6.x+ you can reset their password with:&lt;br /&gt;
 jexec &amp;lt;jailID from jls&amp;gt; passwd root&lt;br /&gt;
&lt;br /&gt;
Note: the default password for 6.x jails is 8ico2987, for 4.x it is p455agfa&lt;br /&gt;
&lt;br /&gt;
On 4.x, you need to cd to their etc directory&lt;br /&gt;
... for instance:&lt;br /&gt;
&lt;br /&gt;
 cd /mnt/data2/198.78.65.136-col00261-DIR/etc&lt;br /&gt;
&lt;br /&gt;
and run:&lt;br /&gt;
&lt;br /&gt;
 vipw -d .&lt;br /&gt;
&lt;br /&gt;
Then paste in these two lines (there&#039;s a paste with these):&lt;br /&gt;
&lt;br /&gt;
 root:$1$krszPxhk$xkCepSnz3mIikT3vCtJCt0:0:0::0:0:Charlie &amp;amp;:/root:/bin/csh&lt;br /&gt;
 user:$1$Mx9p5Npk$QdMU6c8YQqp2FW2M3irEh/:1001:1001::0:0:User &amp;amp;:/home/user:/bin/sh&lt;br /&gt;
&lt;br /&gt;
overwriting the lines they already have for &amp;quot;user&amp;quot; and &amp;quot;root&amp;quot; - then just tell them that both user and root have been reset to the default password of p455agfa.&lt;br /&gt;
&lt;br /&gt;
For linux, just passwd inside shell or &lt;br /&gt;
 vzctl set &amp;lt;veid&amp;gt; --userpasswd root:p455agfa --save&lt;br /&gt;
&lt;br /&gt;
Starting in 2009 we began giving out randomized passwords for FreeBSD and Linux as the default password. That password is stored with each system in Mgmt. In the event of a reset, you should look for it and reset the password back to that value, then refer the customer to their original password from their welcome email - this way we don’t have to send the password again via email (in clear text).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== sendmail can’t be contacted from ext ip (only locally) ==&lt;br /&gt;
&lt;br /&gt;
By default redhat puts this line in sendmail.mc:&lt;br /&gt;
&lt;br /&gt;
 DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
which makes it only answer on localhost.  Comment it out like:&lt;br /&gt;
&lt;br /&gt;
 dnl DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
and then rebuild sendmail.cf with:&lt;br /&gt;
&lt;br /&gt;
 m4 /etc/mail/sendmail.mc &amp;gt; /etc/sendmail.cf&lt;br /&gt;
&lt;br /&gt;
== virt doesn’t properly let go of ve’s ip(s) when moved to another system ==&lt;br /&gt;
&lt;br /&gt;
On virtuozzo 2.6 systems, it&#039;s been observed that when moving ips from one virt to another, sometimes the routing table will not get updated to reflect the removal of the ip addresses.&lt;br /&gt;
&lt;br /&gt;
A recent example was a customer that was moving to a new ve on a new virt and the ip addresses were traded between the two ve&#039;s.  After the trade the two systems were not able to talk to each other.  When looking at the routing table for the old system all the ip addresses were still in the routing table as being local, like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;netstat -rn | grep 69.55.225.149&lt;br /&gt;
69.55.225.149   0.0.0.0         255.255.255.255 UH       40 0          0 venet0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was preventing traffic to the other system from being routed properly.&lt;br /&gt;
The solution is to manually delete the route:&lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.225.149 gw 0.0.0.0&lt;br /&gt;
&lt;br /&gt;
Supposedly, this was fixed in 2.6.1&lt;br /&gt;
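If several ips are stranded, you can generate the cleanup commands from the routing table itself. A sketch (the netstat line from the example above is hard-coded here; review the generated commands before running them as root):&lt;br /&gt;

```shell
# Emit `route delete` commands for venet0 host routes that point at
# gateway 0.0.0.0 (the stale-local-route signature described above).
stale='69.55.225.149   0.0.0.0         255.255.255.255 UH       40 0          0 venet0'

echo "$stale" |
  awk '$8 == "venet0" { if ($2 == "0.0.0.0") print "route delete " $1 " gw 0.0.0.0" }'
```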
&lt;br /&gt;
== sshd on FreeBSD 6.2 segfaults ==&lt;br /&gt;
&lt;br /&gt;
First try to reinstall ssh&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/secure&lt;br /&gt;
cd lib/libssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.sbin/sshd&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.bin/ssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Failing that, find the library that’s messed up:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;ldd /usr/sbin/sshd&lt;br /&gt;
         libssh.so.3 =&amp;gt; /usr/lib/libssh.so.3 (0x280a3000) &lt;br /&gt;
         libutil.so.5 =&amp;gt; /lib/libutil.so.5 (0x280d8000) &lt;br /&gt;
         libz.so.3 =&amp;gt; /lib/libz.so.3 (0x280e4000) &lt;br /&gt;
         libwrap.so.4 =&amp;gt; /usr/lib/libwrap.so.4 (0x280f5000) &lt;br /&gt;
         libpam.so.3 =&amp;gt; /usr/lib/libpam.so.3 (0x280fc000) &lt;br /&gt;
         libbsm.so.1 =&amp;gt; /usr/lib/libbsm.so.1 (0x28103000) &lt;br /&gt;
         libgssapi.so.8 =&amp;gt; /usr/lib/libgssapi.so.8 (0x28112000) &lt;br /&gt;
         libkrb5.so.8 =&amp;gt; /usr/lib/libkrb5.so.8 (0x28120000) &lt;br /&gt;
         libasn1.so.8 =&amp;gt; /usr/lib/libasn1.so.8 (0x28154000) &lt;br /&gt;
         libcom_err.so.3 =&amp;gt; /usr/lib/libcom_err.so.3 (0x28175000) &lt;br /&gt;
         libroken.so.8 =&amp;gt; /usr/lib/libroken.so.8 (0x28177000) &lt;br /&gt;
         libcrypto.so.4 =&amp;gt; /lib/libcrypto.so.4 (0x28183000) &lt;br /&gt;
         libcrypt.so.3 =&amp;gt; /lib/libcrypt.so.3 (0x28276000) &lt;br /&gt;
         libc.so.6 =&amp;gt; /lib/libc.so.6 (0x2828e000) &lt;br /&gt;
         libmd.so.3 =&amp;gt; /lib/libmd.so.3 (0x28373000)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
md5 them and compare to other jail hosts or jails running on the host&lt;br /&gt;
&lt;br /&gt;
for libcrypto reinstall:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/crypto&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Moving a server to a different account in the management system ==&lt;br /&gt;
&lt;br /&gt;
This is a tough one. You have to touch a bunch of tables in the mysql database.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  systems&lt;br /&gt;
  subs&lt;br /&gt;
  ipmap&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
basically, get the sysid and go off that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
update systems set cid=&#039;newcid&#039; where sysid=xxxxx ;&lt;br /&gt;
update subs set cid=&#039;newcid&#039; where sysid=xxxxx ;&lt;br /&gt;
update ipmap set cid=&#039;newcid&#039; where sysid=xxxxx ;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You then need to rename the server disk space to change the old ID to the new one.&lt;br /&gt;
&lt;br /&gt;
For a jail -&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stopjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;     69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;-DIR 69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;-DIR&lt;br /&gt;
vi /usr/local/jail/bin/quad1       [Update the ID in the quad file]&lt;br /&gt;
startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a virt -&lt;br /&gt;
You will have to &lt;br /&gt;
* stop the virt &lt;br /&gt;
* rename the directories &lt;br /&gt;
* modify the config file /vzconf/&amp;lt;veid&amp;gt;.conf &lt;br /&gt;
* rename the virt conf file &lt;br /&gt;
* restart the new virt.&lt;br /&gt;
&lt;br /&gt;
That ought to do it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Re-enable a customer account in the management system ==&lt;br /&gt;
&lt;br /&gt;
For example, to reinstate col03438, in mysql:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
show tables ;&lt;br /&gt;
select * from customers where cid=&#039;col03438&#039; ;&lt;br /&gt;
update customers set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from  systems where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set cancel_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from subs where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ganeti VPS =&lt;br /&gt;
&lt;br /&gt;
This is how to create a VPS on the Lamphost ganeti cluster.&lt;br /&gt;
I have created scripts for creating a FreeBSD VPS and one for&lt;br /&gt;
creating a Linux VPS.&lt;br /&gt;
&lt;br /&gt;
== Ganeti Install a VPS ==&lt;br /&gt;
&lt;br /&gt;
To create a linux VPS use&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/new &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
( for example /root/new vps.example.com 30 2 69.55.238.10 centos-8 )&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To create a FreeBSD VPS, we have to do a full install.  This will create a &lt;br /&gt;
new VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/newfreebsd12 &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will then have to do the install via VNC.&lt;br /&gt;
&lt;br /&gt;
You need to customize the install by adding a &amp;quot;jci&amp;quot; account and a customer&lt;br /&gt;
account (col0XXXX) as well.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
You will need to install sudo.&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
pkg install sudo    # install sudo package&lt;br /&gt;
visudo              # to add wheel account to sudoers&lt;br /&gt;
vi /etc/group       # add jci and user acct to wheel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You will also have to set up a console in /etc/ttys.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vi /etc/ttys&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Ganeti shutdown a VPS ==&lt;br /&gt;
&lt;br /&gt;
To shutdown a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance shutdown &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To permanently delete a ganeti VPS&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance remove &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Ganeti Re-Install a VPS ==&lt;br /&gt;
&lt;br /&gt;
Re-installing the OS will destroy all data on the VPS.&lt;br /&gt;
Login to the Ganeti master node (currently gn1).&lt;br /&gt;
&lt;br /&gt;
To reinstall the OS on a VPS, the VPS must be shut down.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance shutdown &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Do the reinstall.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance reinstall &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance startup &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to do the customization for the VPS.  This is&lt;br /&gt;
basically the same as the Dedicated server customization.&lt;br /&gt;
&lt;br /&gt;
 https://69.55.229.8/index.php/New_Signups#Dedicated_Server_Install_Checklist&lt;br /&gt;
&lt;br /&gt;
= FreeBSD 10.1 VPS (jail5 and jail6) =&lt;br /&gt;
&lt;br /&gt;
This is information on how to manage the FreeBSD 10.1 VPS servers on jail5 and jail6.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 virtmake IP[,IP] &amp;lt;CID&amp;gt; hostname ipfw# email {FSP|FB|FS|FM|FP} [size in GB]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
For now, the CID must be unique in its last 2 digits.&lt;br /&gt;
&lt;br /&gt;
You must do a &amp;quot;vm start &amp;lt;CID&amp;gt;&amp;quot; after creating a VPS.&lt;br /&gt;
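&lt;br /&gt;
As a sketch of the usage line above, a worked example (the IP, CID, hostname, ipfw#, email, and size here are all hypothetical, for illustration only):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# create a 40 GB FreeBSD (FB) VPS for hypothetical customer col01234&lt;br /&gt;
virtmake 69.55.229.45 col01234 vps.example.net 1234 cust@example.net FB 40&lt;br /&gt;
&lt;br /&gt;
# required after every virtmake&lt;br /&gt;
vm start col01234&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;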
&lt;br /&gt;
== Starting VPS ==&lt;br /&gt;
To start all vms on the system&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To start a single VPS &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== List running VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm status&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Attach to a user VPS console ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm attach &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To detach from a console, type tilde control-d (~ ctl-d).&lt;br /&gt;
&lt;br /&gt;
== Stop a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Restart a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm restart &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Modify a VPS ==&lt;br /&gt;
&lt;br /&gt;
To modify a VPS config, first stop the VPS:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Mount the VPS data file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm mount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
cd /data/vm/private/&amp;lt;CID&amp;gt;/mount&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Modify the user files as necessary.&lt;br /&gt;
&lt;br /&gt;
Then unmount the data file and restart the VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm umount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Notes on a new FBSD 10.x VPS (jail6) ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#! /bin/sh&lt;br /&gt;
#&lt;br /&gt;
#  new  &amp;lt;colo#&amp;gt;  &amp;lt;disk&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
memory=1024&lt;br /&gt;
&lt;br /&gt;
x=&amp;quot;$1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
disk=&amp;quot;$2&amp;quot;&lt;br /&gt;
if [ &amp;quot;y&amp;quot; = &amp;quot;y$disk&amp;quot; ] ; then    # default the disk size if no argument given&lt;br /&gt;
    disk=60&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
disk2=`expr $disk - 4`&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vm image provision 04dcdf96-d843-11e6-b131-001e67fcc64f col0$x&lt;br /&gt;
# vm configure col0$x&lt;br /&gt;
sed -e &amp;quot;s/memory=256M/memory=${memory}M/&amp;quot; /vms/col01114/col01114.conf &amp;gt;/vms/col0$x/col0$x.conf&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
# vm start col0$x&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
exit&lt;br /&gt;
 &lt;br /&gt;
vm console col0$x&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
# to increase the customer disk&lt;br /&gt;
#&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
&lt;br /&gt;
##  on the customer VPS&lt;br /&gt;
&lt;br /&gt;
gpart recover vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapoff /dev/vtbd0p3&lt;br /&gt;
gpart delete -i 3 vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
sysctl kern.geom.debugflags=16&lt;br /&gt;
gpart resize -i 2 -a 4k -s ${disk2}G vtbd0&lt;br /&gt;
&lt;br /&gt;
growfs /dev/vtbd0p2&lt;br /&gt;
&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
gpart add -t freebsd-swap -a 4k vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapon /dev/vtbd0p3&lt;br /&gt;
&lt;br /&gt;
df -h&lt;br /&gt;
reboot&lt;br /&gt;
&lt;br /&gt;
vi /etc/rc.conf&lt;br /&gt;
  hostname&lt;br /&gt;
  IP Addr&lt;br /&gt;
passwd root&lt;br /&gt;
passwd user&lt;br /&gt;
clear history&lt;br /&gt;
  cp /dev/null /var/log/utx.log&lt;br /&gt;
  cp /dev/null ~root/.history&lt;br /&gt;
  history -c&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= FreeBSD Jails =&lt;br /&gt;
&lt;br /&gt;
== Starting jails: Quad/Safe Files ==&lt;br /&gt;
&lt;br /&gt;
FreeBSD customer systems do not start up automatically at boot time.  When one of our FreeBSD machines boots, it brings up the base system and does nothing else. To start the jails, we put the commands to start each jail into one or more shell scripts and run the script(s). Jail startup is something that needs to be actively monitored, which is why we don’t just run the scripts automatically. More on monitoring later.&lt;br /&gt;
&lt;br /&gt;
NOTE: &amp;gt;=7.x we have moved to 1 quad file: &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. Startups are not done by running each quad, but rather [[#startalljails|startalljails]] which relies on the contents of &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. The specifics of this are lower in this article. What follows here applies for pre 7.x systems.&lt;br /&gt;
&lt;br /&gt;
There are eight files in &amp;lt;tt&amp;gt;/usr/local/jail/rc.d&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail3# ls /usr/local/jail/rc.d/&lt;br /&gt;
quad1   quad2   quad3   quad4   safe1   safe2   safe3   safe4&lt;br /&gt;
jail3#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There are four quad files and four safe files.&lt;br /&gt;
&lt;br /&gt;
Each file contains an even share of the system startup blocks (the total number of jails divided by 4).&lt;br /&gt;
 &lt;br /&gt;
The reason for this is that one large script starting all the systems at boot time would take too long: the first system in the script would start right after boot, which is great, but the last system might not start for another 20 minutes.&lt;br /&gt;
&lt;br /&gt;
Since there is no way to parallelize this during the startup procedure, we simply open four terminals (in screen window 9) and run one script in each terminal. This way they all run simultaneously, and the very last system in each startup script gets started in a quarter of the time it would take with one large file.&lt;br /&gt;
&lt;br /&gt;
The files are generally organized so that quad/safe 1&amp;amp;2 have only jails from disk 1, and quad/safe 3&amp;amp;4 have jails from disk 2. This helps ensure that only 2 fscks on any disk are going on at once. Further, they are balanced so that all quad/safe’s finish executing around the same time. We do this by making sure each quad/safe has a similar number of jails  and represents a similar number of inodes (see js).&lt;br /&gt;
&lt;br /&gt;
The other, very important reason we do it this way, and this is the reason there are quad files and safe files, is that in the event of a system crash, every single vn-backed filesystem that was mounted at the time of system crash needs to be fsck&#039;d.  However, fsck&#039;ing takes time, so if we shut the system down gracefully, we don&#039;t want to fsck.&lt;br /&gt;
&lt;br /&gt;
Therefore, we have two sets of scripts - the four quad scripts are identical to the four safe scripts except for the fact that the quad scripts contain fsck commands for each filesystem.&lt;br /&gt;
&lt;br /&gt;
So, if you shut a system down gracefully, start four terminals and run safe1 in window one, and safe2 in window 2, and so on.&lt;br /&gt;
 &lt;br /&gt;
If you crash, start four terminals (or go to screen window 9) and run quad1 in window one, and quad2 in window 2, and so on.&lt;br /&gt;
&lt;br /&gt;
Here is a snip of (a 4.x version) quad2 from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
fsck -y /dev/vn16&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#fsck -y /dev/vn28&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo ‘### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted’&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
fsck -y /dev/vn22&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#fsck -y /dev/vn15&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, two of the systems specified are commented out - presumably those customers cancelled, or were moved to new servers.&lt;br /&gt;
&lt;br /&gt;
Note that the vnconfig line is the simpler form of the command, not the longer one used when the filesystem was first configured.  All that is done is: vnconfig the filesystem, fsck it, then mount it. The fourth command is the `jail` command used to start the system; that will be covered later.&lt;br /&gt;
&lt;br /&gt;
Here is the safe2 file from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo ‘### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted’&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, it is exactly the same, but it does not have the fsck lines.&lt;br /&gt;
&lt;br /&gt;
Take a look at the last entry - note that the file is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
&lt;br /&gt;
and the mount point is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
&lt;br /&gt;
This is the general format on all the FreeBSD systems.  The file is always named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber&lt;br /&gt;
&lt;br /&gt;
and the directory is named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber-DIR&lt;br /&gt;
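&lt;br /&gt;
For example, a hypothetical customer col01234 on 69.55.228.99 (values illustrative only) would use:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.228.99-col01234        (the vn/md-backed filesystem file)&lt;br /&gt;
 /mnt/data2/69.55.228.99-col01234-DIR    (its mount point)&lt;br /&gt;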
&lt;br /&gt;
If you run a safe file when an fsck is needed, the mount will fail and so will the jail command:&lt;br /&gt;
&lt;br /&gt;
 # mount /dev/vn1c /mnt/data2/jails/65.248.2.131-ns1.kozubik.com-DIR&lt;br /&gt;
 mount: /dev/vn1c: Operation not permitted&lt;br /&gt;
&lt;br /&gt;
No reboot is needed; just run the corresponding quad script.&lt;br /&gt;
&lt;br /&gt;
Starting with 6.x jails, we added block delimiters to the quad/safe files, the block looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: nuie.solaris.mu&#039;&lt;br /&gt;
fsck -y /dev/concat/v30v31a&lt;br /&gt;
mount /dev/concat/v30v31a /mnt/data1/69.55.228.218-col01441-DIR&lt;br /&gt;
mount_devfs devfs /mnt/data1/69.55.228.218-col01441-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.228.218-col01441-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.228.218-col01441-DIR nuie.solaris.mu 69.55.228.218 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: nuie.solaris.mu&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These are more than just informative when running quad/safe’s: the echo lines MUST be present for certain tools to work properly. So it’s important that any update to the hostname also be made on the 2 echo lines. For example, if you try to startjail a jail with a hostname which is on the jail line but not the echo lines, the command will return with host not found.&lt;br /&gt;
&lt;br /&gt;
=== FreeBSD 7.x+ notes ===&lt;br /&gt;
&lt;br /&gt;
Starting with the release of FreeBSD 7.x, we are doing jail startups in a slightly different way. First, there is only 1 file: &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/quad1&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are no other quads or corresponding safe files. The reason for this is twofold: 1. we can pass -C to fsck, which tells it to skip the fsck if the fs is clean (no more need for safe files); 2. we have a new startup script which can be launched multiple times, running in parallel to start jails, with quad1 as the master jail file. &lt;br /&gt;
Quad1 could still be run as a shell script, but it would take a very long time to run completely, so it’s not advisable; alternatively, break it down into smaller chunks (like quad1, quad2, quad3, etc.)&lt;br /&gt;
&lt;br /&gt;
Here is a snip of (a 7.x version) quad1 from jail2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: projects.tw.com&#039;&lt;br /&gt;
mdconfig -a -t vnode -f /mnt/data1/69.55.230.46-col01213 -u 50&lt;br /&gt;
fsck -Cy /dev/md50c&lt;br /&gt;
mount /dev/md50c /mnt/data1/69.55.230.46-col01213-DIR&lt;br /&gt;
mount -t devfs devfs /mnt/data1/69.55.230.46-col01213-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.230.46-col01213-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.230.46-col01213-DIR projects.tw.com 69.55.230.46 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: projects.tw.com&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cancelled jails are no longer commented out and stored in quad1, rather they’re moved to &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/deprecated&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
To start these jails, open the 4 ssh sessions as you would for a normal crash, then run startalljails in each window instead of quad1-4. IMPORTANT: before running startalljails, make sure you have run preboot once, as it clears out all the lockfiles and enables startalljails to work properly.&lt;br /&gt;
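&lt;br /&gt;
As a sketch, the sequence looks like this (preboot is run once, from any one window, before anything else):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
preboot          # run ONCE per boot: clears stale lockfiles, preps jail startup&lt;br /&gt;
&lt;br /&gt;
startalljails    # then run this in each of the 4 windows, in parallel;&lt;br /&gt;
                 # each instance picks unstarted jails from quad1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;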
&lt;br /&gt;
== Problems with the quad/safe files ==&lt;br /&gt;
&lt;br /&gt;
When you run the quad/safe files, there are two problems that can occur - either a particular system will hang during initialization, OR a system will spit out output to the screen, impeding your ability to do anything.  Or both.&lt;br /&gt;
&lt;br /&gt;
First off, when you start a jail, you see output like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;Skipping disk checks ...&lt;br /&gt;
adjkerntz[25285]: sysctl(put_wallclock): Operation not permitted&lt;br /&gt;
Doing initial network setup:.&lt;br /&gt;
ifconfig: ioctl (SIOCDIFADDR): permission denied&lt;br /&gt;
lo0: flags=8049&amp;lt;UP,LOOPBACK,RUNNING,MULTICAST&amp;gt; mtu 16384&lt;br /&gt;
Additional routing options: TCP keepalive=YESsysctl:&lt;br /&gt;
net.inet.tcp.always_keepalive: Operation not permitted.&lt;br /&gt;
Routing daemons:.&lt;br /&gt;
Additional daemons: syslogd.&lt;br /&gt;
Doing additional network setup:.&lt;br /&gt;
Starting final network daemons:.&lt;br /&gt;
ELF ldconfig path: /usr/lib /usr/lib/compat /usr/X11R6/lib /usr/local/lib&lt;br /&gt;
a.out ldconfig path: /usr/lib/aout /usr/lib/compat/aout /usr/X11R6/lib/aout&lt;br /&gt;
Starting standard daemons: inetd cron sshd sendmail sendmail-clientmqueue.&lt;br /&gt;
Initial rc.i386 initialization:.&lt;br /&gt;
Configuring syscons: blanktime.&lt;br /&gt;
Additional ABI support:.&lt;br /&gt;
Local package initialization:.&lt;br /&gt;
Additional TCP options:.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, let&#039;s look at this line, near the end:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization:.&lt;br /&gt;
&lt;br /&gt;
This is where the list of daemons set to start at boot time will show up.  You might see something like:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache sendmail sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
Or something like this:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: postgres postfix apache&lt;br /&gt;
&lt;br /&gt;
The problem is that many systems (about 4-5 per machine) will hang on that line.  Basically, it will get partway through the daemons to be started:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache&lt;br /&gt;
&lt;br /&gt;
and will just sit there.  Forever.&lt;br /&gt;
&lt;br /&gt;
Fortunately, pressing ctrl-c will break out of it.  Not only will it break out of it, but it will also continue on that same line and start the other daemons:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache ^c sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
and then continue on to finish the startup, and then move to the next system to be started.&lt;br /&gt;
&lt;br /&gt;
So what does this mean?  It means that if a machine crashes, and you start four screen windows to run the four quads or four safes, you need to periodically cycle between them and see if any systems are stuck at that point, causing their quad/safe file to hang.  A good rule of thumb: if you see a system at that point in the startup, give it another 100 seconds; if it is still at the exact same spot, hit ctrl-c. It’s also a good idea to go back into the quad file (just before the first command in the jail startup block) and note that this jail tends to need a ctrl-c or more time, as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;### NOTE ### slow sendmail&#039;&lt;br /&gt;
echo &#039;### NOTE ###: ^C @ Starting sendmail.&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NEVER&#039;&#039;&#039; hit ctrl-c repeatedly if you don&#039;t get an immediate response - that will cause the following jail’s startup commands to be aborted.&lt;br /&gt;
&lt;br /&gt;
A second problem that can occur is that a jail - maybe the first one in that particular quad/safe, maybe the last one, or maybe one in the middle, will start spitting out status or error messages from one of its init scripts.  This is not a problem - basically, hit enter a few times and see if you get a prompt - if you do get a prompt, that means that the quad/safe script has already completed.  Therefore it is safe to log out (and log out of the user that you su&#039;d from) and then log back in (if necessary).&lt;br /&gt;
&lt;br /&gt;
The tricky thing is when a system in the middle starts flooding with messages, and you hit enter a few times and don&#039;t get a prompt.  Are you not getting a prompt because some subsequent system is hanging at initialization, as discussed above?  Or because that quad file is currently running an fsck?  Usually you can tell by scrolling back in screen’s history to see what it was doing before the messages started.&lt;br /&gt;
&lt;br /&gt;
If you don’t get clues from history, you have to use your judgement - instead of giving it 100 seconds to respond, perhaps give it 2-3 mins ... if you still get no response (no prompt) when you hit enter, hit ctrl-c.  However, be aware that you might still be hitting ctrl-c in the middle of an fsck.  This means you will get an error like &amp;quot;filesystem still marked dirty&amp;quot; and then the vnconfig for it will fail and so will the jail command, and the next system in the quad file will then start starting up.&lt;br /&gt;
&lt;br /&gt;
If this happens, just wait until the end of all the quad files have finished, and start that system manually.&lt;br /&gt;
&lt;br /&gt;
If things really get weird, like a screen flooded with errors, and you can&#039;t get a prompt, and ctrl-c does nothing, then eventually (give it ten mins or so) just kill that window with ctrl-p, then k, then log in again and manually check which systems are now running and which aren&#039;t, and manually start up any that are not.&lt;br /&gt;
&lt;br /&gt;
Don&#039;t EVER risk running a particular quad/safe file a second time.&lt;br /&gt;
If the quad/safe script gets executed twice, reboot the machine immediately.&lt;br /&gt;
&lt;br /&gt;
So, for all the above reasons, anytime a machine crashes and you run all the quads or all the safes, &#039;&#039;&#039;always&#039;&#039;&#039; check every jail afterwards to make sure it is running - even if you have no hangs or complications at all.&lt;br /&gt;
Run this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailpsall|jailpsall]]&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Note: [[#postboot|postboot]] also populates ipfw counts, so it &#039;&#039;&#039;should not be run multiple times&#039;&#039;&#039;; use &amp;lt;tt&amp;gt;jailpsall&amp;lt;/tt&amp;gt; for subsequent extensive ps’ing.&lt;br /&gt;
&lt;br /&gt;
And make sure they all show as running.  If one does not show as running, check its /etc/rc.conf file to see if it is using a different hostname before starting it manually.&lt;br /&gt;
&lt;br /&gt;
One thing we have implemented to alleviate these startup hangs and noisy jails is to put jail start blocks that are slow or prone to hanging at the bottom of the safe/quad file. Further, for each bad jail we note in each quad/safe, just before the start block, something like:&lt;br /&gt;
&lt;br /&gt;
 echo ‘### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted’&lt;br /&gt;
&lt;br /&gt;
That way we’ll be prepared to ^C when we see that message appear during the quad/safe startup process. If you observe a new, undocumented hang, &#039;&#039;&#039;after&#039;&#039;&#039; the quad/safe has finished, place a line similar to the above in the quad file, move the jail start block to the end of the file, then run [[#buildsafe|buildsafe]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 4.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which partition and IP to put it on, find an unused vn, and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vnX hostname shorthost quad/safe# ipfw# email [gb disk, default 4]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail14# /tmp/jailmake 69.55.226.152 /mnt/data2/69.55.226.152-col00182 vn23 negev.cerastes.org negev 4 182 cerastes@cerastes.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 6.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which gvinum vols are free (and which mnt (disk) each gvinum goes with/to), IP to put it on and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vN[,vN] hostname shorthost quad/safe# ipfw# email&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail19# jailmake 69.55.236.17 /mnt/data1/69.55.236.17-col01574 v10 uncle-leo.commadev.com uncle-leo 1 1574 lee@commadev.com,lee@gmail.com&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Changing an ip for freebsd VPS ==&lt;br /&gt;
&lt;br /&gt;
*stopjail (hostname)&lt;br /&gt;
*on systems using vnfile or mdfile: rename vnfile/mdfile, using new ip&lt;br /&gt;
*rename dir using new ip&lt;br /&gt;
*Edit quad (make sure to update all lines: directory, vn/mdfile)&lt;br /&gt;
*buildsafe (not necessary on systems having only quad1)&lt;br /&gt;
*if you&#039;re adding an ip not already on the host machine: &amp;lt;tt&amp;gt;ipadd x.x.x.x&amp;lt;/tt&amp;gt;&lt;br /&gt;
*startjail (hostname)&lt;br /&gt;
*if backups: &amp;lt;tt&amp;gt;mvbackups&amp;lt;/tt&amp;gt; ...&lt;br /&gt;
*Edit dir/ip in mgmt&lt;br /&gt;
*if any rules, update any firewall rules to use the new ip (make sure there aren&#039;t multiple rules per ipfw#; search by colo#)&lt;br /&gt;
*(if customer asks and has domains) update their domains on ns1c&lt;br /&gt;
*optional: update ipfw counters on the host server&lt;br /&gt;
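&lt;br /&gt;
Putting the steps above together, a sketch for a hypothetical vn-backed jail (the IPs, CID, hostname, and paths here are illustrative only):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stopjail vps.example.net&lt;br /&gt;
mv /mnt/data1/69.55.228.7-col0XXXX /mnt/data1/69.55.230.50-col0XXXX&lt;br /&gt;
mv /mnt/data1/69.55.228.7-col0XXXX-DIR /mnt/data1/69.55.230.50-col0XXXX-DIR&lt;br /&gt;
vi /usr/local/jail/rc.d/quad1    # update every line in the block: dir, vn/mdfile, IP&lt;br /&gt;
ipadd 69.55.230.50               # only if the new IP is not already on the host&lt;br /&gt;
startjail vps.example.net&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then finish the bookkeeping steps (mvbackups, mgmt, firewall, DNS) as listed above.&lt;br /&gt;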
&lt;br /&gt;
&lt;br /&gt;
== Rename a gconcat vol ==&lt;br /&gt;
&lt;br /&gt;
*stop jail, unmount everything&lt;br /&gt;
*gconcat stop vol&lt;br /&gt;
*gconcat clear /dev/gvinum/vN&lt;br /&gt;
*gconcat clear /dev/gvinum/vN+1&lt;br /&gt;
*gconcat label -v newname /dev/gvinum/vN /dev/gvinum/vN+1&lt;br /&gt;
*bsdlabel /dev/concat/newname&lt;br /&gt;
(make sure a partition is not fstype unused, if so change to: 4.2BSD)&lt;br /&gt;
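&lt;br /&gt;
A sketch of the steps above, renaming a concat of gvinum vols v30 and v31 (all volume names here are hypothetical):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gconcat stop oldname&lt;br /&gt;
gconcat clear /dev/gvinum/v30&lt;br /&gt;
gconcat clear /dev/gvinum/v31&lt;br /&gt;
gconcat label -v v30v31 /dev/gvinum/v30 /dev/gvinum/v31&lt;br /&gt;
bsdlabel /dev/concat/v30v31    # verify no partition has fstype &amp;quot;unused&amp;quot;; if so, change it to 4.2BSD&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;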
&lt;br /&gt;
&lt;br /&gt;
== Remaking a VPS (on same jail) ==&lt;br /&gt;
&lt;br /&gt;
# take him down (stopjail)&lt;br /&gt;
# make note of host, vn/gvin/md, ip&lt;br /&gt;
# remove from quad/safe&lt;br /&gt;
# move vnfile (rename)&lt;br /&gt;
# remove dir (jailmake will make a new one)&lt;br /&gt;
# remove ipfw counts&lt;br /&gt;
# jailmake&lt;br /&gt;
# update backup script&lt;br /&gt;
# update db with new dir/ip (use the “move” link, mark old as stopped today)&lt;br /&gt;
# update firewall if changed ip&lt;br /&gt;
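&lt;br /&gt;
The steps above, sketched for a hypothetical vn-backed customer (the IP, CID, and paths are illustrative only):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stopjail vps.example.net&lt;br /&gt;
# note the host, vn/gvinum/md device, and IP from js and the quad file&lt;br /&gt;
vi /usr/local/jail/rc.d/quad1                      # remove the jail&#039;s start block&lt;br /&gt;
mv /mnt/data1/69.55.228.7-col0XXXX /mnt/data1/69.55.228.7-col0XXXX.old&lt;br /&gt;
rm -r /mnt/data1/69.55.228.7-col0XXXX-DIR          # jailmake will make a new one&lt;br /&gt;
jailmake ...                                       # see the jailmake sections above&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then update the backup script, mgmt db, and firewall as listed above.&lt;br /&gt;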
&lt;br /&gt;
== Recovering from a crash (FreeBSD) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all jails back up as soon as possible. Note the time; you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log. If there are no messages, the machine may just be really busy; wait a bit (5-10 min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. Note: if you see messages about swap space exhausted, the server is obviously out of memory; however, it may recover briefly enough for you to get a jtop in to see who&#039;s launched a ton of procs (most likely) and then issue a quick jailkill to get it back under control.&lt;br /&gt;
&lt;br /&gt;
If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]], i.e. if you can’t ssh into the DRAC card (as root, using the standard root pass) and issue &lt;br /&gt;
 racadm serveraction hardreset&lt;br /&gt;
then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console:&lt;br /&gt;
 tip jailX&lt;br /&gt;
immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts- look out for the RAID card output- does everything look healthy? The output may be scrambled, look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple times during the boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly then you won’t get any console output, 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
IMPORTANT NOTE: on some older FreeBSD systems, there will be no output to the video (KVM) console as it boots up. The console output is redirected to the serial port ... so if a jail crashes, and you attach a kvm, the output during the bootup procedure will not be shown on the screen. However, when the bootup is done, you will get a login prompt on the screen and will be able to log in as normal.  &amp;lt;tt&amp;gt;/boot/loader.conf&amp;lt;/tt&amp;gt; is where serial console redirect output lives, so comment that if you want to catch output on kvm.&lt;br /&gt;
On newer systems it sends most output to both locations. &lt;br /&gt;
&lt;br /&gt;
=== Assess the health of the server ===&lt;br /&gt;
Once the server boots up fully, you should be able to ssh in. Look around- make sure all the mounts are there and reporting the correct size/usage (i.e. /mnt/data1 /mnt/data2 /mnt/data3 - look in /etc/fstab to determine which mount points should be there), check to see if RAID mirrors are healthy. See [[RAID_Cards#Common_CLI_commands_.28megacli.29|megacli]], [[#aaccheck|aaccheck]]&lt;br /&gt;
&lt;br /&gt;
Before you start the jails, you need to run [[#preboot|preboot]]. This will do some assurance checks to make sure things are prepped to start the jails. Any issues that come out of preboot need to be addressed before starting jails.&lt;br /&gt;
&lt;br /&gt;
=== Start jails ===&lt;br /&gt;
[[#Starting_jails:_Quad.2FSafe_Files|More on starting jails]]&lt;br /&gt;
Customer jails (the VPSs) do not start up automatically at boot time. When a FreeBSD machine boots, it brings up the base system and does nothing else. To start the jails, we put the commands to start each jail into one or more shell scripts and run the script(s). Jail startup is something that needs to be actively monitored, which is why we don’t just run the scripts automatically. &lt;br /&gt;
&lt;br /&gt;
In order to start jails, we run the quad files: quad1 quad2 quad3 and quad4 (on new systems there is only quad1). If the machine was cleanly rebooted (which wouldn&#039;t be the case if this was a crash), you may run the safe files (safe1 safe2 safe3 safe4) in lieu of the quads. &lt;br /&gt;
&lt;br /&gt;
Open up 4 logins to the server (use the windows in [[Screen#Screen_Organization|a9]])&lt;br /&gt;
In each of the 4 windows you will:&lt;br /&gt;
&lt;br /&gt;
If there is a [[#startalljails|startalljails]] script (and only quad1), run that command in each of the 4 windows. It will parse through the quad1 file and start each jail. Follow the instructions [[#Problems_with_the_quad.2Fsafe_files|here]] for monitoring startup. Note that you can be a little more lenient with jails that take a while to start; startalljails will work around the slow jails and start the rest. As long as there aren&#039;t 4 jails which are &amp;quot;hung&amp;quot; during startup, the rest will get started eventually.&lt;br /&gt;
	-or-&lt;br /&gt;
If there is no startalljails script, there will be multiple quad files. In each of the 4 windows, start each of the quads. i.e. start quad1 in window1, quad2 in window2 and so on. DO NOT start any quad twice. It will crash the server. If you accidentally do this, just jailkill all the jails which are in the quad and run the quad again. Follow the instructions here for monitoring quad startup.&lt;br /&gt;
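For orientation, a quad file is just a shell script of per-jail startup commands, with each jail&#039;s block bracketed by &amp;lt;tt&amp;gt;## begin ##&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;## end ##&amp;lt;/tt&amp;gt; marker lines. A hypothetical entry might look like the following sketch (the hostname, md number, paths and exact jail invocation are illustrative assumptions, not copied from a real quad file):&lt;br /&gt;

```shell
## begin example.jail.host ##
# attach the customer's md-backed filesystem (illustrative md number/paths)
mdconfig -a -t vnode -f /mnt/data1/69.55.234.66-col01334 -u 97
# mount it, plus a devfs for the jail
mount /dev/md97 /mnt/data1/69.55.234.66-col01334-DIR
mount -t devfs devfs /mnt/data1/69.55.234.66-col01334-DIR/dev
# start the jail (classic jail(8) syntax: path hostname ip command)
jail /mnt/data1/69.55.234.66-col01334-DIR example.jail.host 69.55.234.66 /bin/sh /etc/rc
## end example.jail.host ##
```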
&lt;br /&gt;
Note the time the last jail boots- this is what you will enter in the crash log.&lt;br /&gt;
&lt;br /&gt;
Save the crash log.&lt;br /&gt;
&lt;br /&gt;
=== Check to make sure all jails have started ===&lt;br /&gt;
There&#039;s a simple script which will make sure all jails have started, and enter the ipfw counter rules: [[#postboot|postboot]] &lt;br /&gt;
Run postboot, which will do a jailps on each jail it finds (excluding commented out jails) in the quad file(s). We&#039;re looking for 2 things:&lt;br /&gt;
# systems spawning processes out of control or running too many procs&lt;br /&gt;
# jails which haven&#039;t started&lt;br /&gt;
On 7.x and newer systems it will print out the problems (which jails haven&#039;t started) at the conclusion of postboot. &lt;br /&gt;
On older systems you will need to watch closely to see if/when there&#039;s a problem, namely:&lt;br /&gt;
 &lt;br /&gt;
 [hostname] doesnt exist on this server&lt;br /&gt;
&lt;br /&gt;
When you get this message, it means one of 2 things:&lt;br /&gt;
1. the jail really didn&#039;t start:&lt;br /&gt;
When a jail doesn&#039;t start it usually boils down to a problem in the quad file. Perhaps the path name is wrong (data1 vs data2) or the name of the vn/mdfile is wrong. Once this is corrected, you will need to run the commands from the quad file manually, or you may use &amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. the customer has changed their hostname (and not told us) so their jail &#039;&#039;is&#039;&#039; running, just under a different hostname:&lt;br /&gt;
On systems with jls, this is easy to rectify. First, get the customer info: &amp;lt;tt&amp;gt;g &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
Then look for the customer in jls: &amp;lt;tt&amp;gt;jls | grep &amp;lt;col0XXXX&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
From there you will see their new hostname- you should update that hostname in the quad file: don&#039;t forget to edit it on the &amp;lt;tt&amp;gt;## begin ##&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;## end ##&amp;lt;/tt&amp;gt; lines, and in mgmt. &lt;br /&gt;
On older systems without jls, this will be harder; you will need to look further to see their hostname- perhaps it&#039;s in their /etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once all jails are started, do some spot checks- try to ssh or browse to some customers, just to make sure things are really ok.&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 7.x/8.x jail ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (md) ==&lt;br /&gt;
&lt;br /&gt;
NOTE: this doesn’t apply to mx2, which uses gvinum. Use the same procedure as 6.x&lt;br /&gt;
NOTE: if you unmount before mdconfig, re-mdconfig (attach), then unmount, then mdconfig -u again &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
(parts to change/customize are &amp;lt;tt&amp;gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there&#039;s a paste for it; send it to them (it explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from &amp;lt;tt&amp;gt;js&amp;lt;/tt&amp;gt;. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new md on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. Unmount it (including their devfs) but leave the md config’d (so if you use stopjail, you will have to re-mdconfig it)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mdfile and mount name, and to see the current md device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. When there&#039;s enough room to place the new system on an alternate drive, or the same one:&lt;br /&gt;
USE CAUTION not to overwrite (touch, mdconfig) an existing md!!&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s 10g -f /mnt/data3/69.55.234.66-col01334 -u 97&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md97&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new space is on a different drive, move the mount point directory AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm the restored data size matches the original usage figure&lt;br /&gt;
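That size comparison can be scripted if you want a quick pass/fail; a minimal sketch, assuming usage figures in 1K blocks (as from &amp;lt;tt&amp;gt;df -k&amp;lt;/tt&amp;gt;) and an arbitrary 5% tolerance:&lt;br /&gt;

```shell
# Hedged sketch: sanity-check that restored usage roughly matches the original.
# Values are in 1K blocks as reported by df -k; the 5% tolerance is arbitrary.
size_check() {
    orig_kb=$1   # usage noted in step 2
    rest_kb=$2   # usage of the restored filesystem (df .)
    diff=$(( orig_kb - rest_kb ))
    if [ "$diff" -lt 0 ]; then diff=$(( -diff )); fi
    limit=$(( orig_kb / 20 ))            # 5% of the original
    if [ "$diff" -le "$limit" ]; then
        echo "size check OK"
    else
        echo "MISMATCH: orig=${orig_kb}K restored=${rest_kb}K"
    fi
}

# example: roughly 1 GB original vs. a slightly smaller restore
size_check 1048576 1040000
```

If it reports a mismatch, re-check the dump/restore before detaching or archiving the old mdfile.&lt;br /&gt;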
&lt;br /&gt;
&lt;br /&gt;
md-unconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
archive old mdfile. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/old-col00241-mdfile-noarchive-20091211&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (vq1) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6b. When there&#039;s not enough room on an alternate partition or on the same drive, but there is enough room if you were to remove the existing customer&#039;s space:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
(run &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, &amp;lt;tt&amp;gt;du&amp;lt;/tt&amp;gt; the dump file to confirm its size roughly matches the usage)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure and remove old mdfile&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
rm /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(there should now be enough space to recreate your bigger system. If not, run sync a couple times)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new system (ok to reuse old mdfile and md#):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;10&amp;lt;/span&amp;gt;g -f /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt; -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
umount nfs:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbu&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If md# changed (or mount point), edit quad (&amp;lt;tt&amp;gt;vq1&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. Optional: archive old mdfile&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;br&amp;gt;&lt;br /&gt;
gzip -c old-col01588-mdfile-noarchive-20120329 &amp;gt; /deprecated/old-col01588-mdfile-noarchive-20120329.gz&amp;lt;br&amp;gt;&lt;br /&gt;
mbu&amp;lt;br&amp;gt;&lt;br /&gt;
rm  old-col01588-mdfile-noarchive-20120329&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 6.x jail (gvinum/gconcat) ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (gvinum/gconcat) ==&lt;br /&gt;
&lt;br /&gt;
(parts to change are &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;highlighted&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there&#039;s a paste for it; send it to them (it explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new volume on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;[[#stopjail|stopjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;[[#g|g]] &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mount name and existing volume/device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5a. When there&#039;s enough room to place the new system on an alternate drive, or the same one (using only UNUSED gvinum volumes, including any in use by the system in question):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure the new device:&amp;lt;br&amp;gt;&lt;br /&gt;
A. for a 2G system (single gvinum volume):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;bsdlabel -r -w /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
-or- &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
B. for a &amp;gt;2G system (create a gconcat volume):&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Other valid gconcat examples:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label v82-v84v109v112 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v109 /dev/gvinum/v112&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat label v82v83 /dev/gvinum/v82 /dev/gvinum/v83&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Note that long names will truncate: v144v145v148-v115 will truncate to v144v145v148-v1 (so you will refer to it as v144v145v148-v1 thereafter)&lt;br /&gt;
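The truncation can be previewed from the shell before labeling; a minimal sketch, assuming the 15-character limit observed in the example above:&lt;br /&gt;

```shell
# Preview how a long gconcat label will truncate (observed limit: 15 chars).
name="v144v145v148-v115"
short=$(printf '%s' "$name" | cut -c1-15)
echo "$short"   # prints v144v145v148-v1
```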
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; OR &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
A. &amp;lt;tt&amp;gt;mount /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
-or-&amp;lt;br&amp;gt;&lt;br /&gt;
B. &amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;) , run &amp;lt;tt&amp;gt;buildsafe&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space (i.e. if you want/need to reuse the existing gvinum volumes and add on more):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
(run df to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;concat/v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, du the dump file to confirm its size roughly matches the usage)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure the old gconcat volume&amp;lt;br&amp;gt;&lt;br /&gt;
list member gvinum volumes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat list &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Output will resemble:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;Geom name: v106v107&lt;br /&gt;
State: UP&lt;br /&gt;
Status: Total=2, Online=2&lt;br /&gt;
Type: AUTOMATIC&lt;br /&gt;
ID: 3530663882&lt;br /&gt;
Providers:&lt;br /&gt;
1. Name: concat/v106v107&lt;br /&gt;
   Mediasize: 4294966272 (4.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e2&lt;br /&gt;
Consumers:&lt;br /&gt;
1. Name: gvinum/sd/v106.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 0&lt;br /&gt;
   End: 2147483136&lt;br /&gt;
2. Name: gvinum/sd/v107.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 2147483136&lt;br /&gt;
   End: 4294966272&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
stop volume and clear members&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat stop &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat clear &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;gvinum/sd/v106.p0.s0 gvinum/sd/v107.p0.s0&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new device- it&#039;s ok to reuse old/former members&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v106 /dev/gvinum/v107&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;), run buildsafe&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
TODO: clean up/clear old gvin/gconcat vol&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - steps to tack a new gvin onto an existing gconcat- this leads to a corrupted fs&lt;br /&gt;
bsdlabel -e /dev/concat/v82-v84&lt;br /&gt;
&lt;br /&gt;
To figure out new size of the c partition, multiply 4194304 by the # of 2G gvinum volumes and subtract the # of 2G volumes:&lt;br /&gt;
10G: 4194304 * 5 - 5 = 20971515&lt;br /&gt;
8G: 4194304 * 4 - 4 = 16777212&lt;br /&gt;
6G: 4194304 * 3 - 3 = 12582909&lt;br /&gt;
4G: 4194304 * 2 - 2 = 8388606&lt;br /&gt;
&lt;br /&gt;
To figure out the new size of the a partition, subtract 16 from the c partition:&lt;br /&gt;
10G: 20971515 - 16 = 20971499&lt;br /&gt;
8G: 16777212 - 16 = 16777196&lt;br /&gt;
6G: 12582909 - 16 = 12582893&lt;br /&gt;
4G: 8388606 - 16 = 8388590&lt;br /&gt;
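The same arithmetic can be generated for any volume count; a small shell sketch of the formulas above (2G gvinum volumes only, per the table):&lt;br /&gt;

```shell
# Compute bsdlabel partition sizes for N concatenated 2G gvinum volumes,
# using the formulas above: c = 4194304*N - N, a = c - 16.
part_sizes() {
    n=$1
    c=$(( 4194304 * n - n ))
    a=$(( c - 16 ))
    echo "${n} volumes: c=${c} a=${a}"
}

part_sizes 5   # 10G system
part_sizes 2   # 4G system
```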
&lt;br /&gt;
Orig:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a:  8388590       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c:  8388606        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
New:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a: 12582893       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c: 12582909        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
sync; sync&lt;br /&gt;
&lt;br /&gt;
growfs /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
sync&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
(keep running fsck&#039;s until there are NO errors)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 4.x jail ==&lt;br /&gt;
&lt;br /&gt;
NOTE: if you unmount before vnconfig, re-vnconfig (attach), then unmount, then vnconfig -u&lt;br /&gt;
 &lt;br /&gt;
If someone wants more disk space, there&#039;s a paste for it; send it to them.&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to dump to a different partition and create the new vn on the current partition. If there&#039;s no space to make the dump, then move the old vn&#039;s to backup or, worse, dump to backup.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. make the vnfile, newfs, and mount. USE CAUTION not to overwrite an existing vn. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;vnconfig -T -S 9g -s labels -c /dev/vn30 /mnt/data2/65.214.160.117-col00496&amp;lt;br&amp;gt;&lt;br /&gt;
disklabel -r -w vn30 auto&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/vn30c&amp;lt;br&amp;gt;&lt;br /&gt;
mkdir /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/vn30c /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;dump -0a -f - /dev/vn1 | restore -r -f -&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. &amp;lt;tt&amp;gt;rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. unmount and unvnconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;umount /dev/vnNNc &amp;lt;br&amp;gt;&lt;br /&gt;
vnconfig -u /dev/vnNN&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. move vnfile. Ex: &amp;lt;tt&amp;gt;mv /mnt/data1/69.55.237.26-col00241 /mnt/data1/old-col00241-vnfile-20110312-noarchive&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove old dir: &amp;lt;tt&amp;gt;rmdir /mnt/data1/69.55.237.26-col00241-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. edit quad/safe to point to new location, run &amp;lt;tt&amp;gt;[[#buildsafe|buildsafe]]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start jail: &amp;lt;tt&amp;gt;[[#startjail|startjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
13. mv backups if necessary&lt;br /&gt;
 &lt;br /&gt;
WARNING: if you restore to an unmounted vn, you&#039;re actually restoring to /mnt/data1&lt;br /&gt;
you can tar it up and repeat the steps above to recover&lt;br /&gt;
 &lt;br /&gt;
NOTE: you can&#039;t move a customer to a system whose base system differs from the one they came from (BSD 4.8 to 4.5 = won&#039;t work)&lt;br /&gt;
 &lt;br /&gt;
NOTE: we don&#039;t mount procfs on anyone&#039;s system by default.&lt;br /&gt;
&lt;br /&gt;
NOTE: mount_nulls: &lt;br /&gt;
goofy mount_null entries also show up in df -k on jail2&amp;lt;br&amp;gt;&lt;br /&gt;
they also need to be added in the safe and quad files&amp;lt;br&amp;gt;&lt;br /&gt;
if the customer wants another, use the safe/quad entry as an example and add it to the file. Also exec the command at the prompt to add the mount (no reboot, remount or re-jail necessary)&lt;br /&gt;
&lt;br /&gt;
== Moving customer to another jail machine ==&lt;br /&gt;
&lt;br /&gt;
Systems should only be moved between similarly-versioned jails. If you cannot get a match, you should go to freebsd.org and see what the differences were between the version you&#039;re moving from and the new version. Many of the updates will be to drivers and kernels - these don&#039;t affect the customer. Perhaps they updated traceroute or a library...you should copy the new/changed files into the VPS/jail on the new host. An alternate method for moving to a different version would be to buildworld to bring their full distribution up to date. Optional steps for this kind of upgrade are included below.&lt;br /&gt;
&lt;br /&gt;
1. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;[[#jailkill|jailkill]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. create new device on target system&amp;lt;br&amp;gt;&lt;br /&gt;
4.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig -T -S 4g -s labels -c /dev/vn1 /mnt/data1/69.55.22x.x-col00XXX&lt;br /&gt;
disklabel -r -w vn1 auto&lt;br /&gt;
newfs /dev/vn1c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;bsdlabel -r -w /dev/gvinum/v1&lt;br /&gt;
newfs /dev/gvinum/v1a&lt;br /&gt;
 or &lt;br /&gt;
gconcat label v1-v3 /dev/gvinum/v1 /dev/gvinum/v2 /dev/gvinum/v3&lt;br /&gt;
bsdlabel -r -w /dev/concat/v1-v3&lt;br /&gt;
newfs /dev/concat/v1-v3a&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7.x+:&amp;lt;br&amp;gt;&lt;br /&gt;
Run jailmakeempty, then skip steps 4 &amp;amp; 7 below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. make and mount the dir on new system. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;pre&amp;gt;mkdir /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
chmod 755 /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
mount [device] /mnt/data2/69.55.230.3-col00123-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. [[#stopjail|stopjail]] &amp;lt;hostname&amp;gt; 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. dump fs to new system. ex: &amp;lt;tt&amp;gt;[[#dumpremoterestore|dumpremoterestore]] /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&amp;lt;/tt&amp;gt; (make sure you can ssh as root on remote machine) &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. OPTIONAL BUILDWORLD:&amp;lt;br&amp;gt;&lt;br /&gt;
 cd /usr/src&lt;br /&gt;
 make world DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
 cd etc&lt;br /&gt;
 make distribution DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
(you may have to rm an openssh file and re-make dist)&lt;br /&gt;
&lt;br /&gt;
 rm -rf /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/periodic/daily/400.status-disks&lt;br /&gt;
 vi /etc/periodic/security/100.chksetuid&lt;br /&gt;
replace: &amp;lt;tt&amp;gt;MP=`mount -t ufs | grep -v &amp;quot; nosuid&amp;quot; | awk &#039;{ print $3 }&#039; | sort`&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
with: &amp;lt;tt&amp;gt;MP=&#039;/&#039; (use single quotes)&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ask user if they want ports overwritten with current, if yes:&lt;br /&gt;
 cp -r /usr/ports /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/usr&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d .&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d .&lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cat /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target:&lt;br /&gt;
 cat &amp;gt; /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. edit quad on source system and copy over the entries for the jail to the target system. &lt;br /&gt;
Take care that the vn/gvinum/gconcat/md devices on the target system aren&#039;t in use and that the /mnt/dataN path doesn&#039;t need changing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. run [[#buildsafe|buildsafe]] (if this is &amp;lt;=6.x) on target system (copies changes made to quad into safe file)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove ip from source system. Ex: &amp;lt;tt&amp;gt;ipdel 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. add ip to target system. Ex: &amp;lt;tt&amp;gt;ipadd 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start new system, with [[#startjail|startjail]] or manually, pasting entries found from running &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; on the new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. run &amp;lt;tt&amp;gt;[[#canceljail|canceljail]] col0xxxx&amp;lt;/tt&amp;gt; on source system. This should prompt you to remove backups if any existed. Do not let it do this; instead:&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
13. if backups existed move them to the new host via the &amp;lt;tt&amp;gt;[[#mvbackups_.28freebsd.29|mvbackups]] script&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
14. edit quad on source system, edit comment to reflect a move rather than a cancel, ex: &amp;lt;tt&amp;gt;# moved to jail2 col00241&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
15. edit mgmt to reflect new host and dir for new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
16. optionally add ipfw rules on the new system. Ex:&lt;br /&gt;
&amp;lt;tt&amp;gt;ipfw add 01231 count ip from 69.55.230.2 to any&lt;br /&gt;
ipfw add 01232 count ip from any to 69.55.230.2&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Increasing inodes for a VPS ==&lt;br /&gt;
&lt;br /&gt;
when doing the newfs: &lt;br /&gt;
 newfs -i  4096 ...&lt;br /&gt;
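The -i flag is bytes of data space per inode, so a smaller value yields more inodes. As a rough sketch (plain arithmetic, not actual newfs output), the inode count for a 4G image at -i 4096 works out to:&lt;br /&gt;

```shell
# bytes-per-inode arithmetic: one inode per 4096 bytes of data space
fs_bytes=$((4 * 1024 * 1024 * 1024))   # 4 GB image
bytes_per_inode=4096                   # the -i value
echo $((fs_bytes / bytes_per_inode))   # -> 1048576 inodes
```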
&lt;br /&gt;
&lt;br /&gt;
== Jail services unable to contact services within same jail ==&lt;br /&gt;
&lt;br /&gt;
This is due to a messed up routing table (only seen it on jail2 before). &lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.228.65/32&lt;br /&gt;
 route add -net 69.55.228.65/32 -iface fxp0 -nostatic -cloning&lt;br /&gt;
 route add 69.55.228.65 -iface lo0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== /dev/null permission resets == &lt;br /&gt;
&lt;br /&gt;
Applies to 4.x jails only&lt;br /&gt;
&lt;br /&gt;
For some reason that I do not understand at all, the /dev/null node in customer jails, after the system is restarted, often reverts permissions to 0600 ... which is bad, because non-root processes cannot redirect to /dev/null anymore.  In fact, a fair number of server daemons will complain and/or fail if this is the case.&lt;br /&gt;
&lt;br /&gt;
So, after a reboot, once all systems have restarted, run postboot, which does something like:&lt;br /&gt;
 &lt;br /&gt;
 for f in `df -k | grep /dev/vn | awk &#039;{print $6}&#039;` ; do chmod 0666 $f/dev/null ; done&lt;br /&gt;
&lt;br /&gt;
Later, after the system is up, if anyone ever complains about their /dev/null permissions for any reason ... really if anyone mentions&lt;br /&gt;
/dev/null in any way, tell them to run:&lt;br /&gt;
&lt;br /&gt;
 chmod 0666 /dev/null&lt;br /&gt;
&lt;br /&gt;
We have ultimately fixed this by adding chmods to the quad/safe files&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== mknod in a jail == &lt;br /&gt;
&lt;br /&gt;
For a customer who wants to run a chroot name server (named) in a jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /mnt/data1/&amp;lt;user directory&amp;gt;/var/named&lt;br /&gt;
mknod -c 1 1 null&lt;br /&gt;
mknod -c 2 3 random&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Postfix Problems ==&lt;br /&gt;
&lt;br /&gt;
This hasn&#039;t happened in a long time, and is probably no longer valid with modern OSes&lt;br /&gt;
&lt;br /&gt;
Postfix is an alternate MTA - people replace sendmail with it in much the same way that many people also replace sendmail with qmail.  The problem is that if you install postfix inside of a jail, by default it will not work properly - the user will see error messages like this:&lt;br /&gt;
&lt;br /&gt;
incoming mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 mail_queue_enter: create file incoming/298266.55902: File too large&lt;br /&gt;
&lt;br /&gt;
and further, sending mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 postdrop: warning: mail_queue_enter: create file maildrop/627930.56676: File too large&lt;br /&gt;
&lt;br /&gt;
This is very easy to solve.  In fact, the FreeBSD welcome email that is sent out by `jailmake` now contains this block of text:&lt;br /&gt;
&lt;br /&gt;
- if you plan on installing postfix, email us first and ask for a necessary patch.&lt;br /&gt;
&lt;br /&gt;
So, anytime you see a support email that says anything about postfix not working, paste that line from the welcome email into your response to them, and right away, forward the postfix patch to them in an email.&lt;br /&gt;
&lt;br /&gt;
Here are the instructions:&lt;br /&gt;
&lt;br /&gt;
Simply copy the patch into /usr/ports/mail/postfix/files, rename the file to ‘patch-file_limit.c’ and then cd to /usr/ports/mail/postfix and run `make install`.&lt;br /&gt;
&lt;br /&gt;
The postfix patch is the very first email in the support email box, and we never delete or save it because we always want it there to forward to people.  The instructions in the email are very clear, and it solves their problem 100% of the time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems with `find` ==&lt;br /&gt;
&lt;br /&gt;
There was a bug in the FreeBSD vn-filesystem code.  The bug causes the system to crash or hang when someone runs the `find` command inside their vn-backed filesystem (and even though the bug may have been fixed, due to the i/o overhead, we still don&#039;t like find).&lt;br /&gt;
&lt;br /&gt;
Now, this does not always happen - however, any jail machine with more than 20 systems on it will crash every night if every system on it runs the daily periodic script out of their crontab, and updates their locate database with the find command.&lt;br /&gt;
&lt;br /&gt;
(the locate database is a small, FreeBSD specific database that is populated nightly from a cron job, and is then used to provide fast answers to the `locate` command)&lt;br /&gt;
&lt;br /&gt;
In normal systems (32-40 jails on them), all that has to be done is make sure nobody runs the daily periodic.  This means that the file /etc/crontab, instead of looking like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# do daily/weekly/monthly maintenance&lt;br /&gt;
1      3       *       *       *       root    periodic daily&lt;br /&gt;
15     4       *       *       6       root    periodic weekly&lt;br /&gt;
30     5       1       *       *       root    periodic monthly&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Needs to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# DO NOT UNCOMMENT THESE - contact support@johncompanies.com for details&lt;br /&gt;
#1      3       *       *       *       root    periodic daily&lt;br /&gt;
#15     4       *       *       6       root    periodic weekly&lt;br /&gt;
#30     5       1       *       *       root    periodic monthly&lt;br /&gt;
## DO NOT UNCOMMENT THESE ^^^^^^^^&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the problem will generally not occur at all.  New filesystem images are always altered in this way, so all systems are set like this.&lt;br /&gt;
&lt;br /&gt;
However, if you have enough systems running, even casual uses of find, outside of the periodic scripts, can cause the system to crash.  We are seeing this on jail13, which crashes about once per week.&lt;br /&gt;
&lt;br /&gt;
There is not yet any resolution to this problem.&lt;br /&gt;
&lt;br /&gt;
So, if you have a freebsd system crash, see if anyone has uncommented their daily periodic line - you can see them for everyone by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;for f in `df -k | grep vn | awk &#039;{print $6}&#039;` ; do echo `cat $f/etc/crontab | grep &amp;quot;periodic daily&amp;quot;` $f ; done | more&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(all one line, one command)&lt;br /&gt;
&lt;br /&gt;
And then edit those /etc/crontabs and comment them out again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems un-mounting - and with mount_null’s ==&lt;br /&gt;
&lt;br /&gt;
If you cannot unmount a filesystem because it says the filesystem is busy, it is usually for one of these reasons:&lt;br /&gt;
&lt;br /&gt;
a) the jail is still running&lt;br /&gt;
&lt;br /&gt;
b) you are actually in that directory, even though the jail is stopped&lt;br /&gt;
&lt;br /&gt;
c) there are still dev, null_mount or linprocfs mount points mounted inside that directory.&lt;br /&gt;
&lt;br /&gt;
d) when trying to umount null_mounts that are really long and you get an error like “No such file or directory”, it’s an OS bug where the dir is truncated. No known fix&lt;br /&gt;
&lt;br /&gt;
e) there are still files open somewhere inside the dir. Use &amp;lt;tt&amp;gt;fstat | grep &amp;lt;cid&amp;gt;&amp;lt;/tt&amp;gt; to find the process that has files open&lt;br /&gt;
&lt;br /&gt;
f) Starting with 6.x, the jail mechanism does a poor job of keeping track of processes running in a jail and if it thinks there are still procs running, it will refuse to umount the disk. If this is happening you should see a low number in the #REF column when you run jls. In this case you &#039;&#039;can&#039;&#039; safely &amp;lt;tt&amp;gt;umount -f&amp;lt;/tt&amp;gt; the mount. &lt;br /&gt;
&lt;br /&gt;
Please note - if you forcibly unmount a (4.x) filesystem that has null_mounts&lt;br /&gt;
still mounted in it, the system &#039;&#039;&#039;will crash&#039;&#039;&#039; within 10-15 mins.&lt;br /&gt;
&lt;br /&gt;
== Misc jail Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space on jail2, jail8, jail9, a couple jails on jail17, jail4, jail12 and jail18.&lt;br /&gt;
Even though the vn file shows 4G size, it doesn’t actually occupy that amount of space on the disk. So be careful not to fill up drives where we’re overselling – use oversellcheck to confirm you’re not oversold by more than 10G.&lt;br /&gt;
There are other truncated jails; they are generally noted in the file /root/truncated on the root system&lt;br /&gt;
&lt;br /&gt;
The act of moving a truncated vn to another system undoes the truncation: the truncated vn is filled with 0&#039;s and occupies the full physical disk space for which it&#039;s configured. So, you should use dumpremote to preserve the truncation.&lt;br /&gt;
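A quick way to see the difference between apparent and allocated size (a generic sketch using a throwaway file, not an actual vn backing file):&lt;br /&gt;

```shell
# A sparse ("truncated") file reports its configured size to ls,
# but du shows only the blocks actually allocated on disk.
truncate -s 1G /tmp/sparse-demo.img
ls -l /tmp/sparse-demo.img | awk '{print $5}'   # apparent size: 1073741824
du -k /tmp/sparse-demo.img | awk '{print $1}'   # allocated 1K-blocks: ~0
rm /tmp/sparse-demo.img
```

Copying such a file byte-for-byte (as a plain dump to a raw device does) writes out all the zero blocks, which is why the truncation is lost on a naive move.&lt;br /&gt;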
&lt;br /&gt;
* if you are getting disk full messages for a BSD customer, it&#039;s fairly safe to clear out their /usr/ports/distfiles dir&lt;br /&gt;
* 4.x: ps and top can only be run by root in these jails.  Done on purpose:&lt;br /&gt;
As for `ps` and `top` - non-root users can run them, just not successfully because we have locked the permissions on /dev/mem and /dev/kmem to be root-readable only.  That is why non-root users cannot successfully run `ps` and `top`.&lt;br /&gt;
* user quotas do not work on freebsd jails - you cannot set up quotas at all, and that&#039;s that.&lt;br /&gt;
* You cannot inject a process into a 4.x jail, only HUP running processes. 6.x and onward you can with jexec&lt;br /&gt;
* jails see base machine’s uptime/load when running top/w&lt;br /&gt;
* if someone is unable to get in - can&#039;t ping, etc. - see if they were blocked by castle (DoS), and see if their ip is on the system (post-reboot it may have been lost because it wasn&#039;t in the rc.conf; preboot should catch that)&lt;br /&gt;
* in FreeBSD you can&#039;t su to root unless you belong to the wheel group - so if you remove your acct and setup a new one, we have to add it to the wheel group (add to /etc/group)&lt;br /&gt;
* Dmesg from the underlying sys is seen in the customer&#039;s dmesg in a jail&lt;br /&gt;
* Popper process going crazy @ 40% for 10min = someone who leaves mail on server&lt;br /&gt;
* Don&#039;t force umounts on 4.x jails - it crashes the machine; generally ok on newer machines (running md)&lt;br /&gt;
* Good book for admin http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?userid=t824VyRAYz&amp;amp;isbn=0596005164&amp;amp;itm=2&lt;br /&gt;
* Self-sign ssl cert http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#selfcert&lt;br /&gt;
* conversation with Glenn about semaphores and pgsql&lt;br /&gt;
&amp;lt;pre&amp;gt;SDBoody: hey, these are valid amounts/figures/increments right:&lt;br /&gt;
 kern.ipc.semmni=1280&lt;br /&gt;
 kern.ipc.semmns=1280&lt;br /&gt;
 &lt;br /&gt;
gr8feen: probably... I always forget exactly what those are, so I usually have to look them up&lt;br /&gt;
SDBoody: semaphores&lt;br /&gt;
SDBoody: i took the current 1024 and added 256 to them&lt;br /&gt;
SDBoody: need more for pgsql&lt;br /&gt;
gr8feen: I meant the mni and mns parts...&lt;br /&gt;
gr8feen: some of those are not ones you just want to add to...   hang on a sec and I&#039;ll look them up..&lt;br /&gt;
gr8feen: what&#039;s semmsl set to?&lt;br /&gt;
SDBoody: kern.ipc.semmsl: 1024&lt;br /&gt;
 &lt;br /&gt;
SDBoody: kern.ipc.msgseg: 2048 &lt;br /&gt;
 kern.ipc.msgssz: 8 &lt;br /&gt;
 kern.ipc.msgtql: 40 &lt;br /&gt;
 kern.ipc.msgmnb: 2048 &lt;br /&gt;
 kern.ipc.msgmni: 40 &lt;br /&gt;
 kern.ipc.msgmax: 16384 &lt;br /&gt;
 kern.ipc.semaem: 16384 &lt;br /&gt;
 kern.ipc.semvmx: 65534 &lt;br /&gt;
 kern.ipc.semusz: 152 &lt;br /&gt;
 kern.ipc.semume: 10 &lt;br /&gt;
 kern.ipc.semopm: 100 &lt;br /&gt;
 kern.ipc.semmsl: 1024 &lt;br /&gt;
 kern.ipc.semmnu: 512 &lt;br /&gt;
 kern.ipc.semmns: 1024 &lt;br /&gt;
 kern.ipc.semmni: 1024 &lt;br /&gt;
 kern.ipc.semmap: 768 &lt;br /&gt;
 kern.ipc.shm_allow_removed: 0 &lt;br /&gt;
 kern.ipc.shm_use_phys: 1 &lt;br /&gt;
 kern.ipc.shmall: 262144 &lt;br /&gt;
 kern.ipc.shmseg: 256 &lt;br /&gt;
 kern.ipc.shmmni: 784 &lt;br /&gt;
 kern.ipc.shmmin: 1 &lt;br /&gt;
 kern.ipc.shmmax: 536870912 &lt;br /&gt;
 kern.ipc.maxsockets: 25600 &lt;br /&gt;
 &lt;br /&gt;
gr8feen: ok...msl is max per id, mni is max ids, mns is max number of semaphores... so you probably want something like mns = mni * msl&lt;br /&gt;
gr8feen: which one did you run out of?&lt;br /&gt;
SDBoody: not sure how to tell- ipcs shows the sems in use add up to 1024&lt;br /&gt;
SDBoody: there are 59 entries&lt;br /&gt;
gr8feen: I&#039;m assuming you tried to start postgres and it failed?&lt;br /&gt;
SDBoody: yes&lt;br /&gt;
gr8feen: it should have logged why, somewhere..&lt;br /&gt;
gr8feen: if I recall, it&#039;ll tell you which one it ran out of&lt;br /&gt;
SDBoody: &amp;gt; DETAIL:  Failed system call was semget(1, 17, 03600).&lt;br /&gt;
 &lt;br /&gt;
gr8feen: so it wanted an id with 17 semaphores...I&#039;d start by making mns = 17*mni and leave mni and mnl set to what they are now and see what it does&lt;br /&gt;
SDBoody: i think mni is plenty high&lt;br /&gt;
SDBoody: ok, more reasonable 17408&lt;br /&gt;
gr8feen: yeah...just change that one and see how it goes..&lt;br /&gt;
SDBoody: and leave mni alone at 1024?&lt;br /&gt;
gr8feen: yeah...mni id the max number of ids...but if it&#039;s trying to get something like 17 per id, your going to hit mns before you hit anything else&lt;br /&gt;
SDBoody: right, but doesn&#039;t hurt to have it that high (assuming)&lt;br /&gt;
&lt;br /&gt;
gr8feen: not really...  I think those get allocated out of ram that you cant page out, but it&#039;s still such a small amount that it really doesn&#039;t matter&lt;br /&gt;
SDBoody: looks like that worked, thx!&lt;br /&gt;
gr8feen: cool&lt;br /&gt;
&lt;br /&gt;
see semaphores: ipcs -a -s&amp;lt;/pre&amp;gt;&lt;br /&gt;
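The sizing rule from the conversation can be sketched as plain arithmetic: postgres asked for an id with 17 semaphores (the semget(1, 17, ...) call), so semmns was raised to cover 17 per id while semmni stayed at 1024:&lt;br /&gt;

```shell
# semmns sizing from the failed semget: 17 semaphores per id,
# with semmni (max ids) left at 1024
semmni=1024
sems_per_id=17
echo $((semmni * sems_per_id))   # -> 17408, the value used above
```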
&lt;br /&gt;
= FreeBSD VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
These files are located in /usr/local/jail/rc.d and /usr/local/jail/bin&lt;br /&gt;
&lt;br /&gt;
== jailmake ==&lt;br /&gt;
&lt;br /&gt;
Applies to 7.x+ &lt;br /&gt;
On older systems syntax differs, run jailmake once to see.&lt;br /&gt;
&lt;br /&gt;
Note: this procedure differs on mx2 which is 7.x but still uses gvinum&lt;br /&gt;
&lt;br /&gt;
#	run js to figure out which md’s are in use, which disk has enough space, IP to put it on&lt;br /&gt;
#	use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
#	copy over dir, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;Usage: jailmake IP[,IP] CID disk[1|2|3] md# hostname shorthost ipfw# email [size in GB]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex: &lt;br /&gt;
&lt;br /&gt;
 jail2# jailmake 69.55.234.66 col01334 3 97 vps.bsd.it vps 1334 fb@bsd.it&lt;br /&gt;
&lt;br /&gt;
== jailps ==&lt;br /&gt;
 jailps [hostname]&lt;br /&gt;
DEPRECATED FOR jps: displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname of the jail you wish to query. If you don’t &lt;br /&gt;
supply an argument, all processes on the machine are listed and grouped by jail. &lt;br /&gt;
&lt;br /&gt;
== jps ==&lt;br /&gt;
 jps [hostname]&lt;br /&gt;
displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname or ID of the jail you wish to query. &lt;br /&gt;
&lt;br /&gt;
== jailkill ==&lt;br /&gt;
 jailkill &amp;lt;hostname&amp;gt;&lt;br /&gt;
stops all process running in a jail.&lt;br /&gt;
&lt;br /&gt;
You can also run:&lt;br /&gt;
 jailkill &amp;lt;JID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== problems ===&lt;br /&gt;
Occasionally you will hit an issue where a jail will not kill off:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailkill www.domain.com&lt;br /&gt;
www.domain.com .. killed: none&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
Because no processes are running under that hostname.  You cannot use jailps.pl either:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailps www.domain.com&lt;br /&gt;
www.domain.com doesn’t exist on this server&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reasons for this are usually:&lt;br /&gt;
* the jail is no longer running&lt;br /&gt;
&lt;br /&gt;
* the jail&#039;s hostname has changed&lt;br /&gt;
In this case, &lt;br /&gt;
&lt;br /&gt;
&amp;gt;=6.x: run a &amp;lt;tt&amp;gt;jls|grep &amp;lt;jail&#039;s IP&amp;gt;&amp;lt;/tt&amp;gt; to find the correct hostname, then update the quad file, then kill the jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;6.x: the first step is to cat their /etc/rc.conf file to see if you can tell what they set the new hostname to.  This very often works.  For example:&lt;br /&gt;
&lt;br /&gt;
 cat /mnt/data2/198.78.65.136-col00261-DIR/etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
But maybe they set the hostname with the hostname command, and the original hostname is still in /etc/rc.conf.&lt;br /&gt;
&lt;br /&gt;
The welcome email clearly states that they should tell us if they change their hostname, so there is no problem in just emailing them and asking them what they set the new hostname to.&lt;br /&gt;
&lt;br /&gt;
Once you know the new hostname OR if a customer simply emails to inform you that they have set the hostname to something different, you need to edit the quad and safe files that their system is in to input the new hostname.&lt;br /&gt;
&lt;br /&gt;
However, if push comes to shove and you cannot find out the hostname from them or from their system, then you need to start doing some detective work.&lt;br /&gt;
&lt;br /&gt;
The easiest thing to do is run jailps looking for a hostname similar to their original hostname. Or you could get into the /bin/sh shell by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then looking at every hostname of every process:&lt;br /&gt;
&lt;br /&gt;
 for f in `ls /proc` ; do cat /proc/$f/status ; done&lt;br /&gt;
&lt;br /&gt;
and scanning for a hostname that is either similar to their original hostname, or that you don&#039;t see in any of the quad safe files.&lt;br /&gt;
&lt;br /&gt;
This is very brute force though, and it is possible that catting every file in /proc is dangerous - I don&#039;t recommend it.  A better thing would be to identify any processes that you know belong to this system – perhaps the reason you are trying to find this system is because they are running something bad - and just catting the status from only that PID.&lt;br /&gt;
&lt;br /&gt;
Somewhere there&#039;s a jail where there may be 2 systems named www. Look at each /etc/rc.conf and make sure they&#039;re both really www. If they are, jailkill www, then jailps www to make sure it&#039;s not running. Then immediately restart the other one, using the fqdn (as found from a reverse nslookup)&lt;br /&gt;
&lt;br /&gt;
* on &amp;gt;=6.x the hostname may not yet be hashed:&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9 /# jls&lt;br /&gt;
 JID Hostname                    Path                                  IP Address(es)&lt;br /&gt;
   1 bitnet.dgate.org            /mnt/data1/69.55.232.50-col02094-DIR  69.55.232.50&lt;br /&gt;
   2 ns3.hctc.net                /mnt/data1/69.55.234.52-col01925-DIR  69.55.234.52&lt;br /&gt;
   3 bsd1                        /mnt/data1/69.55.232.44-col00155-DIR  69.55.232.44&lt;br /&gt;
   4 let2.bbag.org               /mnt/data1/69.55.230.92-col00202-DIR  69.55.230.92&lt;br /&gt;
   5 post.org                    /mnt/data2/69.55.232.51-col02095-DIR  69.55.232.51 ...&lt;br /&gt;
   6 ns2                         /mnt/data1/69.55.232.47-col01506-DIR  69.55.232.47 ...&lt;br /&gt;
   7 arlen.server.net            /mnt/data1/69.55.232.52-col01171-DIR  69.55.232.52&lt;br /&gt;
   8 deskfood.com                /mnt/data1/69.55.232.71-col00419-DIR  69.55.232.71&lt;br /&gt;
   9 mirage.confluentforms.com   /mnt/data1/69.55.232.54-col02105-DIR  69.55.232.54 ...&lt;br /&gt;
  10 beachmember.com             /mnt/data1/69.55.232.59-col02107-DIR  69.55.232.59&lt;br /&gt;
  11 www.agottem.com             /mnt/data1/69.55.232.60-col02109-DIR  69.55.232.60&lt;br /&gt;
  12 sdhobbit.myglance.org       /mnt/data1/69.55.236.82-col01708-DIR  69.55.236.82&lt;br /&gt;
  13 ns1.jnielsen.net            /mnt/data1/69.55.234.48-col00204-DIR  69.55.234.48 ...&lt;br /&gt;
  14 ymt.rollingegg.net          /mnt/data2/69.55.236.71-col01678-DIR  69.55.236.71&lt;br /&gt;
  15 verse.unixlore.net          /mnt/data1/69.55.232.58-col02131-DIR  69.55.232.58&lt;br /&gt;
  16 smcc-mail.org               /mnt/data2/69.55.232.68-col02144-DIR  69.55.232.68&lt;br /&gt;
  17 kasoutsuki.w4jdh.net        /mnt/data2/69.55.232.46-col02147-DIR  69.55.232.46&lt;br /&gt;
  18 dili.thium.net              /mnt/data2/69.55.232.80-col01901-DIR  69.55.232.80&lt;br /&gt;
  20 www.tekmarsis.com           /mnt/data2/69.55.232.66-col02155-DIR  69.55.232.66&lt;br /&gt;
  21 vps.yoxel.net               /mnt/data2/69.55.236.67-col01673-DIR  69.55.236.67&lt;br /&gt;
  22 smitty.twitalertz.com       /mnt/data2/69.55.232.84-col02153-DIR  69.55.232.84&lt;br /&gt;
  23 deliver4.klatha.com         /mnt/data2/69.55.232.67-col02160-DIR  69.55.232.67&lt;br /&gt;
  24 nideffer.com                /mnt/data2/69.55.232.65-col00412-DIR  69.55.232.65&lt;br /&gt;
  25 usa.hanyuan.com             /mnt/data2/69.55.232.57-col02163-DIR  69.55.232.57&lt;br /&gt;
  26 daifuku.ppbh.com            /mnt/data2/69.55.236.91-col01720-DIR  69.55.236.91&lt;br /&gt;
  27 collins.greencape.net       /mnt/data2/69.55.232.83-col01294-DIR  69.55.232.83&lt;br /&gt;
  28 ragebox.com                 /mnt/data2/69.55.230.104-col01278-DIR 69.55.230.104&lt;br /&gt;
  29 outside.mt.net              /mnt/data2/69.55.232.72-col02166-DIR  69.55.232.72&lt;br /&gt;
  30 vps.payneful.ca             /mnt/data2/69.55.234.98-col01999-DIR  69.55.234.98&lt;br /&gt;
  31 higgins                     /mnt/data2/69.55.232.87-col02165-DIR  69.55.232.87 ...&lt;br /&gt;
  32 ozymandius                  /mnt/data2/69.55.228.96-col01233-DIR  69.55.228.96&lt;br /&gt;
  33 trusted.realtors.org        /mnt/data2/69.55.238.72-col02170-DIR  69.55.238.72&lt;br /&gt;
  34 jc1.flanderous.com          /mnt/data2/69.55.239.22-col01504-DIR  69.55.239.22&lt;br /&gt;
  36 guppylog.com                /mnt/data2/69.55.238.73-col00036-DIR  69.55.238.73&lt;br /&gt;
  40 haliohost.com               /mnt/data2/69.55.234.41-col01916-DIR  69.55.234.41 ...&lt;br /&gt;
  41 satyr.jorge.cc              /mnt/data1/69.55.232.70-col01963-DIR  69.55.232.70&lt;br /&gt;
jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
ERROR: jail_: jail &amp;quot;satyr,jorge,cc&amp;quot; not found&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note how it&#039;s saying &amp;lt;tt&amp;gt;satyr,jorge,cc&amp;lt;/tt&amp;gt; is not found, and not &amp;lt;tt&amp;gt;satyr.jorge.cc&amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The jail subsystem tracks jails using comma-delimited hostnames. That mapping is rebuilt every few hours by cron:&lt;br /&gt;
&lt;br /&gt;
 jail9 /# crontab -l&lt;br /&gt;
 0 0,6,12,18 * * * /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
So if we run this manually:&lt;br /&gt;
 jail9 /# /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
Then kill the jail:&lt;br /&gt;
 jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
 successfully killed: satyr,jorge,cc&lt;br /&gt;
&lt;br /&gt;
It worked.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you ever see this when trying to kill a jail:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# jailkill e-scribe.com&lt;br /&gt;
killing JID: 6 hostname: e-scribe.com&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailkill|jailkill]]&amp;lt;/tt&amp;gt; probably got lost trying to kill off the jail. Just ctrl-c the jailkill process, then run a jailps on the hostname, and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; any process which is still running. Keep running jailps and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; till all processes are gone.&lt;br /&gt;
&lt;br /&gt;
== jailpsall ==&lt;br /&gt;
 jailpsall&lt;br /&gt;
will run a jailps on all jails configured in the quad files (this is different from&lt;br /&gt;
jailps with no arguments as it won’t help you find a “hidden” system)&lt;br /&gt;
&lt;br /&gt;
== jailpsw ==&lt;br /&gt;
 jailpsw&lt;br /&gt;
will run a jailps with an extra -w to provide wider output&lt;br /&gt;
&lt;br /&gt;
== jt (&amp;gt;=7.x) ==&lt;br /&gt;
 jt&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;gt;=7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
a wrapper for top displaying processes on the server and which jail owns them. Constantly updates, like top. &lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;lt;7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== stopjail ==&lt;br /&gt;
 stopjail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will jailkill, umount and vnconfig -u a jail. If passed an optional 2nd&lt;br /&gt;
argument, it will not exit before umounting and un-vnconfig&#039;ing in the event&lt;br /&gt;
jailkill returns no processes killed. This is useful if you just want to umount&lt;br /&gt;
and vnconfig -u a jail you&#039;ve already killed. It is intelligent in that it won&#039;t &lt;br /&gt;
try to umount or vnconfig -u if it&#039;s not necessary.&lt;br /&gt;
&lt;br /&gt;
== startjail ==&lt;br /&gt;
 startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
this will start vnconfig, mount (including linprocfs and null-mounts), and start a jail.&lt;br /&gt;
Essentially, it reads the jail’s relevant block from the right quad file and executes it.&lt;br /&gt;
It is intelligent in that it won’t try to mount or vnconfig if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== jpid ==&lt;br /&gt;
 jpid &amp;lt;pid&amp;gt;&lt;br /&gt;
displays information about a process – including which jail owns it.&lt;br /&gt;
It’s the equivalent of running cat /proc/&amp;lt;pid&amp;gt;/status&lt;br /&gt;
&lt;br /&gt;
== canceljail ==&lt;br /&gt;
 canceljail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will stop a jail (the equivalent of stopjail), check for backups (offer to remove them &lt;br /&gt;
from the backup server and the backup.config), rename the vnfile, remove the dir, and &lt;br /&gt;
edit quad/safe. If passed an optional 2nd argument, it will not exit upon failing to kill&lt;br /&gt;
any processes owned by the jail. This is useful if you just want to cancel a jail which &lt;br /&gt;
is already stopped.&lt;br /&gt;
&lt;br /&gt;
== jls ==&lt;br /&gt;
 jls [-v]&lt;br /&gt;
Lists all jails running:&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF IP Address      Hostname                     Path&lt;br /&gt;
 101  135 69.55.224.148   mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#REF is the number of references or procs(?) running&lt;br /&gt;
&lt;br /&gt;
Running with -v will give you all IPs assigned to each jail (7.2 up)&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF Hostname                     Path                                  IP Address(es)&lt;br /&gt;
 101  139 mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR 69.55.224.148 69.55.234.85&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== startalljails ==&lt;br /&gt;
 startalljails&lt;br /&gt;
7.2+ only. This will parse through quad1 and start all jails. It utilizes lockfiles so it won&#039;t try to start a jail more than once; therefore multiple instances can be run in parallel without fear of starting a jail twice. If a jail startup gets stuck, you can ^C without fear of killing the script. IMPORTANT: before running startalljails you should make sure you ran preboot once, as it clears out all the lockfiles and enables startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== aaccheck.sh ==&lt;br /&gt;
 aaccheck.sh&lt;br /&gt;
displays the output of container list and task list from aaccli&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update jail scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== buildsafe ==&lt;br /&gt;
 buildsafe&lt;br /&gt;
creates safe files based on quads (automatically removing the fsck’s). This will destructively overwrite safe files&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be set up as a cronjob to watch processes on a jail when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== checkprio.pl ==&lt;br /&gt;
 checkprio.pl&lt;br /&gt;
will look for any process (other than the current shell’s csh, sh, sshd procs) with a non-normal priority and normalize it&lt;br /&gt;
&lt;br /&gt;
== diskusagemon == &lt;br /&gt;
 diskusagemon &amp;lt;mount point&amp;gt; &amp;lt;1k blocks&amp;gt;&lt;br /&gt;
watches a mount point’s disk use; when usage reaches the level specified in the 2nd argument,&lt;br /&gt;
it exits. This is useful when doing a restore and you want to be paged as it’s nearing completion.&lt;br /&gt;
Best used as: &amp;lt;tt&amp;gt;diskusagemon /asd/asd 1234; pagexxx&amp;lt;/tt&amp;gt;&lt;br /&gt;
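&lt;br /&gt;
The watch loop itself is simple; here is a minimal sketch in plain sh (a hypothetical reimplementation, not the actual script; the watch_usage name and 60-second poll interval are assumptions):&lt;br /&gt;

```shell
# Hypothetical sketch of diskusagemon's core loop: poll df until the
# mount point's used 1K blocks reach the given threshold, then exit.
watch_usage() {
    mount=$1 limit_kb=$2
    while :; do
        used=$(df -Pk "$mount" | awk 'NR==2 {print $3}')
        [ "$used" -ge "$limit_kb" ] && break
        sleep 60
    done
    echo "usage on $mount reached ${limit_kb} 1k blocks"
}
```

Chained as in the example above (watch_usage /asd/asd 1234; pagexxx), the pager only fires once the threshold is crossed.&lt;br /&gt;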
&lt;br /&gt;
== dumprestore ==&lt;br /&gt;
 dumprestore &amp;lt;dumpfile&amp;gt;&lt;br /&gt;
this is a perl expect script which automatically enters ‘1’ and ‘y’. It seems to cause restore to fail&lt;br /&gt;
to set owner permissions on large restores.&lt;br /&gt;
&lt;br /&gt;
== g ==&lt;br /&gt;
 g &amp;lt;search&amp;gt;&lt;br /&gt;
greps the quad/safe files for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gather.pl ==&lt;br /&gt;
 gather.pl&lt;br /&gt;
gathers up data about jails configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== ipfwbackup ==&lt;br /&gt;
 ipfwbackup&lt;br /&gt;
writes ipfw traffic count data to a logfile&lt;br /&gt;
&lt;br /&gt;
== ipfwreset ==&lt;br /&gt;
 ipfwreset&lt;br /&gt;
writes ipfw traffic count data to a logfile and resets counters to 0&lt;br /&gt;
&lt;br /&gt;
== js ==&lt;br /&gt;
 js&lt;br /&gt;
output varies by OS version, but generally provides information about the base jail:&lt;br /&gt;
- which vn’s are in use&lt;br /&gt;
- disk usage&lt;br /&gt;
- info about the contents of quads&lt;br /&gt;
- the # of inodes represented by the jails contained in the group (133.2 in the example below), and how many jails per data mount, as well as subtotals&lt;br /&gt;
- ips bound to the base machine but not in use by a jail&lt;br /&gt;
- free gvinum volumes, or unused vn’s or used md’s&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/local/jail/rc.d/quad1:&lt;br /&gt;
        /mnt/data1 133.2 (1)&lt;br /&gt;
        /mnt/data2 1040.5 (7)&lt;br /&gt;
        total 1173.7 (8)&lt;br /&gt;
/usr/local/jail/rc.d/quad2:&lt;br /&gt;
        /mnt/data1 983.4 (6)&lt;br /&gt;
        total 983.4 (6)&lt;br /&gt;
/usr/local/jail/rc.d/quad3:&lt;br /&gt;
        /mnt/data1 693.4 (4)&lt;br /&gt;
        /mnt/data2 371.6 (3)&lt;br /&gt;
        total 1065 (7)&lt;br /&gt;
/usr/local/jail/rc.d/quad4:&lt;br /&gt;
        /mnt/data1 466.6 (3)&lt;br /&gt;
        /mnt/data2 882.2 (5)&lt;br /&gt;
        total 1348.8 (8)&lt;br /&gt;
/mnt/data1: 2276.6 (14)&lt;br /&gt;
/mnt/data2: 2294.3 (15)&lt;br /&gt;
&lt;br /&gt;
Available IPs:&lt;br /&gt;
69.55.230.11 69.55.230.13 69.55.228.200&lt;br /&gt;
&lt;br /&gt;
Available volumes:&lt;br /&gt;
v78 /mnt/data2 2G&lt;br /&gt;
v79 /mnt/data2 2G&lt;br /&gt;
v80 /mnt/data2 2G&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg  - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== makevirginjail ==&lt;br /&gt;
 makevirginjail&lt;br /&gt;
Only on some systems, makes an empty jail (doesn&#039;t do restore step)&lt;br /&gt;
&lt;br /&gt;
== mb == &lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== notify.sh ==&lt;br /&gt;
 notify.sh&lt;br /&gt;
emails reboot@johncompanies.com – intended to be called at boot time to alert us to a machine which panics and reboots and isn’t caught by bb or castle.&lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatch ==&lt;br /&gt;
 orphanedbackupwatch&lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to delete them&lt;br /&gt;
&lt;br /&gt;
== postboot ==&lt;br /&gt;
 postboot&lt;br /&gt;
to be run after a machine reboot and quad/safe’s are done executing. It will:&lt;br /&gt;
* do chmod 666 on each jail’s /dev/null&lt;br /&gt;
* add ipfw counts&lt;br /&gt;
* run jailpsall (so you can see if a configured jail isn’t running)&lt;br /&gt;
&lt;br /&gt;
== preboot ==&lt;br /&gt;
 preboot&lt;br /&gt;
to be run before running quad/safe – checks for misconfigurations: &lt;br /&gt;
* a jail configured in a quad but not a safe&lt;br /&gt;
* a jail listed more than once in a quad&lt;br /&gt;
* the IP assigned to a jail isn’t configured on the machine&lt;br /&gt;
* alias numbering skips in rc.conf (resulting in the above)&lt;br /&gt;
* orphaned vnfiles that aren&#039;t mentioned in a quad/safe&lt;br /&gt;
* IP mismatches between the dir/vnfile name and the jail’s IP&lt;br /&gt;
* dirs/vnfiles in quad/safe that don’t exist &lt;br /&gt;
&lt;br /&gt;
== quadanalyze.pl ==&lt;br /&gt;
 quadanalyze.pl&lt;br /&gt;
called by js, produces the info (seen above with js explanation) about the contents of quad (inode count, # of jails, etc.)&lt;br /&gt;
&lt;br /&gt;
== rsync.backup ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== taskdone ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was executed as the subject&lt;br /&gt;
&lt;br /&gt;
== topten ==&lt;br /&gt;
 topten&lt;br /&gt;
summarizes the top 10 traffic users (called by ipfwreset)&lt;br /&gt;
&lt;br /&gt;
== trafficgather.pl ==&lt;br /&gt;
 trafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by jail to support@johncompanies.com and payments@johncompanies.com. The optional argument is the year and month (must be in the past); if not passed, it assumes last month. Relies on traffic logs created by ipfwreset and ipfwbackup.&lt;br /&gt;
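&lt;br /&gt;
Computing last month in the yy-mm form the script expects is a one-liner (this assumes GNU date, as on the linux boxes; anchoring on the 1st avoids end-of-month skew):&lt;br /&gt;

```shell
# Last month as yy-mm, suitable as the optional argument to trafficgather.pl.
# Anchoring at the 1st of the current month means "-1 month" never skips one.
last_month=$(date -d "$(date +%Y-%m-01) -1 month" +%y-%m)
echo "$last_month"
```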
&lt;br /&gt;
== trafficwatch.pl ==&lt;br /&gt;
 trafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a jail reaches the warning level (35G) and the limit (40G). We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== trafstats ==&lt;br /&gt;
 trafstats&lt;br /&gt;
writes ipfw traffic usage info by jail to a file called jc_traffic_dump in each jail’s / dir&lt;br /&gt;
&lt;br /&gt;
== truncate_jailmake ==&lt;br /&gt;
 truncate_jailmake&lt;br /&gt;
a version of jailmake which creates truncated vnfiles.&lt;br /&gt;
&lt;br /&gt;
== vb ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/bin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs (freebsd) ==&lt;br /&gt;
 vs&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/safe&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vq (freebsd) ==&lt;br /&gt;
 vq&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/quad&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dumpremote ==&lt;br /&gt;
 dumpremote &amp;lt;user@machine&amp;gt; &amp;lt;/remote/location/file-dump&amp;gt; &amp;lt;vnX&amp;gt;&lt;br /&gt;
ex: dumpremote user@10.1.4.117 /mnt/data3/remote.echoditto.com-dump 7&lt;br /&gt;
this will dump a vn filesystem to a remote machine and location&lt;br /&gt;
&lt;br /&gt;
== oversellcheck ==&lt;br /&gt;
 oversellcheck&lt;br /&gt;
displays how much a disk is oversold or undersold, taking into account truncated vn files. Only for use on 4.x systems.&lt;br /&gt;
&lt;br /&gt;
== mvbackups (freebsd) ==&lt;br /&gt;
 mvbackups &amp;lt;dir&amp;gt; (1.1.1.1-col00001-DIR) &amp;lt;target_machine&amp;gt; (jail1) &amp;lt;target_dir&amp;gt; (data1)&lt;br /&gt;
moves backups from one location to another on the backup server, offers to remove the entries from the current backup.config, and prints a simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== jailnice ==&lt;br /&gt;
 jailnice &amp;lt;hostname&amp;gt;&lt;br /&gt;
applies &amp;lt;tt&amp;gt;renice 19 [PID]&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;rtprio 31 -[PID]&amp;lt;/tt&amp;gt; to each process in the given jail&lt;br /&gt;
&lt;br /&gt;
== dumpremoterestore ==&lt;br /&gt;
 dumpremoterestore &amp;lt;device&amp;gt; &amp;lt;ip of target machine&amp;gt; &amp;lt;dir on target machine&amp;gt;&lt;br /&gt;
ex: dumpremoterestore /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&lt;br /&gt;
dumps a device and restores it to a directory on a remote machine. Requires that you enable root ssh on the &lt;br /&gt;
remote machine.&lt;br /&gt;
&lt;br /&gt;
== psj ==&lt;br /&gt;
 psj&lt;br /&gt;
shows just the procs running on the base system – a ps auxw but without jail’d procs present&lt;br /&gt;
&lt;br /&gt;
== perc5iraidchk ==&lt;br /&gt;
 perc5iraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2950 systems with Perc5/6 controllers&lt;br /&gt;
&lt;br /&gt;
== perc4eraidchk ==&lt;br /&gt;
 perc4eraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2850 systems with Perc4e/Di controllers&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS =&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vm) ==&lt;br /&gt;
&lt;br /&gt;
This applies only to new virts &amp;gt;= 4.x&lt;br /&gt;
&lt;br /&gt;
grab an ip from the ipmap (if opened from the pending cust screen it should take you to the right block). You can also run vzlist -a to see, generally, which blocks are in use. Try to find an IP that&#039;s in the same class C block as IPs already on the box.&lt;br /&gt;
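&lt;br /&gt;
One quick way to see which class C blocks are already populated on the box is to group the in-use IPs by /24 (a hedged sketch; count_blocks is a hypothetical helper and reads IPs one per line from stdin, since exact vzlist output columns vary by version):&lt;br /&gt;

```shell
# Count in-use IPs per /24 so you can pick an address in a block
# that's already on the box. Reads IPs, one per line, from stdin.
count_blocks() {
    cut -d. -f1-3 | sort | uniq -c | sort -rn
}
```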
&lt;br /&gt;
1. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
  also, ping the IP address.&lt;br /&gt;
&lt;br /&gt;
2. put CT on whichever partition has more space&lt;br /&gt;
  df -h&lt;br /&gt;
&lt;br /&gt;
3.  vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; ) &lt;br /&gt;
 vm col00009 69.55.230.238 centos.testdave.com /vz1 dsmith@johncompanies.com centos-6-x86_64 LM&lt;br /&gt;
&lt;br /&gt;
4. copy veid, dir, ip and password to pending customer screen. activate customer&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vemakexxx) ==&lt;br /&gt;
&lt;br /&gt;
This applies to older virts with old templates. This should probably not be used at all anymore.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. look thru hist for ip&lt;br /&gt;
&lt;br /&gt;
2. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
&lt;br /&gt;
3. put ve on whichever partition has more space&lt;br /&gt;
 vemakerh9 &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; &amp;lt;hostname&amp;gt; &amp;lt;mount&amp;gt; &amp;lt;email&amp;gt; [gb disk]; &amp;lt;256|384|512&amp;gt; &amp;lt;veid&amp;gt;&lt;br /&gt;
 vemakerh9 866 69.55.226.109 ngentu.com /vz1 ayo@ngantu.com,asd@asd.com 5; 256 866&lt;br /&gt;
&lt;br /&gt;
4. copy (veid), dir, and ip to pending customer screen (pass set to p455agfa)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: We use VEID (Virtual Environment ID) and CTID (Container ID) interchangeably. Similarly, VE and CT. They mean the same thing.&lt;br /&gt;
VZPP = VirtuoZzo Power Panel (the control panel for each CT)&lt;br /&gt;
&lt;br /&gt;
All linux systems exist in /vz, /vz1 or /vz2 - since each linux machine holds roughly 60-90 customers, there will be roughly 30-45 in each partition.&lt;br /&gt;
&lt;br /&gt;
The actual filesystem of the system in question is in:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/private/(VEID)&lt;br /&gt;
&lt;br /&gt;
Where VEID is the identifier for that system - an all-numeric string larger than 100.&lt;br /&gt;
&lt;br /&gt;
The actual mounted and running systems are in the corresponding:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/root/(VEID)&lt;br /&gt;
&lt;br /&gt;
But we rarely interact with any system from this mount point.&lt;br /&gt;
&lt;br /&gt;
You should never need to touch the root portion of their system – however you can traverse their filesystem by going to &amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/root&amp;lt;/tt&amp;gt; (&amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/fs/root&amp;lt;/tt&amp;gt; on 4.x systems). The root of their filesystem is in that directory, and their entire system is underneath it.&lt;br /&gt;
&lt;br /&gt;
Every VE has a startup script in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts&amp;lt;/tt&amp;gt;  (which is symlinked as &amp;lt;tt&amp;gt;/vzconf&amp;lt;/tt&amp;gt; on all systems) - the VE startup script is simply named &amp;lt;tt&amp;gt;(VEID).conf&amp;lt;/tt&amp;gt; - it contains all the system parameters for that VE:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Configuration file generated by vzsplit for 60 VE&lt;br /&gt;
# on HN with total amount of physical mem 2011 Mb&lt;br /&gt;
&lt;br /&gt;
VERSION=&amp;quot;2&amp;quot;&lt;br /&gt;
CLASSID=&amp;quot;2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ONBOOT=&amp;quot;yes&amp;quot;&lt;br /&gt;
&lt;br /&gt;
KMEMSIZE=&amp;quot;8100000:8200000&amp;quot;&lt;br /&gt;
LOCKEDPAGES=&amp;quot;322:322&amp;quot;&lt;br /&gt;
PRIVVMPAGES=&amp;quot;610000:615000&amp;quot;&lt;br /&gt;
SHMPAGES=&amp;quot;33000:34500&amp;quot;&lt;br /&gt;
NUMPROC=&amp;quot;410:415&amp;quot;&lt;br /&gt;
PHYSPAGES=&amp;quot;0:2147483647&amp;quot;&lt;br /&gt;
VMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
OOMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
NUMTCPSOCK=&amp;quot;1210:1215&amp;quot;&lt;br /&gt;
NUMFLOCK=&amp;quot;107:117&amp;quot;&lt;br /&gt;
NUMPTY=&amp;quot;19:19&amp;quot;&lt;br /&gt;
NUMSIGINFO=&amp;quot;274:274&amp;quot;&lt;br /&gt;
TCPSNDBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
TCPRCVBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
OTHERSOCKBUF=&amp;quot;900000:950000&amp;quot;&lt;br /&gt;
DGRAMRCVBUF=&amp;quot;200000:200000&amp;quot;&lt;br /&gt;
NUMOTHERSOCK=&amp;quot;650:660&amp;quot;&lt;br /&gt;
DCACHE=&amp;quot;786432:818029&amp;quot;&lt;br /&gt;
NUMFILE=&amp;quot;7500:7600&amp;quot;&lt;br /&gt;
AVNUMPROC=&amp;quot;51:51&amp;quot;&lt;br /&gt;
IPTENTRIES=&amp;quot;155:155&amp;quot;&lt;br /&gt;
DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
DISKINODES=&amp;quot;400000:420000&amp;quot;&lt;br /&gt;
CPUUNITS=&amp;quot;1412&amp;quot;&lt;br /&gt;
QUOTAUGIDLIMIT=&amp;quot;2000&amp;quot;&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/636&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/636&amp;quot;&lt;br /&gt;
NAMESERVER=&amp;quot;69.55.225.225 69.55.230.3&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;vzredhat-7.3/20030305&amp;quot;&lt;br /&gt;
VE_TYPE=&amp;quot;regular&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.225.229&amp;quot;&lt;br /&gt;
HOSTNAME=&amp;quot;textengine.net&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the hostname is set here, the disk space is set here, the number of inodes, the number of files that can be open, the number of tcp sockets, etc. - all are set here.&lt;br /&gt;
&lt;br /&gt;
In fact, everything that can be set on this customer system is set in this conf file.&lt;br /&gt;
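&lt;br /&gt;
Since the conf file is plain KEY=&amp;quot;value&amp;quot; shell syntax, pulling one setting out is easy (a sketch; get_param is a hypothetical helper, not part of the vzctl toolset):&lt;br /&gt;

```shell
# Read one parameter from a VE conf file (files are KEY="value" lines).
# get_param is a hypothetical helper, not part of the vzctl toolset.
get_param() {
    sed -n "s/^$2=\"\(.*\)\"/\1/p" "$1"
}
```

For example, get_param /vzconf/636.conf DISKSPACE would print 4194304:4613734 for the conf file shown above.&lt;br /&gt;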
&lt;br /&gt;
&lt;br /&gt;
All interaction with the customer system is done with the VEID.  You start the system by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl start 999&lt;br /&gt;
&lt;br /&gt;
You stop it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl stop 999&lt;br /&gt;
&lt;br /&gt;
You execute commands in it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl exec 999 df -k&lt;br /&gt;
&lt;br /&gt;
You enter into it, via a root-shell backdoor with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and you set parameters for the system, while it is still running, with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; is the most commonly used command - we have aliased &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; since we use it so often. We’ll continue to use &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; in our examples, but feel free to use just &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say the user wants more diskspace.  You can cat their conf file and see:&lt;br /&gt;
&lt;br /&gt;
 DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So right now they have 4 gigs of space (DISKSPACE is counted in 1K blocks).  You can then change it to 6 with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
IMPORTANT: all issuances of the vzctl set command need to end with &amp;lt;tt&amp;gt;--save&amp;lt;/tt&amp;gt; - if they don&#039;t, the setting will be set, but it will not be saved to the conf file, and they will not have those settings next time they boot.&lt;br /&gt;
&lt;br /&gt;
All of the tunables in the conf file can be set with the vzctl set command.  Note that in the conf file, and on the vzctl set command line, we always issue two numbers separated by a colon - that is because we are setting the soft and hard limits (the barrier and the limit, in virtuozzo terms).  Always set the hard limit slightly above the soft limit, as you can see done in the conf file for all those settings.&lt;br /&gt;
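&lt;br /&gt;
Both halves of a soft:hard pair split cleanly with shell parameter expansion, and since DISKSPACE is counted in 1K blocks the gigabyte math is simple arithmetic (variable names here are just illustrative):&lt;br /&gt;

```shell
# Split a soft:hard pair and convert the soft value to gigabytes
# (DISKSPACE is in 1K blocks, so 1 GB = 1024 * 1024 blocks).
pair="4194304:4613734"
soft=${pair%%:*}                     # 4194304
hard=${pair##*:}                     # 4613734
gb_soft=$((soft / 1024 / 1024))      # 4194304 1K blocks = 4 GB
```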
&lt;br /&gt;
There are also things you can set with `&amp;lt;tt&amp;gt;vzctl set&amp;lt;/tt&amp;gt;` that are not in the conf file as settings, per se.  For instance, you can add IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --save&lt;br /&gt;
&lt;br /&gt;
or multiple IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --ipadd 10.10.20.30 --save&lt;br /&gt;
&lt;br /&gt;
or change the hostname:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --hostname www.example.com --save&lt;br /&gt;
&lt;br /&gt;
You can even set the nameservers:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --nameserver 198.78.66.4 --nameserver 198.78.70.180 --save&lt;br /&gt;
&lt;br /&gt;
Although you probably will never do that.&lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --disabled yes --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (&amp;lt;=3.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --onboot=no --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from using his control panel:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --offline_management=no --save &lt;br /&gt;
&lt;br /&gt;
You can suspend a VPS, so it can be resumed in the same state it was in when it was stopped (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl suspend 999&lt;br /&gt;
&lt;br /&gt;
and to resume it:&lt;br /&gt;
&lt;br /&gt;
 vzctl resume 999&lt;br /&gt;
&lt;br /&gt;
To see which VE owns a process:&lt;br /&gt;
 vzpid &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To mount an unmounted ve:&lt;br /&gt;
 vzctl mount 827&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see network stats for CT&#039;s:&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One thing that sometimes comes up on older systems that we created with smaller defaults is that the system runs out of inodes.  The user will email and say they cannot create any more files or grow any files larger, but they will also say that they are not out of diskspace ... they are running:&lt;br /&gt;
&lt;br /&gt;
 df -k&lt;br /&gt;
&lt;br /&gt;
and seeing how much space is free - and they are not out of space.  They are most likely out of inodes - which they would see by running:&lt;br /&gt;
&lt;br /&gt;
 df -i&lt;br /&gt;
&lt;br /&gt;
So, the first thing you should do is enter their system with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and run:  &amp;lt;tt&amp;gt;df -i&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to confirm your theory.  Then exit their system.  Then simply cat their conf file and see what their inodes are set to (probably 200000:200000, since that was the old default on the older systems) and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskinodes 400000:400000 --save&lt;br /&gt;
&lt;br /&gt;
If they are not out of inodes, then a good possibility is that they have maxed out their numfile configuration variable, which controls how many files they can have in their system.  The current default is 7500 (which nobody has ever hit), but the old default was as low as 2000, so you would run something like:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --numfile 7500:7500 --save&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You cannot start or stop a VE if your pwd is its private (/vz/private/999) or root (/vz/root/999) directories, or anywhere below them.&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (linux) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all ve’s back up as soon as possible. Note the time; you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages output. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log – these will also likely need to be sent to virtuozzo for interpretation. If the messages are spewing too fast, hit ^O + H to start a screen log dump which you can observe after the machine is rebooted. Additionally, if the machine is responsive, you can get a trace to send to virtuozzo by hooking up a kvm and entering these 3 sequences:&lt;br /&gt;
&amp;lt;pre&amp;gt;alt+print screen+m&lt;br /&gt;
alt+print screen+p&lt;br /&gt;
alt+print screen+t&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If there are no messages, the machine may just be really busy- wait a bit (5-10min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. if you can’t ssh into the DRAC card and issue racadm serveraction hardreset), then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console (&amp;lt;tt&amp;gt;tip virtxx&amp;lt;/tt&amp;gt;) immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts- look out for the RAID card output- does everything look healthy? The output may be scrambled, look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple times during the boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly then you won’t get any console output, 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
=== Start containers/VE&#039;s/VPSs ===&lt;br /&gt;
When the machine begins to start VE’s, it’s safe to leave the console and login via ssh. All virts should be set to auto-start all the VEs after a crash. Further, most (newer) virts are set to “fastboot” their VE’s (to find out, do:&lt;br /&gt;
 grep -i fast /etc/sysconfig/vz &lt;br /&gt;
and look for &amp;lt;tt&amp;gt;VZFASTBOOT=yes&amp;lt;/tt&amp;gt;). If this was set prior to the machine’s crash (setting it after the machine boots will have no effect until the vz service is restarted), it will start each VE as fast as possible, in serial, then go through each VE (serially), shutting it down, running a vzquota (disk usage) check, and bringing it back up. The benefit is that all VE’s are brought up quickly (within 15min or so depending on the #); the downside is that a customer watching closely will notice 2 outages – 1st the machine crash, 2nd their quota check (which will be a much shorter downtime- on the order of a few minutes). &lt;br /&gt;
&lt;br /&gt;
Where “fastboot” is not set to yes (i.e. on quar1), vz will start them consecutively, checking the quotas one at a time, and the 60th VE may not start until an hour or two later - this is not acceptable.&lt;br /&gt;
&lt;br /&gt;
The good news is, if you run vzctl start for a VE that is already started, you will simply get an error: &amp;lt;tt&amp;gt;VE is already started&amp;lt;/tt&amp;gt;.  Further, if you attempt to vzctl start a VE that is in the process of being started, you will simply get an error: &amp;lt;tt&amp;gt;unable to lock VE&amp;lt;/tt&amp;gt;.  So, there is no danger in simply running scripts to start smaller sets of VEs.  If the system is not autostarting, then there is no issue; even if it is, when a conflict occurs, one process (yours or the autostart) will lose and just move on to the next one.&lt;br /&gt;
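&lt;br /&gt;
That makes a bulk-start loop safe to run as-is (a minimal sketch, not startvirt.pl itself; start_ves is a hypothetical name, and it simply tolerates the two benign errors):&lt;br /&gt;

```shell
# Start a list of VEs; "already started" / "unable to lock" failures
# are harmless, so just note them and move on to the next VEID.
start_ves() {
    for veid in "$@"; do
        vzctl start "$veid" || echo "skipped $veid (already started or locked)"
    done
}
```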
&lt;br /&gt;
A script has been written to assist with ve starts: [[#startvirt.pl|startvirt.pl]], which will start 6 ve’s at once until there are none left.  If startvirt.pl is used on a system where “fastboot” was on, it will circumvent the fastboot for ve’s started by startvirt.pl – they will go through the complete quota check before starting; therefore this is not advisable when a system has crashed. When a system is booted cleanly and there&#039;s no need for vzquota checks, startvirt.pl is safe and advisable to run.&lt;br /&gt;
&lt;br /&gt;
=== Make sure all containers are running ===&lt;br /&gt;
You can quickly get a feel for how many ve’s are started by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt4 log]# vs&lt;br /&gt;
VEID 16066 exist mounted running&lt;br /&gt;
VEID 16067 exist mounted running&lt;br /&gt;
VEID 4102 exist mounted running&lt;br /&gt;
VEID 4112 exist mounted running&lt;br /&gt;
VEID 4116 exist mounted running&lt;br /&gt;
VEID 4122 exist mounted running&lt;br /&gt;
VEID 4123 exist mounted running&lt;br /&gt;
VEID 4124 exist mounted running&lt;br /&gt;
VEID 4132 exist mounted running&lt;br /&gt;
VEID 4148 exist mounted running&lt;br /&gt;
VEID 4151 exist mounted running&lt;br /&gt;
VEID 4155 exist mounted running&lt;br /&gt;
VEID 42 exist mounted running&lt;br /&gt;
VEID 432 exist mounted running&lt;br /&gt;
VEID 434 exist mounted running&lt;br /&gt;
VEID 442 exist mounted running&lt;br /&gt;
VEID 450 exist mounted running&lt;br /&gt;
VEID 452 exist mounted running&lt;br /&gt;
VEID 453 exist mounted running&lt;br /&gt;
VEID 454 exist mounted running&lt;br /&gt;
VEID 462 exist mounted running&lt;br /&gt;
VEID 463 exist mounted running&lt;br /&gt;
VEID 464 exist mounted running&lt;br /&gt;
VEID 465 exist mounted running&lt;br /&gt;
VEID 477 exist mounted running&lt;br /&gt;
VEID 484 exist mounted running&lt;br /&gt;
VEID 486 exist mounted running&lt;br /&gt;
VEID 490 exist mounted running&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So to see how many ve’s have started:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep running | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And to see how many haven’t:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep down | wc -l&lt;br /&gt;
     0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And how many we should have running:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another tool you can use to see which ve’s have started, among other things, is [[#vzstat|vzstat]]. It will give you CPU, memory, and other stats on each ve and the overall system. It’s a good thing to watch as ve’s are starting (note the VENum parameter; it tells you how many have started):&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;4:37pm, up 3 days,  5:31,  1 user, load average: 1.57, 1.68, 1.79&lt;br /&gt;
VENum 40, procs 1705: running 2, sleeping 1694, unint 0, zombie 9, stopped 0&lt;br /&gt;
CPU [ OK ]: VEs  57%, VE0   0%, user   8%, sys   7%, idle  85%, lat(ms) 412/2&lt;br /&gt;
Mem [ OK ]: total 6057MB, free 9MB/54MB (low/high), lat(ms) 0/0&lt;br /&gt;
Swap [ OK ]: tot 6142MB, free 4953MB, in 0.000MB/s, out 0.000MB/s&lt;br /&gt;
Net [ OK ]: tot: in  0.043MB/s  402pkt/s, out  0.382MB/s 4116pkt/s&lt;br /&gt;
Disks [ OK ]: in 0.002MB/s, out 0.000MB/s&lt;br /&gt;
&lt;br /&gt;
  VEID ST    %VM     %KM         PROC    CPU     SOCK FCNT MLAT IP&lt;br /&gt;
     1 OK 1.0/17  0.0/0.4    0/32/256 0.0/0.5 39/1256    0    9 69.55.227.152&lt;br /&gt;
    21 OK 1.3/39  0.1/0.2    0/46/410 0.2/2.8 23/1860    0    6 69.55.239.60&lt;br /&gt;
   133 OK 3.1/39  0.1/0.3    1/34/410 6.3/2.8 98/1860    0    0 69.55.227.147&lt;br /&gt;
   263 OK 2.3/39  0.1/0.2    0/56/410 0.3/2.8 34/1860    0    1 69.55.237.74&lt;br /&gt;
   456 OK  17/39  0.1/0.2   0/100/410 0.1/2.8 48/1860    0   11 69.55.236.65&lt;br /&gt;
   476 OK 0.6/39  0.0/0.2    0/33/410 0.1/2.8 96/1860    0   10 69.55.227.151&lt;br /&gt;
   524 OK 1.8/39  0.1/0.2    0/33/410 0.0/2.8 28/1860    0    0 69.55.227.153&lt;br /&gt;
   594 OK 3.1/39  0.1/0.2    0/45/410 0.0/2.8 87/1860    0    1 69.55.239.40&lt;br /&gt;
   670 OK 7.7/39  0.2/0.3    0/98/410 0.0/2.8 64/1860    0  216 69.55.225.136&lt;br /&gt;
   691 OK 2.0/39  0.1/0.2    0/31/410 0.0/0.7 25/1860    0    1 69.55.234.96&lt;br /&gt;
   744 OK 0.1/17  0.0/0.5    0/10/410 0.0/0.7  7/1860    0    6 69.55.224.253&lt;br /&gt;
   755 OK 1.1/39  0.0/0.2    0/27/410 0.0/2.8 33/1860    0    0 192.168.1.4&lt;br /&gt;
   835 OK 1.1/39  0.0/0.2    0/19/410 0.0/2.8  5/1860    0    0 69.55.227.134&lt;br /&gt;
   856 OK 0.3/39  0.0/0.2    0/13/410 0.0/2.8 16/1860    0    0 69.55.227.137&lt;br /&gt;
   936 OK 3.2/52  0.2/0.4    0/75/410 0.2/0.7 69/1910    0    8 69.55.224.181&lt;br /&gt;
  1020 OK 3.9/39  0.1/0.2    0/60/410 0.1/0.7 55/1860    0    8 69.55.227.52&lt;br /&gt;
  1027 OK 0.3/39  0.0/0.2    0/14/410 0.0/2.8 17/1860    0    0 69.55.227.83&lt;br /&gt;
  1029 OK 1.9/39  0.1/0.2    0/48/410 0.2/2.8 25/1860    0    5 69.55.227.85&lt;br /&gt;
  1032 OK  12/39  0.1/0.4    0/80/410 0.0/2.8 41/1860    0    8 69.55.227.90&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you are all done, you will want to make sure that all the VEs really did get started, run vs one more time.&lt;br /&gt;
&lt;br /&gt;
Note the time all ve’s are back up and enter that into and save the crash log entry.&lt;br /&gt;
&lt;br /&gt;
Occasionally, a ve will not start automatically. The most common reason for a ve not to come up normally is that the ve was at its disk limit before the crash, and will not start since it’s over the limit. To overcome this, set the disk space to the current usage level (the system will give this to you when it fails to start), start the ve, then re-set the disk space back to the prior level. Lastly, contact the customer to let them know they’re out of disk (or allocate more disk if they&#039;re entitled to more).&lt;br /&gt;
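&lt;br /&gt;
That bump-start-restore dance can be scripted (a hedged sketch; the function name and values are placeholders, and the current-usage figure is the one reported by the failed start):&lt;br /&gt;

```shell
# Temporarily raise DISKSPACE to the current usage, start the VE,
# then put the original soft:hard limits back. Values are placeholders.
recover_overquota() {
    veid=$1 current=$2 orig=$3
    vzctl set "$veid" --diskspace "$current:$current" --save &&
    vzctl start "$veid" &&
    vzctl set "$veid" --diskspace "$orig" --save
}
```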
&lt;br /&gt;
== Hitting performance barriers and fixing them ==&lt;br /&gt;
&lt;br /&gt;
There are multiple modes Virtuozzo offers for allocating resources to a VE. We utilize two: SLM and UBC parameters.&lt;br /&gt;
On our 4.x systems, we use SLM exclusively – it’s simpler to manage and understand. There are a few systems on virt19/18 that may also use SLM. Everything else uses UBC. &lt;br /&gt;
You can identify an SLM VE by:&lt;br /&gt;
&lt;br /&gt;
 SLMMODE=&amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in their conf file. &lt;br /&gt;
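A quick way to check for that flag without opening the file - a minimal sketch, run here against a throwaway conf file rather than a real /vzconf entry:&lt;br /&gt;

```shell
# Prints SLM if the conf file sets SLMMODE="all", UBC otherwise.
slm_mode() {
  if grep -q '^SLMMODE="all"' "$1"; then echo SLM; else echo UBC; fi
}

# Demo against a throwaway file standing in for a real conf file
conf=$(mktemp)
printf 'VE_ROOT="/vz/root/1234"\nSLMMODE="all"\n' > "$conf"
slm_mode "$conf"   # prints: SLM
rm -f "$conf"
```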
&lt;br /&gt;
TODO: detail SLM modes and parameters.&lt;br /&gt;
&lt;br /&gt;
If someone is in SLM mode and they hit memory resource limits, they simply need to upgrade to more memory.&lt;br /&gt;
&lt;br /&gt;
The following applies to everyone else (UBC).&lt;br /&gt;
&lt;br /&gt;
Customers will often email and say that they are getting out-of-memory errors - a common one is &amp;quot;cannot fork&amp;quot; ... basically, anytime you see something odd like this, it means they are hitting one of the limits set in their conf file.&lt;br /&gt;
&lt;br /&gt;
The conf file, however, simply shows their limits - how do we know what they are currently at?&lt;br /&gt;
&lt;br /&gt;
The answer is /proc/user_beancounters - this file contains the current status (and peaks) of their performance settings, and also counts how many times they have hit each barrier.  The output looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;764: kmemsize         384113     898185    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          0        322        322          0&lt;br /&gt;
     privvmpages        1292       7108     610000     615000          0&lt;br /&gt;
     shmpages            270        528      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc               8         23        410        415          0&lt;br /&gt;
     physpages            48       5624          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages        641       6389      13019 2147483647          0&lt;br /&gt;
     numtcpsock            3         21       1210       1215          0&lt;br /&gt;
     numflock              1          3        107        117          0&lt;br /&gt;
     numpty                0          2         19         19          0&lt;br /&gt;
     numsiginfo            0          4        274        274          0&lt;br /&gt;
     tcpsndbuf             0      80928    1800000    1900000          0 &lt;br /&gt;
     tcprcvbuf             0     108976    1800000    1900000          0&lt;br /&gt;
     othersockbuf       2224      37568     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4272     200000     200000          0&lt;br /&gt;
     numothersock          3          9        650        660          0&lt;br /&gt;
     dcachesize        53922     100320     786432     818029          0&lt;br /&gt;
     numfile             161        382       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first column is the name of the counter in question - the same names we saw in the system’s conf file.  The second column is the _current_ value of that counter, the third column is the maximum that counter has ever risen to, the fourth column is the soft limit, and the fifth column is the hard limit (the same as the numbers in that system’s conf file).&lt;br /&gt;
&lt;br /&gt;
The sixth number is the failcount - how many times the current usage has risen to hit the barrier.  It increases as soon as the current usage hits the soft limit.&lt;br /&gt;
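Since the failcount column is what you usually scan for, an awk one-liner can pull out only the counters that have failed. This is a sketch run against a made-up two-line excerpt, not the real file:&lt;br /&gt;

```shell
# Print any counter whose sixth field (the failcnt) is nonzero.
# The excerpt below is invented sample data; on a real system the
# input would be the VE's section of /proc/user_beancounters.
bc=$(mktemp)
cat > "$bc" <<'EOF'
numproc               8         23        410        415          0
numtcpsock            3         21       1210       1215         12
EOF
awk '$6 > 0 { print $1, "failcnt=" $6 }' "$bc"   # prints: numtcpsock failcnt=12
rm -f "$bc"
```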
&lt;br /&gt;
The problem with /proc/user_beancounters is that it actually contains that set of data for every running VE - so you can&#039;t just cat /proc/user_beancounters - it is too long and you get info for every other running system.&lt;br /&gt;
&lt;br /&gt;
You can vzctl enter the system and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 9999&lt;br /&gt;
 cat /proc/user_beancounters&lt;br /&gt;
&lt;br /&gt;
inside their system, and you will see just the stats for their particular system, but entering their system every time you want to look is cumbersome.&lt;br /&gt;
&lt;br /&gt;
So, I wrote a simple script called &amp;quot;vzs&amp;quot; which simply greps for the VEID and spits out the 20 or so lines of that section after it.  For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzs 765:&lt;br /&gt;
765: kmemsize        2007936    2562780    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          8        322        322          0&lt;br /&gt;
     privvmpages       26925      71126     610000     615000          0&lt;br /&gt;
     shmpages          16654      16750      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc              41         57        410        415          0&lt;br /&gt;
     physpages          1794      49160          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages       4780      51270      13019 2147483647          0&lt;br /&gt;
     numtcpsock           23         37       1210       1215          0&lt;br /&gt;
     numflock             17         39        107        117          0&lt;br /&gt;
     numpty                1          3         19         19          0&lt;br /&gt;
     numsiginfo            0          6        274        274          0&lt;br /&gt;
     tcpsndbuf         22240     333600    1800000    1900000          0&lt;br /&gt;
     tcprcvbuf             0     222656    1800000    1900000          0&lt;br /&gt;
     othersockbuf     104528     414944     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4448     200000     200000          0&lt;br /&gt;
     numothersock         73        105        650        660          0&lt;br /&gt;
     dcachesize       247038     309111     786432     818029          0&lt;br /&gt;
     numfile             904       1231       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That showed us just the portion of /proc/user_beancounters for system 765.&lt;br /&gt;
&lt;br /&gt;
When you run the vzs command, always add a : after the VEID.&lt;br /&gt;
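The vzs script itself isn’t reproduced here, but a minimal equivalent can be sketched in awk: print lines from the matching VEID header until the next header. The two-container sample below is invented; the real input would be /proc/user_beancounters:&lt;br /&gt;

```shell
# Sketch of vzs: print one VE's section of the beancounters file.
# A header line starts with "NNN:"; print from the requested header
# until the next header. Sample data is invented.
vzs_sketch() {  # usage: vzs_sketch 765: <file>
  awk -v id="$1" '$1 ~ /^[0-9]+:$/ { show = ($1 == id) } show' "$2"
}

ubc=$(mktemp)
cat > "$ubc" <<'EOF'
764: kmemsize    384113    898185   8100000   8200000   0
     numproc          8        23       410       415   0
765: kmemsize   2007936   2562780   8100000   8200000   0
     numproc         41        57       410       415   0
EOF
vzs_sketch 765: "$ubc"   # prints only the two 765 lines
rm -f "$ubc"
```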
&lt;br /&gt;
So, if a customer complains about out-of-memory errors, or no more files, or no more ptys, or just has a nonspecific complaint about processes dying, etc., the very first thing you need to do is check their beancounters with vzs.  Usually you will spot an item that has a high failcount and needs to be raised.&lt;br /&gt;
&lt;br /&gt;
At that point you could simply up the counter with `vzctl set`.  Generally pick a number 10-20% higher than the old one, and make the hard limit slightly larger than the soft limit. However, our systems now come in several levels, and those levels have more/different memory allocations. If someone is complaining about something other than a memory limit (pty, numiptent, numflock), it’s generally safe to increase it, at least to the same level as what’s in the /vzconf/4unlimited file on the newest virt. If someone is hitting a memory limit, first make sure they are given what they deserve:&lt;br /&gt;
&lt;br /&gt;
(refer to mgmt -&amp;gt; payments -&amp;gt; packages)&lt;br /&gt;
&lt;br /&gt;
To set those levels, you use the [[#setmem|setmem]] command. &lt;br /&gt;
&lt;br /&gt;
The alternate (DEPRECATED) method would be to use one of these commands:&lt;br /&gt;
 256 &amp;lt;veid&amp;gt;&lt;br /&gt;
 300 &amp;lt;veid&amp;gt;&lt;br /&gt;
 384 &amp;lt;veid&amp;gt;&lt;br /&gt;
 512 &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the levels were not right (run vzs &amp;lt;veid&amp;gt; before and after to see the effect), tell the customer they’ve been adjusted and be done with it. If the levels were right, tell the customer they must upgrade to a higher package, tell them how to see their level (control panel), and note that they can reboot their system to escape this lockup condition.&lt;br /&gt;
&lt;br /&gt;
Customers can also complain that their site is totally unreachable, or complain that it is down ... if the underlying machine is up, and all seems well, you may notice in the beancounters that network-specific counters are failing - such as numtcpsock, tcpsndbuf or tcprcvbuf.  This will keep them from talking on the network and make it seem like their system is down.  Again, just up the limits and things should be fine.&lt;br /&gt;
&lt;br /&gt;
On virts 1-4, you should first look at the default settings for that item on a later virt, such as virt 8 - we have increased the defaults a lot since the early machines.  So, if you are going to up a counter on virt2, instead of upping it by 10-20%, instead up it to the new default that you see on virt8.&lt;br /&gt;
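The 10-20% bump described above is simple arithmetic; here is a sketch that prints the resulting vzctl invocation. The VEID, the counter name, and the starting limit are hypothetical examples:&lt;br /&gt;

```shell
# Given a counter's old soft limit, print a vzctl set command with a
# 15% larger soft limit and a hard limit ~5% above that. The VEID,
# counter name, and starting value are made-up examples.
bump_cmd() {
  veid=$1; counter=$2; old_soft=$3
  new_soft=$(( old_soft * 115 / 100 ))
  new_hard=$(( new_soft + new_soft / 20 ))
  echo "vzctl set $veid --$counter ${new_soft}:${new_hard} --save"
}
bump_cmd 765 numflock 107   # prints: vzctl set 765 --numflock 123:129 --save
```

Always sanity-check the computed numbers against the defaults on a newer virt before applying them.&lt;br /&gt;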
&lt;br /&gt;
== Moving a VE to another virt (migrate/migrateonline) ==&lt;br /&gt;
&lt;br /&gt;
This will take a while to complete - and it is best to do this at night when the load is light on both machines.&lt;br /&gt;
&lt;br /&gt;
There are different methods for this, depending on which version of virtuozzo is installed on the src. and dst. virt. &lt;br /&gt;
To check which version is running: &lt;br /&gt;
 [root@virt12 private]# cat /etc/virtuozzo-release&lt;br /&gt;
 Virtuozzo release 2.6.0&lt;br /&gt;
&lt;br /&gt;
Ok, let&#039;s say that the VE is 1212, and vital stats are:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1212&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/1212&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/1212&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;fedora-core-2/20040903&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.229.84&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;devel-fc2/20040903 php-fc2/20040813 mysql-fc2/20040812 postgresql-fc2/20040813 mod_perl-fc2/20040812 mod_ssl-fc2/20040811 jre-fc2/20040823 jdk-fc2/20040823 mailman-fc2/20040823 analog-fc2/20040824 proftpd-fc2/20040818 tomcat-fc2/20040823 usermin-fc2/20040909 webmin-fc2/20040909 uw-imap-fc2/20040830 phpBB-fc2/20040831 spamassassin-fc2/20040910 PostNuke-fc2/20040824 sl-webalizer-fc2/20040&lt;br /&gt;
818&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# vzctl exec 1212 df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  405M  3.7G  10% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From this you can see that he’s using (and will minimally need free on the dst server) ~400MB, and he’s running on a Fedora Core 2 template, version 20040903. He’s also got a bunch of other templates installed. It is &#039;&#039;&#039;vital&#039;&#039;&#039; that &#039;&#039;&#039;all&#039;&#039;&#039; of these templates exist on the dst system. To confirm that, on the dst system run:&lt;br /&gt;
&lt;br /&gt;
For &amp;lt; 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt14 private]# vzpkgls | grep fc2&lt;br /&gt;
devel-fc2 20040903&lt;br /&gt;
PostNuke-fc2 20040824&lt;br /&gt;
analog-fc2 20040824&lt;br /&gt;
awstats-fc2 20040824&lt;br /&gt;
bbClone-fc2 20040824&lt;br /&gt;
jdk-fc2 20040823&lt;br /&gt;
jre-fc2 20040823&lt;br /&gt;
mailman-fc2 20040823&lt;br /&gt;
mod_frontpage-fc2 20040816&lt;br /&gt;
mod_perl-fc2 20040812&lt;br /&gt;
mod_ssl-fc2 20040811&lt;br /&gt;
mysql-fc2 20040812&lt;br /&gt;
openwebmail-fc2 20040817&lt;br /&gt;
php-fc2 20040813&lt;br /&gt;
phpBB-fc2 20040831&lt;br /&gt;
postgresql-fc2 20040813&lt;br /&gt;
proftpd-fc2 20040818&lt;br /&gt;
sl-webalizer-fc2 20040818&lt;br /&gt;
spamassassin-fc2 20040910&lt;br /&gt;
tomcat-fc2 20040823&lt;br /&gt;
usermin-fc2 20040909&lt;br /&gt;
uw-imap-fc2 20040830&lt;br /&gt;
webmin-fc2 20040909&lt;br /&gt;
[root@virt14 private]# vzpkgls | grep fedora&lt;br /&gt;
fedora-core-1 20040121 20040818&lt;br /&gt;
fedora-core-devel-1 20040121 20040818&lt;br /&gt;
fedora-core-2 20040903&lt;br /&gt;
[root@virt14 private]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For these older systems, you can simply match up the date on the template. &lt;br /&gt;
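One way to do that matching mechanically is to capture the vzpkgls output from each host and diff the sorted lists; whatever is unique to the src still needs installing on the dst. The two sample lists below are invented:&lt;br /&gt;

```shell
# List templates present on the src but missing on the dst, using
# comm(1) on sorted vzpkgls captures. Sample data is invented.
src_list=$(mktemp); dst_list=$(mktemp)
printf 'devel-fc2 20040903\nmysql-fc2 20040812\nphpBB-fc2 20040831\n' > "$src_list"
printf 'devel-fc2 20040903\nmysql-fc2 20040812\n' > "$dst_list"
sort -o "$src_list" "$src_list"
sort -o "$dst_list" "$dst_list"
comm -23 "$src_list" "$dst_list"   # prints: phpBB-fc2 20040831
rm -f "$src_list" "$dst_list"
```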
&lt;br /&gt;
For &amp;gt;= 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt19 /vz2/private]# vzpkg list&lt;br /&gt;
centos-5-x86                    2008-01-07 22:05:57&lt;br /&gt;
centos-5-x86    devel&lt;br /&gt;
centos-5-x86    jre&lt;br /&gt;
centos-5-x86    jsdk&lt;br /&gt;
centos-5-x86    mod_perl&lt;br /&gt;
centos-5-x86    mod_ssl&lt;br /&gt;
centos-5-x86    mysql&lt;br /&gt;
centos-5-x86    php&lt;br /&gt;
centos-5-x86    plesk9&lt;br /&gt;
centos-5-x86    plesk9-antivirus&lt;br /&gt;
centos-5-x86    plesk9-api&lt;br /&gt;
centos-5-x86    plesk9-atmail&lt;br /&gt;
centos-5-x86    plesk9-backup&lt;br /&gt;
centos-5-x86    plesk9-horde&lt;br /&gt;
centos-5-x86    plesk9-mailman&lt;br /&gt;
centos-5-x86    plesk9-mod-bw&lt;br /&gt;
centos-5-x86    plesk9-postfix&lt;br /&gt;
centos-5-x86    plesk9-ppwse&lt;br /&gt;
centos-5-x86    plesk9-psa-firewall&lt;br /&gt;
centos-5-x86    plesk9-psa-vpn&lt;br /&gt;
centos-5-x86    plesk9-psa-fileserver&lt;br /&gt;
centos-5-x86    plesk9-qmail&lt;br /&gt;
centos-5-x86    plesk9-sb-publish&lt;br /&gt;
centos-5-x86    plesk9-vault&lt;br /&gt;
centos-5-x86    plesk9-vault-most-popular&lt;br /&gt;
centos-5-x86    plesk9-watchdog&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On these newer systems, it&#039;s difficult to tell whether the template on the dst matches the src exactly. Just because a centos-5-x86 is listed on both servers doesn&#039;t mean all the same packages are on the dst. To truly know, you must perform a sample rsync:&lt;br /&gt;
&lt;br /&gt;
 rsync -avn /vz/template/centos/5/x86/ root@10.1.4.61:/vz/template/centos/5/x86/&lt;br /&gt;
&lt;br /&gt;
If you see a ton of output from the dry-run command, then clearly there are some differences. You may opt to let the rsync complete (without the dry-run flag); the only downside is that you&#039;ve now used more space on the dst, and the centos template will be a mess of old and new data - difficult if not impossible to undo (if someday we wanted to reclaim the space).&lt;br /&gt;
&lt;br /&gt;
If you choose to merge templates, you should closely inspect the dry run output. You should also take care to exclude anything in the /config directory. For example:&lt;br /&gt;
&lt;br /&gt;
 rsync -av -e ssh --stats --exclude=x86/config  /vz/template/ubuntu/10.04/ root@10.1.4.62:/vz/template/ubuntu/10.04/&lt;br /&gt;
&lt;br /&gt;
Which will avoid this directory and contents:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 /vz2/private]# ls /vz/template/ubuntu/10.04/x86/config*&lt;br /&gt;
app  os&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is important to avoid since the config may differ on the destination, and we are really only interested in making sure the packages are there, not in overwriting a newer config with an older one.&lt;br /&gt;
&lt;br /&gt;
If the dst system was missing a template, you have 2 choices: &lt;br /&gt;
# put the missing template on the dst system. 2 choices here: &lt;br /&gt;
## Install the template from rpm (found under backup2: /mnt/data4/vzrpms/distro/) or &lt;br /&gt;
## rsync over the template (found under /vz/template) - see above&lt;br /&gt;
# put the ve on a system which has all the proper templates&lt;br /&gt;
&lt;br /&gt;
=== pre-seeding a migration ===&lt;br /&gt;
&lt;br /&gt;
When migrating a customer (or when doing many) depending on how much data you have to transfer, it can take some time. Further, it can be difficult to gauge when a migration will complete or how long it will take. To help speed up the process and get a better idea about how long it will take you can pre-transfer a customer&#039;s data to the destination server. If done correctly, vzmigrate will see the pre-transferred data and pick up where you left off, having much less to transfer (just changed/new files). &lt;br /&gt;
&lt;br /&gt;
We believe vzmigrate uses rsync to do its transfer. Therefore, not only can you use rsync to do a pre-seed, you can also run rsync to see what is causing a repeatedly-failing vzmigrate to fail. &lt;br /&gt;
&lt;br /&gt;
There&#039;s no magic to a pre-seed; you just need to make sure the destination directory is named correctly.&lt;br /&gt;
&lt;br /&gt;
Given:&lt;br /&gt;
&lt;br /&gt;
source: /vz1/private/1234&lt;br /&gt;
&lt;br /&gt;
and you want to migrate to /vz2 on the target system, your rsync would look like:&lt;br /&gt;
&lt;br /&gt;
 rsync -av /vz1/private/1234/ root@x.x.x.x:/vz2/private/1234.migrated/&lt;br /&gt;
&lt;br /&gt;
After running that successful rsync, the ensuing migrateonline (or migrate) will take much less time to complete - depending on the number of files to be analyzed and the number of changed files. In any case, it&#039;ll be much faster than starting the migration from scratch.&lt;br /&gt;
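Putting the naming rule together, a tiny helper can compose the pre-seed rsync command line; the VEID, partitions, and host IP in the demo call are hypothetical:&lt;br /&gt;

```shell
# Build the pre-seed rsync command line: source private area to
# <dst>/private/<veid>.migrated/ on the target. All arguments in the
# demo call are made-up examples.
preseed_cmd() {
  veid=$1; src_part=$2; dst_host=$3; dst_part=$4
  echo "rsync -avSH /$src_part/private/$veid/ root@$dst_host:/$dst_part/private/$veid.migrated/"
}
preseed_cmd 1234 vz1 10.1.4.64 vz2
```

Note the trailing slashes and the .migrated suffix - that is what lets vzmigrate pick up where the pre-seed left off.&lt;br /&gt;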
&lt;br /&gt;
Further, as we discuss elsewhere in this topic, a failed migration can be moved from &amp;lt;tt&amp;gt;/vz/private/1234&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;/vz/private/1234.migrated&amp;lt;/tt&amp;gt; on the destination if you want to restart a failed migration. This should &#039;&#039;&#039;only&#039;&#039;&#039; be done if the migration failed and the CT is not running on the destination HN.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline instructions: src &amp;gt;=3.x -&amp;gt; dst&amp;gt;=3.x ===&lt;br /&gt;
&lt;br /&gt;
A script called [[#migrateonline|migrateonline]] was written to handle this kind of move. It is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; – vzmigrate is a util to seamlessly move a VE from one host to another, with no reboot of the VE necessary. This wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the VE’s ip(s) on the src system were not properly removed from arp/route tables, causing problems when the VE was started up on the dst system. [[#migrate|migrate]] mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrateonline emails VE owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the VE will be moved to the same private/root location as it was on the src virt. Note: &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; is equivalent to &amp;lt;tt&amp;gt;migrateonline&amp;lt;/tt&amp;gt;, but will migrate a VE AND restart it in the process.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrateonline&lt;br /&gt;
usage: /usr/local/sbin/migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrateonline 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine.&lt;br /&gt;
&lt;br /&gt;
If they had backups, use the mvbackups command to move their backups to the new server:&lt;br /&gt;
&lt;br /&gt;
 mvbackups 1212 virt14 vz&lt;br /&gt;
&lt;br /&gt;
Rename the VE:&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/migrated-1212&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/old-1212-migrated-20120404-noarchive&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems in mgmt to reflect the new path and server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If migrateonline does not work, you can try again using simply migrate - this will result in a brief reboot for the VE.&lt;br /&gt;
Before you try again, make sure of a few things:&lt;br /&gt;
&lt;br /&gt;
Depending on where the migration died, there may be partial data on the dst system in 1 of 2 places:&lt;br /&gt;
(given the example above)&lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
before you run migrate again, you&#039;ll want to rename so that all data is in &lt;br /&gt;
1212.migrated:&lt;br /&gt;
&lt;br /&gt;
 mv /vz/private/1212 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
this way, it will pick up where it left off and transfer only new files.&lt;br /&gt;
&lt;br /&gt;
Likewise, if you want to speed up a migration, you can pre-seed the dst as follows:&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# rsync -avSH /vz/private/1212/ root@10.1.4.64:/vz/private/1212.migrated/&lt;br /&gt;
&lt;br /&gt;
then when you run migrate or migrateonline, it will only need to move the changed files - the migration will complete quickly.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline/migrate failures (migrate manually) ===&lt;br /&gt;
&lt;br /&gt;
Let’s say that for whatever reason the migration fails. If it fails with [[#migrateonline|migrateonline]], you should try [[#migrate|migrate]] (which will reboot the customer, so notify them ahead of time).&lt;br /&gt;
&lt;br /&gt;
You may want to run a [[#pre-seeding_a_migration|pre-seed]] rsync to see if you can find the problem. On older virts, we&#039;ve seen this problem due to a large logfile (which you can find and encourage the customer to remove/compress):&lt;br /&gt;
 for f in `find / -size +1048576k`; do ls -lh $f; done&lt;br /&gt;
&lt;br /&gt;
You may also see migration failing due to quota issues.&lt;br /&gt;
&lt;br /&gt;
You can try to resolve this by copying any quota file into the file you need:&lt;br /&gt;
&lt;br /&gt;
 cp /var/vzquota/quota.1 /var/vzquota/quota.xxx&lt;br /&gt;
&lt;br /&gt;
If it complains about quota running, you should then be able to stop it:&lt;br /&gt;
&lt;br /&gt;
 vzquota off xxxx&lt;br /&gt;
&lt;br /&gt;
If all else fails, migrate to a new VEID&lt;br /&gt;
i.e. 1234 becomes 12341&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the rsync or [[#migrate|migrate]] fails, you can always move someone manually:&lt;br /&gt;
&lt;br /&gt;
1. stop ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v stop 1234&lt;br /&gt;
&lt;br /&gt;
2. copy over data&amp;lt;br&amp;gt;&lt;br /&gt;
 rsync -avSH /vz/private/1234/ root@1.1.1.1:/vzX/private/1234/&lt;br /&gt;
&lt;br /&gt;
NOTE: if you&#039;ve previously seeded the data (ran rsync while the VE was up/running), and this is a subsequent rsync, make sure the last rsync you do (while the VE is not running) has the --delete option.&lt;br /&gt;
&lt;br /&gt;
3. copy over conf&amp;lt;br&amp;gt;&lt;br /&gt;
 scp /vzconf/1234.conf root@1.1.1.1:/vzconf&lt;br /&gt;
&lt;br /&gt;
4. on dst, edit the conf to reflect the right vzX dir&amp;lt;br&amp;gt;&lt;br /&gt;
 vi /vzconf/1234.conf&lt;br /&gt;
&lt;br /&gt;
5. on src remove the IPs&amp;lt;br&amp;gt;&lt;br /&gt;
 ipdel 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
6. on dst add IPs &amp;lt;br&amp;gt;&lt;br /&gt;
 ipadd 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
7. on dst, start ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v start 1234&lt;br /&gt;
&lt;br /&gt;
8. cancel, then archive the VE on the src per the above instructions.&lt;br /&gt;
&lt;br /&gt;
=== migrate src=2.6.0 -&amp;gt; dst&amp;gt;=2.6.0, or mass-migration with customer notify ===&lt;br /&gt;
&lt;br /&gt;
A script called &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; was written to handle this kind of move. It is basically a wrapper for vzmigrate – vzmigrate is a util to seamlessly move a VE from one host to another. This wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the VE’s ip(s) on the src system were not properly removed from arp/route tables, causing problems when the VE was started up on the dst system. migrate mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrate emails VE owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the VE will be moved to the same private/root location as it was on the src virt. Note: migrateonline is equivalent to migrate, but will migrate a VE from one 2.6 &#039;&#039;&#039;kernel&#039;&#039;&#039; machine to another 2.6 kernel machine without restarting the VE.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrate&lt;br /&gt;
usage: /usr/local/sbin/migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrate 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which migrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
&lt;br /&gt;
On 2.6.1 you’ll also have to move the private area:&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, [[#cancelve|cancelve]] would offer to remove them. You want to say &#039;&#039;&#039;no&#039;&#039;&#039; to this option – removing them would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them, and eventually they’ll drop out. Besides, moving hardlinked files across filesystems as in the example above will create actual files and consume much more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, update the system, template (should be pre-selected as the same), and the shut down date.&lt;br /&gt;
&lt;br /&gt;
=== vzmigrate: src=2.6.1 -&amp;gt; dst&amp;gt;=2.6.0 ===&lt;br /&gt;
&lt;br /&gt;
This version of vzmigrate handles IPs properly. It will not notify VE owners of moves as in the above example. Other than that it’s essentially the same.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]#  vzmigrate 10.1.4.64 -r no 1212:1212:/vz/private/1212:/vz/root/1212&lt;br /&gt;
migrating on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which vzmigrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
[root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, &amp;lt;tt&amp;gt;cancelve&amp;lt;/tt&amp;gt; would offer to remove them. Say no to this option – removing them would mean the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (note the path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems as in the example above will create actual files and consume much more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
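&lt;br /&gt;
To see why, note that hardlinks share a single inode: a move within one filesystem just relinks the name, while a cross-filesystem move must copy the data. A minimal illustration (throwaway temp files, not real backup dirs):&lt;br /&gt;

```shell
# Hardlinks share one inode: a second link costs no extra space,
# and the link count reflects it. A cross-filesystem mv cannot keep
# this sharing and writes out a full copy instead.
d=$(mktemp -d)
echo data > "$d/a"
ln "$d/a" "$d/b"       # second hardlink to the same inode
stat -c %h "$d/a"      # prints 2: two names, one inode, one copy on disk
rm -rf "$d"
```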
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, then update the system, the template (it should be pre-selected to the same one), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
=== src=2.5.x ===&lt;br /&gt;
&lt;br /&gt;
First, go to the private dir:&lt;br /&gt;
&lt;br /&gt;
 cd /vz1/private/&lt;br /&gt;
&lt;br /&gt;
Stop the VE - make sure it stops totally cleanly.&lt;br /&gt;
 &lt;br /&gt;
 vzctl stop 1212&lt;br /&gt;
&lt;br /&gt;
Then you’d use vemove - a script written to copy over the config, create tarballs of the ve’s data on the destination virt, and cancel the ve on the source system (in this example we’re going to put a ve that was in /vz1/private on the src virt, in /vz/private on the dst virt):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vemove&lt;br /&gt;
ERROR: Usage: vemove veid target_ip target_path_dir&lt;br /&gt;
[root@virt12 sbin]# vemove 1212 10.1.4.64 /vz/private/1212&lt;br /&gt;
tar cfpP - 1212 --ignore-failed-read | (ssh -2 -c arcfour 10.1.4.64 &amp;quot;split - -b 1024m /vz/private/1212.tar&amp;quot; )&lt;br /&gt;
scp /vzconf/1212.conf 10.1.4.64:/vzconf&lt;br /&gt;
cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, cancelve would offer to remove them. Say no to this option – removing them would mean the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (note the path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems as in the example above will create actual files and consume much more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
When you are done, go to /vz/private on the dst virt; you will have files like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;1212.taraa&lt;br /&gt;
1212.tarab&lt;br /&gt;
1212.tarac&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each one 1024m (or less, for the last one) in size.&lt;br /&gt;
&lt;br /&gt;
Then, on the dst server, run:&lt;br /&gt;
&lt;br /&gt;
 cat 1212.tar?? | tar xpPBf -&lt;br /&gt;
&lt;br /&gt;
and after 20 mins or so it will be totally untarred.  Now since the conf&lt;br /&gt;
file is already there, you can go ahead and start the system.&lt;br /&gt;
&lt;br /&gt;
 vzctl start 1212&lt;br /&gt;
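&lt;br /&gt;
The split/reassemble step can be sanity-checked with a toy round trip - plain GNU split/cat with small chunks here; the filenames are made up, and a real restore must use the virtuozzo tar as noted below:&lt;br /&gt;

```shell
# Toy round trip of the split/reassemble pattern (16k chunks instead of
# the production -b 1024m; payload.tar is a stand-in for the VE tarball).
work=$(mktemp -d); cd "$work"
head -c 100000 /dev/zero > payload.tar
split -b 16k payload.tar payload.tar    # payload.taraa, payload.tarab, ...
cat payload.tar?? > rebuilt.tar         # glob sorts aa, ab, ... in order
cmp payload.tar rebuilt.tar && echo OK  # identical after reassembly
cd /; rm -rf "$work"
```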
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, then update the system, the template (it should be pre-selected to the same one), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
NOTE: you MUST tar the system up using the virtuozzo version of tar that&lt;br /&gt;
is on all the virt systems, and further you MUST untar the tarball with&lt;br /&gt;
the virtuozzo tar, using these options:  `&amp;lt;tt&amp;gt;tar xpPBf -&amp;lt;/tt&amp;gt;`&lt;br /&gt;
&lt;br /&gt;
If you tar up an entire VE and move it to a non-virtuozzo machine, that is&lt;br /&gt;
ok, and you can untar it there with normal tar commands, but do not untar&lt;br /&gt;
it and then repack it with a normal tar and expect it to work - you need&lt;br /&gt;
to use virtuozzo tar commands on virtuozzo tarballs to make it work.&lt;br /&gt;
&lt;br /&gt;
The backups are sort of an exception, since we are just (usually)&lt;br /&gt;
restoring user data that was created after we gave them the system, and&lt;br /&gt;
therefore has nothing to do with magic symlinks or vz-rpms, etc.&lt;br /&gt;
&lt;br /&gt;
== Moving a VE on the same virt ==&lt;br /&gt;
&lt;br /&gt;
Easy way:&amp;lt;br&amp;gt;&lt;br /&gt;
Scenario 1: ve 123 is to be renamed 1231 and moved from vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:1231:/vz/private/1231:/vz/root/1231&lt;br /&gt;
&lt;br /&gt;
Scenario 2: ve 123 is to be moved vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:123:/vz/private/123:/vz/root/123&lt;br /&gt;
&lt;br /&gt;
vzmlocal will reboot the VE at the end of the move.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; don&#039;t forget to fix /usr/local/sbin/backup.config for new disk location&lt;br /&gt;
&lt;br /&gt;
Manual/old way:&lt;br /&gt;
&lt;br /&gt;
1) &amp;lt;tt&amp;gt;vzctl stop 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2) &amp;lt;tt&amp;gt;mv /vz1/private/123 /vz/private/.&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(or cp -a if you want to copy)&lt;br /&gt;
3) in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts/123.conf&amp;lt;/tt&amp;gt; change value&amp;lt;br&amp;gt;&lt;br /&gt;
of &#039;&amp;lt;tt&amp;gt;VE_PRIVATE&amp;lt;/tt&amp;gt;&#039; variable to point to a new private area location&lt;br /&gt;
4) &amp;lt;tt&amp;gt;vzctl start 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
5) update backups if needed: &amp;lt;tt&amp;gt;mvbackups 123 virtX virt1 vz&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
6) update management screens&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notes: a) absolute path to private area is stored in quota file &amp;lt;tt&amp;gt;/var/vzquota/quota.123&amp;lt;/tt&amp;gt; - so during first startup quota will be recalculated.&amp;lt;br&amp;gt;&lt;br /&gt;
b) if you&#039;re going to write a script to do the job, you MUST be sure that $VEID won&#039;t be expanded to &#039;&#039; in the ve config file - i.e., you need to escape &#039;$&#039;. Otherwise you might have:&lt;br /&gt;
&lt;br /&gt;
 VE_PRIVATE=&amp;quot;/vz/private/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in config, and &#039;vzctl destroy&#039; for this VE ID &#039;&#039;&#039;will remove everything under /vz/private/ directory&#039;&#039;&#039;.&lt;br /&gt;
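&lt;br /&gt;
A quick demonstration of the expansion hazard, using a throwaway temp file in place of a real ve config (the generator script is hypothetical):&lt;br /&gt;

```shell
# With $VEID unset in the generating shell, an unescaped heredoc silently
# produces VE_PRIVATE="/vz/private/" -- exactly the dangerous case above.
conf=$(mktemp)
cat > "$conf" <<EOF
VE_PRIVATE="/vz/private/$VEID"
EOF
grep VE_PRIVATE "$conf"     # VE_PRIVATE="/vz/private/"  (wrong)

# Escaping the dollar sign writes the literal $VEID that vzctl expects.
cat > "$conf" <<EOF
VE_PRIVATE="/vz/private/\$VEID"
EOF
grep VE_PRIVATE "$conf"     # VE_PRIVATE="/vz/private/$VEID"  (right)
rm -f "$conf"
```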
&lt;br /&gt;
== Adding a veth device to a VE ==&lt;br /&gt;
&lt;br /&gt;
Not totally sure what this is, but a customer asked for it and here&#039;s what we did (as instructed by vz support):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;v set 99 --netif_add eth99  --save&lt;br /&gt;
ipdel 99 69.55.230.58&lt;br /&gt;
v set 99 --ifname eth99 --ipadd 69.55.230.58 --save&lt;br /&gt;
v set 99 --ifname eth99 --gateway 69.55.230.1 --save&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active      eth0              veth77.77,veth99.99&lt;br /&gt;
veth_net          active&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
br99             bridge     net99&lt;br /&gt;
veth99.99        veth       net99&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        net99            69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br0 eth0&lt;br /&gt;
&lt;br /&gt;
(will remove 99 from orig net and move to veth_net)&lt;br /&gt;
vznetcfg net addif veth_net veth99.99&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
(delete the old crap)&lt;br /&gt;
vznetcfg net del net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then, to add another device in&lt;br /&gt;
&lt;br /&gt;
v set 77 --netif_add eth77  --save&lt;br /&gt;
ipdel 77 69.55.230.78&lt;br /&gt;
v set 77 --ifname eth77 --ipadd 69.55.230.78 --save&lt;br /&gt;
v set 77 --ifname eth77 --gateway 69.55.230.1 --save&lt;br /&gt;
v set 77 --save --ifname eth77 --network veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif veth_net veth77.77&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth       veth_net&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
another example&lt;br /&gt;
&lt;br /&gt;
v set 1182 --netif_add eth1182  --save&lt;br /&gt;
ipdel 1182 69.55.236.217&lt;br /&gt;
v set 1182 --ifname eth1182 --ipadd 69.55.236.217 --save&lt;br /&gt;
v set 1182 --ifname eth1182 --gateway 69.55.236.1 --save&lt;br /&gt;
vznetcfg net addif veth_net veth1182.1182&lt;br /&gt;
v set 1182 --save --ifname eth1182 --network veth_net&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unused/not working commands:&lt;br /&gt;
ifconfig veth99.0 0&lt;br /&gt;
vznetcfg net list&lt;br /&gt;
vznetcfg br new br99 net99&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg br show&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
&lt;br /&gt;
vznetcfg br new br1182 net1182&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif eth0 net1182&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&lt;br /&gt;
--- 8&amp;lt; ---&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new net&lt;br /&gt;
# vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vzctl set 99 --save --netif_add eth0 (at this stage veth99.0 interface have to appear&lt;br /&gt;
on node)&lt;br /&gt;
# vzctl set 99 --save --ifname eth0 --ipadd 69.55.230.58 (and probably few more arguments&lt;br /&gt;
here - see &#039;man vzctl&#039;)&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Assigning/removing an IP from a VE ==&lt;br /&gt;
&lt;br /&gt;
1. Add or remove IPs:&lt;br /&gt;
 ipdel 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
 ipadd 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
&lt;br /&gt;
2. update Mgmt screens&lt;br /&gt;
&lt;br /&gt;
3. offer to update any DNS we do for them&lt;br /&gt;
&lt;br /&gt;
4. check to see if we had rules for the old IP in the firewall&lt;br /&gt;
&lt;br /&gt;
== Enabling tun device for a ve ==&lt;br /&gt;
Note: there’s a command for this: [[#addtun|addtun]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
Make sure the tun.o module is already loaded before Virtuozzo is started: &lt;br /&gt;
 lsmod &lt;br /&gt;
Allow the VPS to use the TUN/TAP device: &lt;br /&gt;
 vzctl set 101 --devices c:10:200:rw --save &lt;br /&gt;
Create the corresponding device inside the VPS and set the proper permissions: &lt;br /&gt;
 vzctl exec 101 mkdir -p /dev/net &lt;br /&gt;
 vzctl exec 101 mknod /dev/net/tun c 10 200 &lt;br /&gt;
 vzctl exec 101 chmod 600 /dev/net/tun&lt;br /&gt;
&lt;br /&gt;
== Remaking a system (on same virt) ==&lt;br /&gt;
&lt;br /&gt;
1. [[#cancelve|cancelve]] (or v destroy x - ONLY if you&#039;re POSITIVE no data needs to be saved)&lt;br /&gt;
&lt;br /&gt;
2. [[#vemake|vemake]] using same veid&lt;br /&gt;
&lt;br /&gt;
3. [[#mvbackups|mvbackups]] or [[#vb|vb]] (if new mount point)&lt;br /&gt;
&lt;br /&gt;
4. update mgmt with new dir/ip &lt;br /&gt;
&lt;br /&gt;
5. update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Re-initialize quota for a VE ==&lt;br /&gt;
&lt;br /&gt;
There’s a command for this now: [[#clearquota|clearquota]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
&lt;br /&gt;
 vzctl stop 1&lt;br /&gt;
 vzquota drop 1&lt;br /&gt;
 vzctl start 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Traffic accounting on linux ==&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - all tracking is done via bwdb now. This is how we used to track traffic.&lt;br /&gt;
&lt;br /&gt;
TODO: update for diff versions of vz&lt;br /&gt;
&lt;br /&gt;
Unlike FreeBSD, where we have to add firewall count rules to the system to count the traffic, virtuozzo counts the traffic for us.  You can see the current traffic stats by running `vznetstat`:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see the VEID is on a line with the in and out bytes.  So, we simply run a cron job:&lt;br /&gt;
&lt;br /&gt;
 4,9,14,19,24,29,34,39,44,49,55,59 * * * * /root/vztrafdump.sh&lt;br /&gt;
&lt;br /&gt;
Just like we do on FreeBSD - this one goes through all the VEs in /vz/private and greps the line from vznetstat that matches them and dumps it in /jc_traffic_dump on their system.  Then it does it again for all the VEs in /vz1/private.  It is important to note that vznetstat runs only once, and the grepping is done from a temporary file that contains that output - we do this because running vznetstat once for each VE that we read out of /vz/private and /vz1/private would take way too long and be too intensive.&lt;br /&gt;
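&lt;br /&gt;
A minimal sketch of what that cron job does - the jc_traffic_dump location under each VE&#039;s root and the grep pattern are assumptions, not the actual /root/vztrafdump.sh:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch (assumed): run vznetstat ONCE into a temp file, then grep the
# matching counter line per VE -- one vznetstat invocation total,
# instead of one per VE.
tmp=$(mktemp)
vznetstat > "$tmp"
for base in /vz/private /vz1/private; do
    for dir in "$base"/[0-9]*; do
        [ -d "$dir" ] || continue
        veid=$(basename "$dir")
        # drop this VE's counter line inside its own filesystem
        grep "^$veid " "$tmp" >> "$dir/root/jc_traffic_dump"
    done
done
rm -f "$tmp"
```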
&lt;br /&gt;
You do not need to do anything to facilitate this other than make sure that that cron job is running - the vznetstat counters are always running, and any new VEs that are added to the system will be accounted for automatically.&lt;br /&gt;
&lt;br /&gt;
Traffic resetting no longer works with vz 2.6, so we disable the vztrafdump.sh on those virts.&lt;br /&gt;
&lt;br /&gt;
== Watchdog script ==&lt;br /&gt;
&lt;br /&gt;
On some of the older virts, we have a watchdog running that kills procs that are deemed bad per the following:&lt;br /&gt;
&lt;br /&gt;
/root/watchdog from quar1&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [5-9]...&lt;br /&gt;
  then&lt;br /&gt;
# 50-90%&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;...:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;....m&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 1000min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
  if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [1-9]...&lt;br /&gt;
  then&lt;br /&gt;
# running for 10-90 percent&lt;br /&gt;
    if echo $line | awk &#039;{print $NF}&#039; | egrep &#039;cfusion|counter|vchkpw&#039;&lt;br /&gt;
    then&lt;br /&gt;
&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[2-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# between 2-9min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      elif echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[0-9][0-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# up to 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
    fi&lt;br /&gt;
  fi&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Misc Linux Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space ... when you configure a linux system with a certain amount of disk space (the default is 4gigs) you do not actually use up 4gigs of space on the system.  The diskspace setting for a user is simply a cap, and they only use up as much space on the actual disk drive as they are actually using.&lt;br /&gt;
&lt;br /&gt;
When you create a new linux system, even though there are some 300 RPMs or so installed, if you run `df -k` you will see that the entire 4gig partition is empty - no space is being used.  This is because the files in their system are &amp;quot;magic symlinks&amp;quot; to the template for their OS that is in /vz/template - however, any changes to any of those files will &amp;quot;disconnect&amp;quot; them and they will immediately begin using space in their system.  Further, any new files uploaded (even if those new files overwrite existing files) will take up space on the partition.&lt;br /&gt;
&lt;br /&gt;
=== Can&#039;t change file /etc/sysconfig/network ===&lt;br /&gt;
if you see this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt8 root]# vzctl stop 160 ; vzctl start 160&lt;br /&gt;
VE is not running&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is unmounted&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
bash ERROR: Can&#039;t change file /etc/sysconfig/network&lt;br /&gt;
Deleting IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
VE is unmounted&lt;br /&gt;
[root@virt8 root]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
it probably means they no longer have /bin/bash - copy one in for them&lt;br /&gt;
 &lt;br /&gt;
ALSO: another possibility is that they have removed the `ed` RPM from their system - it needs to be reinstalled into their system.  But since their system is down, this is tricky ...&lt;br /&gt;
&lt;br /&gt;
The VE startup scripts used by &#039;vzctl&#039; want the &#039;ed&#039; package to be available inside the VE. So if the &#039;ed&#039; package is enabled in the OS template config and in the OS template that VE #827 is based on, this error should be fixed.&lt;br /&gt;
&lt;br /&gt;
Yes, it is possible to add an RPM to a VE while it is not running.&lt;br /&gt;
Try the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# cd /vz/template/&amp;lt;OS_template_with_ed_package&amp;gt;/&lt;br /&gt;
# vzctl mount 827&lt;br /&gt;
# rpm -Uvh --root /vz/root/827 --veid 827 ed-0.2-25.i386.vz.rpm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usually there&#039;s an error, but it&#039;s OK.&lt;br /&gt;
&lt;br /&gt;
Note: replace &#039;ed-0.2-25.i386.vz.rpm&#039; in the last command with the actual&lt;br /&gt;
version of the &#039;ed&#039; package you have.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== What template is user using ===&lt;br /&gt;
So how do I know what template the user has? cat their conf file and it is listed in there. For example, if the conf file has:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1103&lt;br /&gt;
…snip…&lt;br /&gt;
OSTEMPLATE=&amp;quot;debian-3.0/20030822&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;mod_perl-deb30/20030707 mod_ssl-deb30/20030703 mysql-deb30/20030707 proftpd-deb30/20030703 webmin-deb30/20030823 &amp;quot;&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then they are on debian 3.0, all of their system RPMs are in /vz/template/debian-3.0, and they are using version 20030822 of that debian 3.0 template. They’ve also got additional packages installed (mod_perl, mod_ssl, etc.); those are also found under /vz/template&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Edits needed to run java ===&lt;br /&gt;
&lt;br /&gt;
When we first created the VEs, the default setting for privvmpages was 93000:94000 ... which was high enough that most people never had problems ... however, you can&#039;t run java or the JDK or tomcat or anything java-related with that setting.  We have found that java runs just fine with privvmpages set to 610000:615000.  That is now the default setting. It is exceedingly rare that anyone needs it higher than that, although we have seen it once or twice.&lt;br /&gt;
&lt;br /&gt;
Any problems with java at all - the first thing to do is check whether the failcnt has risen for privvmpages.&lt;br /&gt;
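&lt;br /&gt;
For example, the failcnt is the last column of the privvmpages line in the beancounters output - VE 160 and the sample line below are hypothetical:&lt;br /&gt;

```shell
# In production: vzctl exec 160 cat /proc/user_beancounters | grep privvmpages
# Sample line (held / maxheld / barrier / limit / failcnt are assumed values):
line='privvmpages    88211    93544    93000    94000    12'
failcnt=$(echo "$line" | awk '{print $NF}')
echo "$failcnt"     # 12 -> allocations were denied; raise the limit:
# vzctl set 160 --privvmpages 610000:615000 --save
```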
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzctl start 160&lt;br /&gt;
Starting VE ...&lt;br /&gt;
vzquota : (error) Quota on syscall for 160: Device or resource busy&lt;br /&gt;
Running vzquota on failed for VE 160 [3]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is because my pwd is _in_ their private directory - you can&#039;t start the VE until you cd out.&lt;br /&gt;
&lt;br /&gt;
People seem to have trouble with php if they are clueless newbies.  Here are two common problems/solutions:&lt;br /&gt;
&lt;br /&gt;
no... but i figured it out myself. problem was the php.ini file that came&lt;br /&gt;
vanilla with the account was not configured to work with apache (the&lt;br /&gt;
ENGINE directive was set to off).&lt;br /&gt;
&lt;br /&gt;
everything else seems fine now.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
the problem was in the php.ini file.  I noticed that it wasn&#039;t showing&lt;br /&gt;
the code when it was in an html file so I looked at the php.ini file&lt;br /&gt;
and had to change it so it recognized &amp;lt;? tags as well as &amp;lt;?php tags.&lt;br /&gt;
&lt;br /&gt;
Also, make sure this is added to httpd.conf:&lt;br /&gt;
    AddType application/x-httpd-php .php&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Set time zone ===&lt;br /&gt;
&lt;br /&gt;
You can change the time zone by doing this:&lt;br /&gt;
&lt;br /&gt;
 ln -sf /usr/share/zoneinfo/&amp;lt;zone&amp;gt; /etc/localtime&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;zone&amp;gt; is the zone you want in the /usr/share/zoneinfo/ directory.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Failing shm_open calls ===&lt;br /&gt;
&lt;br /&gt;
First, check whether /dev/shm is mounted inside the VE.&lt;br /&gt;
The &#039;cat /proc/mounts&#039; command should show something like this:&lt;br /&gt;
 tmpfs /dev/shm tmpfs rw 0 0&lt;br /&gt;
&lt;br /&gt;
If /dev/shm is not mounted, there are two ways to solve the issue:&lt;br /&gt;
1. execute the following command inside the VE (doesn&#039;t require a VE reboot):&lt;br /&gt;
 mount -t tmpfs none /dev/shm&lt;br /&gt;
2. add the following line to /etc/fstab inside the VE and reboot it:&lt;br /&gt;
 tmpfs         /dev/shm        tmpfs           defaults        0 0&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You can have a mounted but not running VE. Just:&lt;br /&gt;
 vzctl mount &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Debian can&#039;t get on the network ===&lt;br /&gt;
When a Debian system can’t get on the network and you try:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set 1046 --ipadd 69.55.227.117&lt;br /&gt;
Adding IP address(es): 69.55.227.117&lt;br /&gt;
Failed to bring up lo.&lt;br /&gt;
Failed to bring up venet0.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They probably removed the iproute package, which must be the one from SWsoft. To restore it:&lt;br /&gt;
&amp;lt;pre&amp;gt;# dpkg -i --veid=1046 --admindir=/vz1/private/1046/root/var/lib/dpkg --instdir=/vz1/private/1046/root/ /vz/template/debian-3.0/iproute_20010824-8_i386.vz.deb&lt;br /&gt;
(Reading database ... 16007 files and directories currently installed.)&lt;br /&gt;
Preparing to replace iproute 20010824-8 (using .../iproute_20010824-8_i386.vz.deb) ...&lt;br /&gt;
Unpacking replacement iproute ...&lt;br /&gt;
Setting up iproute (20010824-8) ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then restart their VE.&lt;br /&gt;
&lt;br /&gt;
=== CentOS can&#039;t get on the network ===&lt;br /&gt;
&lt;br /&gt;
When a CentOS system is running but can&#039;t get on the network, they probably updated iproute via yum.&lt;br /&gt;
&lt;br /&gt;
Copy an older version of the iproute rpm to their VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cp -p /root/iproute/iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm /vz2/private/&amp;lt;1527&amp;gt;/fs/root&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enter their VE, install the RPM, restart the network, and prevent yum from updating iproute.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ve &amp;lt;1527&amp;gt;&lt;br /&gt;
rpm -i --force iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm&lt;br /&gt;
rpm -q iproute&lt;br /&gt;
rpm -e &amp;lt;new iproute rpm&amp;gt;&lt;br /&gt;
service network restart&lt;br /&gt;
echo &amp;quot;exclude=iproute&amp;quot; &amp;gt;&amp;gt;/etc/yum/ip-exclude&lt;br /&gt;
echo &amp;quot;include=/etc/yum/ip-exclude&amp;quot; &amp;gt;&amp;gt;/etc/yum.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
In a VE I do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /&lt;br /&gt;
du -h .&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and get: 483M    .&lt;br /&gt;
&lt;br /&gt;
i do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;bash-2.05a# df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  2.3G  1.7G  56% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
how can this be?&lt;br /&gt;
&lt;br /&gt;
Is it possible that the quota file was corrupted somehow? Please try:&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl stop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota drop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota init &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzctl start &amp;lt;VEID&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
How to stop vz from starting after reboot:&lt;br /&gt;
&lt;br /&gt;
 VIRTUOZZO=no &lt;br /&gt;
in &lt;br /&gt;
 /etc/sysconfig/vz&lt;br /&gt;
&lt;br /&gt;
To start: &lt;br /&gt;
 service vz start&lt;br /&gt;
(after setting VIRTUOZZO=yes in /etc/sysconfig/vz)&lt;br /&gt;
&lt;br /&gt;
service vz restart will do some kind of &#039;soft reboot&#039; -- restart all&lt;br /&gt;
VPSes and reload modules without rebooting the node&lt;br /&gt;
&lt;br /&gt;
if you need to shut down all VPSes really really fast, run killall -9 init&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Postfix tip:&lt;br /&gt;
&lt;br /&gt;
You may want to tweak settings: default_process_limit=10&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* uptime/load that vz users see isn’t what we see - it&#039;s their own load&lt;br /&gt;
* Vz overwrites resolv.conf on every boot to what’s in the ve config file. It also overwrites the hostname.&lt;br /&gt;
* Quotas in vz: set Gid_limit in the ve config file and restart the ve. The errors leading up to this will be &#039;out of files&#039;, but no inode or space issues. Set it to 2000 in the conf and restart the ve.&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
== vm ==&lt;br /&gt;
&lt;br /&gt;
To create a new Linux VPS&lt;br /&gt;
&lt;br /&gt;
* get an IP address from mgmt screen&lt;br /&gt;
* run df -h to figure out which disk has enough space&lt;br /&gt;
* run vm with no arguments to get syntax and exact template name&lt;br /&gt;
* use col0xxxx for hostname if they don’t give you a hostname&lt;br /&gt;
* copy over dir, veid, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
 vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; )&lt;br /&gt;
&lt;br /&gt;
== disable ==&lt;br /&gt;
To disable a customer&#039;s Virtuozzo VPS server.&lt;br /&gt;
&lt;br /&gt;
* stop the server&lt;br /&gt;
  vzctl stop &amp;lt;veid&amp;gt;&lt;br /&gt;
* prevent restart on server reboot and user restart via Control Panel&lt;br /&gt;
  vzctl set &amp;lt;veid&amp;gt; --disabled=yes --offline_management=no --save&lt;br /&gt;
&lt;br /&gt;
== cancelve ==&lt;br /&gt;
When a customer cancels all service.&lt;br /&gt;
&lt;br /&gt;
 cancelve &amp;lt;veid&amp;gt;&lt;br /&gt;
this will:&lt;br /&gt;
* stop a ve&lt;br /&gt;
* check for backups (offer to remove them from the backup server and the backup.config)&lt;br /&gt;
* rename the private dir&lt;br /&gt;
* check for PTR, provide the commands to reset to default&lt;br /&gt;
* and rename the ve’s config&lt;br /&gt;
* remind you to remove firewall rules&lt;br /&gt;
* remind you to remove DNS entries&lt;br /&gt;
&lt;br /&gt;
== ipadd ==&lt;br /&gt;
 ipadd  &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
adds ip(s) to a ve&lt;br /&gt;
&lt;br /&gt;
== ipdel ==&lt;br /&gt;
 ipdel &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
removes ip(s) from a ve&lt;br /&gt;
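&lt;br /&gt;
For example, the two tools can be used together to move a (hypothetical) IP from one ve to another:&lt;br /&gt;
 ipdel 1234 69.55.226.10&lt;br /&gt;
 ipadd 5678 69.55.226.10&lt;br /&gt;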
&lt;br /&gt;
== vc ==&lt;br /&gt;
 vc &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
display the contents of the vz config file for the &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;cat /vzconf/&amp;lt;veid&amp;gt;.conf&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vl ==&lt;br /&gt;
 vl&lt;br /&gt;
&lt;br /&gt;
List all virtual environments running on this machine.&lt;br /&gt;
&lt;br /&gt;
displays a list of ve #’s, one per line (ostensibly to use in a for loop)&lt;br /&gt;
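&lt;br /&gt;
For example, to run a command against every ve on the machine (a vdf per ve here, but any of the per-ve tools below would work):&lt;br /&gt;
 for f in `vl`; do vdf $f; done&lt;br /&gt;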
&lt;br /&gt;
== vp ==&lt;br /&gt;
 vp &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
show processes running under this &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzps auxww -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vpe ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vpe &amp;lt;veid&amp;gt; &lt;br /&gt;
this allowed you to do a vp when a ve was running out of control; deprecated since vp now operates outside the VPS. The equivalent of: &lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set &amp;lt;veid&amp;gt; --kmemsize 2100000:2200000&lt;br /&gt;
vzctl exec &amp;lt;veid&amp;gt; ps auxw&lt;br /&gt;
vzctl set &amp;lt;veid&amp;gt; --kmemsize (ve’s orig lvalue):(ve’s orig hvalue)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vt ==&lt;br /&gt;
 vt &amp;lt;veid&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vztop -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vr ==&lt;br /&gt;
 vr &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart a &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl stop &amp;lt;veid&amp;gt;; vzctl start &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
You can run this even if the ve is down - the stop command will just fail&lt;br /&gt;
&lt;br /&gt;
== vs ==&lt;br /&gt;
 vs [veid]&lt;br /&gt;
displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve configured on the system (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;)&lt;br /&gt;
If passed an argument, gives the status for just that ve. &lt;br /&gt;
A running system looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 16066 exist mounted running&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running (but does exist) looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 9990 exist unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running and doesn’t exist looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 421 deleted unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs2 ==&lt;br /&gt;
 vs2 [veid]&lt;br /&gt;
this is similar to vs in that it displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve,&lt;br /&gt;
but its list comes from doing an ls on the data dirs rather than the config files. This was meant to catch &lt;br /&gt;
the rare case where a ve exists on disk but is not configured. &lt;br /&gt;
&lt;br /&gt;
== vw ==&lt;br /&gt;
 vw [veid]&lt;br /&gt;
displays the output of ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;’ (the equivalent of &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; w&amp;lt;/tt&amp;gt;) for each configured ve (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;). Useful for determining which ve is contributing to a heavily-loaded system.&lt;br /&gt;
If passed an argument, gives ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;‘ output for just that ve. &lt;br /&gt;
Ex:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt2 etc]# vw&lt;br /&gt;
134&lt;br /&gt;
 10:52pm  up 79 days,  6:14,  0 users,  load average: 0.02, 0.02, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16027&lt;br /&gt;
  2:52pm  up 7 days, 19:54,  0 users,  load average: 0.00, 0.00, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16055&lt;br /&gt;
  2:52pm  up 79 days,  6:38,  0 users,  load average: 0.00, 0.04, 0.07&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vwe ==&lt;br /&gt;
 vwe [constraint]&lt;br /&gt;
just like &amp;lt;tt&amp;gt;vw&amp;lt;/tt&amp;gt;, but takes a constraint as an argument and only shows ve’s with loads &amp;gt;= the constraint provided. If no constraint is provided, 1 is used by default&lt;br /&gt;
&lt;br /&gt;
== vzs ==&lt;br /&gt;
 vzs [veid]&lt;br /&gt;
displays the beancounter status for all ve’s, or a particular ve if an argument is passed&lt;br /&gt;
&lt;br /&gt;
== ve ==&lt;br /&gt;
 ve &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
enter a &amp;lt;veid&amp;gt; as root.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl enter &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vx ==&lt;br /&gt;
 vx &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&lt;br /&gt;
&lt;br /&gt;
execute a single command in the &amp;lt;veid&amp;gt; environment.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;/usr/sbin/vzctl exec &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs [count]&lt;br /&gt;
a script which outputs a continuous report (or a certain number of reports if an option is passed) of processes stuck in the D state and which VPS’s those procs belong to.&lt;br /&gt;
&lt;br /&gt;
== setmem ==&lt;br /&gt;
 setmem VEID &amp;lt;256|512|768|1024|1280|1536|2048&amp;gt; [burst GB]&lt;br /&gt;
adjusts the memory resources for the VE. On VZ version &amp;gt; 4 you must also provide a burst RAM amount (in GB). Sample usage:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# setmem 21671 1024 4&lt;br /&gt;
/usr/sbin/vzctl set 21671 --ram 1024m --swap 3072m --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== afacheck.sh ==&lt;br /&gt;
 afacheck.sh&lt;br /&gt;
displays the health/status of containers and mirrors on an adaptec card (currently quar1, tempvirt1-2, virt9, virt10) - all others are LSI&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update virt scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be setup as a cronjob to watch processes on a virt when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== findbackuppigs.pl ==&lt;br /&gt;
 findbackuppigs.pl&lt;br /&gt;
looks for files larger than 50MB which customers have asked us to backup. Emails matches&lt;br /&gt;
to linux@johncompanies.com&lt;br /&gt;
&lt;br /&gt;
== gatherlinux.pl ==&lt;br /&gt;
 gatherlinux.pl&lt;br /&gt;
gathers up data about ve’s configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficgather.pl ==&lt;br /&gt;
 linuxtrafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by ve to support@johncompanies.com and payments@johncompanies.com.&lt;br /&gt;
Optional arguments are year and month (must be in the past). If not passed, assumes last month. Relies on &lt;br /&gt;
traffic logs created by netstatreset and netstatbackup&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch.pl ==&lt;br /&gt;
 linuxtrafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo versions &amp;lt;= 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch2.pl ==&lt;br /&gt;
 linuxtrafficwatch2.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo version 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to the load mrtg graphs; executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== mb (linux) ==&lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== migrate ==&lt;br /&gt;
 migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; - a util to seamlessly move a ve from one host to another. The wrapper was written because Virtuozzo 2.6 had a bug where the ve&#039;s ip(s) on the src system were not properly removed from the arp/route tables; this script mitigates that. Since it makes multiple ssh connections to the target host, it&#039;s a good idea to put the pub key for the src system in the authorized_keys file on the target host. In addition, it emails ve owners when their migration starts and stops (if they place email addresses in a file on their system: /migrate_notify). To move everyone off a system, you&#039;d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
== migrateonline ==&lt;br /&gt;
 migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is the same as migrate but migrates the ve in &amp;lt;tt&amp;gt;--online&amp;lt;/tt&amp;gt; mode, which means it won&#039;t be shut down at the end of the migration. This only works when migrating ve&#039;s between 2 machines running a 2.6 kernel (currently tempvirt1-2, virt16-19, virt12). If you get an error that the machine you&#039;re trying to migrate to has a different CPU or features, etc., edit the file and add the -f switch to the vzmigrate line - you can basically ignore this kind of warning (but never ignore a warning about missing templates on the destination node). NOTE: this edit (if made to migrateonline) will be overwritten by the base script during each night&#039;s backup.&lt;br /&gt;
&lt;br /&gt;
== netstatbackup ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup &lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== netstatbackup2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== netstatreset ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset&lt;br /&gt;
writes traffic count data to a logfile and resets counters to 0. Works on virtuozzo versions 2.5.x &lt;br /&gt;
&lt;br /&gt;
== netstatreset2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatchlinux ==&lt;br /&gt;
 orphanedbackupwatchlinux &lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to &lt;br /&gt;
delete them&lt;br /&gt;
&lt;br /&gt;
== rsync.backup (linux) ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== startvirt.pl ==&lt;br /&gt;
 startvirt.pl&lt;br /&gt;
forks off start ve commands, keeping 6 running at a time. Do not use this on systems where fastboot is enabled, as it circumvents the benefit of the fastboot. The script will occasionally not exit gracefully and will continue to use up CPU, so it should be watched. Don&#039;t exit from the script until you&#039;re sure all ve&#039;s are started - if you do, you&#039;ll need to start them manually and may have to free up locks. On some systems the script doesn&#039;t exit cleanly and you have to ^C out of it; be careful, as doing so can leave some ve&#039;s in an odd bootup state and you may need to &#039;vr&#039; them manually. When ^C&#039;ing out of startvirt, check which ve&#039;s aren&#039;t running and confirm all have started.&lt;br /&gt;
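&lt;br /&gt;
One quick way to confirm everything came up after ^C&#039;ing out (assuming vs output matches the examples above) is to look for ve&#039;s still reporting down:&lt;br /&gt;
 vs | grep down&lt;br /&gt;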
&lt;br /&gt;
== taskdone (linux) ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was &lt;br /&gt;
executed as the subject&lt;br /&gt;
&lt;br /&gt;
== vb (linux) ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/sbin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vemakeXX ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemakerh9 &lt;br /&gt;
ve create script for RH9 (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian30 &lt;br /&gt;
ve create script for debian 3.0 (Woody) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian31 &lt;br /&gt;
ve create script for debian 3.1 (Sarge) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian40 &lt;br /&gt;
ve create script for debian 4.0 (Etch) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakefedora, vemakefedora2, vemakefedora4, vemakefedora5, vemakefedora6, vemakefedora7&lt;br /&gt;
ve create script for fedora core 1, 2, 4, 5, 6, 7 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakecentos3, vemakecentos4&lt;br /&gt;
ve create script for centos 3, 4 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakesuse, vemakesuse93, vemakesuse100&lt;br /&gt;
ve create script for suse 9.2, 9.3, 10.0 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakeubuntu5, vemakeubuntu606, vemakeubuntu610, vemakeubuntu704&lt;br /&gt;
ve create script for ubuntu 5.10, 6.06, 6.10, 7.04 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
== vemove ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemove &amp;lt;veid&amp;gt; &amp;lt;target_ip&amp;gt; &amp;lt;/vz/private/123&amp;gt;&lt;br /&gt;
this script simplifies the old way of moving ve’s from one system to another - in short moving a ve to or from a virt running virtuozzo &amp;lt; 2.6.x&lt;br /&gt;
It’s the equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;tar cfpP - &amp;lt;veid&amp;gt; --ignore-failed-read | (ssh -2 -c arcfour &amp;lt;target_ip&amp;gt; &amp;quot;split - -b 1024m &amp;lt;/vz/private/123&amp;gt;.tar&amp;quot; )&amp;lt;/tt&amp;gt;&lt;br /&gt;
This should only be used if migrate/vzmigrate can’t be used. &lt;br /&gt;
&lt;br /&gt;
== vim.watchdog ==&lt;br /&gt;
 vim.watchdog &lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu. Works on virtuozzo versions 2.5.x&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog2 ==&lt;br /&gt;
 vim.watchdog2&lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu.&lt;br /&gt;
Works on virtuozzo versions 2.6.x.&lt;br /&gt;
&lt;br /&gt;
== vzmigrate ==&lt;br /&gt;
 vzmigrate &amp;lt;target_ip&amp;gt; -r no &amp;lt;veid&amp;gt;:[dst veid]:[dst /vzX/private/veid]:[dst /vzX/root/veid]&lt;br /&gt;
(this is the raw command “wrapped” by migrate/migrateonline) this will seamlessly move a ve from one host to another. The ve will run for the duration of the migration till the very end when it’s shut down, ip moved and started up on the target system. The filesystem on the src will remain. This should be watched – occasionally the move will timeout and leave the system shut down. If target private and root aren’t specified it just puts it in /vz. Only works when both systems are running virtuozzo 2.6.x&lt;br /&gt;
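&lt;br /&gt;
A hypothetical invocation - moving veid 1234 to 69.55.230.5, keeping the same veid and placing it under /vz2 (the target IP and veid here are made up):&lt;br /&gt;
 vzmigrate 69.55.230.5 -r no 1234:1234:/vz2/private/1234:/vz2/root/1234&lt;br /&gt;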
&lt;br /&gt;
== vztrafdump.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions &amp;lt;= 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== vztrafdump2.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump2.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== addtun ==&lt;br /&gt;
 addtun &amp;lt;veid&amp;gt;&lt;br /&gt;
Adds a tun device to a ve.&lt;br /&gt;
&lt;br /&gt;
== bwcap ==&lt;br /&gt;
 bwcap &amp;lt;veid&amp;gt; &amp;lt;kbps&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;bwcap 1234 512&amp;lt;/tt&amp;gt;&lt;br /&gt;
Caps a VE’s bandwidth to the amount given&lt;br /&gt;
&lt;br /&gt;
== setdisk ==&lt;br /&gt;
 setdisk &amp;lt;veid&amp;gt; &amp;lt;diskspace in GB&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;setdisk 1234 5&amp;lt;/tt&amp;gt;&lt;br /&gt;
Gives a VE the given amount of disk space&lt;br /&gt;
&lt;br /&gt;
== vdf ==&lt;br /&gt;
 vdf &amp;lt;veid&amp;gt; &lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; df -h&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vdff ==&lt;br /&gt;
 vdff&lt;br /&gt;
runs a (condensed) vdf for all ve’s in your pwd (must be run from /vz/privateN)&lt;br /&gt;
&lt;br /&gt;
== mvbackups ==&lt;br /&gt;
 mvbackups &amp;lt;veid&amp;gt; &amp;lt;target_machine&amp;gt; (virt1) &amp;lt;target_dir&amp;gt; (vz1)&lt;br /&gt;
moves backups from one location to another on the backup server, provides the option to remove entries from the current backup.config, and gives a simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== checkquota ==&lt;br /&gt;
 checkquota&lt;br /&gt;
for all the ve’s in the cwd (run from /vz/private, /vz1/private, etc) reports what vz quota says they’re using and what the actual usage is (as reported by du)&lt;br /&gt;
&lt;br /&gt;
== clearquota ==&lt;br /&gt;
 clearquota &amp;lt;veid&amp;gt;&lt;br /&gt;
Recalculates a ve’s quota, prints out the usage before and after. The equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;vdf &amp;lt;veid&amp;gt;; v stop &amp;lt;veid&amp;gt;; vzquota drop &amp;lt;veid&amp;gt;; v start &amp;lt;veid&amp;gt;; vdf &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs&lt;br /&gt;
Sometimes the servers have a large number of processes get stuck in the D state- this script shows (every 3 secs) which VE’s have D procs, which procs&lt;br /&gt;
are stuck and a running average of the top “offenders”&lt;br /&gt;
&lt;br /&gt;
== vzstat ==&lt;br /&gt;
 vzstat&lt;br /&gt;
sort of like top for VZ. sort VEs by CPU usage by pressing &#039;o&#039; and then &#039;c&#039; keys&lt;br /&gt;
&lt;br /&gt;
== stopvirt ==&lt;br /&gt;
 stopvirt&lt;br /&gt;
will stop VEs as fast as it can, 6 at a time. May not exit when complete so you should watch [[#vzstat|vzstat]] in another window.&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2224</id>
		<title>VPS Management</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2224"/>
		<updated>2020-04-03T17:25:57Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Ganeti Re-Insall a VPS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Common Problems =&lt;br /&gt;
== Login to any machine without a password ==&lt;br /&gt;
&lt;br /&gt;
This is possible via the use of ssh keys. The process is thus:&lt;br /&gt;
&lt;br /&gt;
1. place the public key for your user (root@mail) in the /root/.ssh/authorized_keys file on the server you wish to login to&lt;br /&gt;
 cat /root/.ssh/id_dsa.pub&lt;br /&gt;
(paste that into authorized_keys on the target server). If the file doesn&#039;t exist, create it.&lt;br /&gt;
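&lt;br /&gt;
If root@mail has no id_dsa.pub yet, a key can be generated first (standard OpenSSH; accept the default path, and use an empty passphrase for passwordless login):&lt;br /&gt;
 ssh-keygen -t dsa&lt;br /&gt;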
&lt;br /&gt;
2. enable root login (usually only applies to FreeBSD). Edit the /etc/ssh/sshd_config on the target server and change:&lt;br /&gt;
&amp;lt;tt&amp;gt;#PermitRootLogin no&amp;lt;/tt&amp;gt;&lt;br /&gt;
to&lt;br /&gt;
&amp;lt;tt&amp;gt;PermitRootLogin yes&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Restart the sshd on the target machine. First, find the sshd process: &lt;br /&gt;
 jailps &amp;lt;hostname&amp;gt; | grep sshd &lt;br /&gt;
or &lt;br /&gt;
 vp &amp;lt;VEID&amp;gt; | grep sshd&lt;br /&gt;
&lt;br /&gt;
Look for the process resembling:&lt;br /&gt;
 root     17296  0.0  0.0  5280 1036 ?        Ss    2011   4:27 /usr/sbin/sshd &lt;br /&gt;
(this is the sshd)&lt;br /&gt;
&lt;br /&gt;
Not:&lt;br /&gt;
 root      6270  0.5  0.0  6808 2536 ?        Ss   14:33   0:00 sshd: root [priv]&lt;br /&gt;
(this is an sshd child- someone already ssh&#039;d in as root)&lt;br /&gt;
&lt;br /&gt;
Restart the sshd: &lt;br /&gt;
 kill -1 &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex:&lt;br /&gt;
 kill -1 17296&lt;br /&gt;
&lt;br /&gt;
You may now ssh in.&lt;br /&gt;
&lt;br /&gt;
Once you&#039;re done, IF you enabled root login, you should repeat steps 2 and 3 to disable root logins.&lt;br /&gt;
&lt;br /&gt;
== Letting someone in who has locked themselves out (killed sshd, lost pwd) ==&lt;br /&gt;
&lt;br /&gt;
There are two ways people frequently lock themselves out - either they forget a password, or they kill off sshd somehow.&lt;br /&gt;
&lt;br /&gt;
These are actually both fairly easy to solve.  First, let&#039;s say someone kills off their sshd, or somehow mangles /etc/ssh/sshd_config such that it no longer lets them in.&lt;br /&gt;
&lt;br /&gt;
Their email may be very short, or it may have all sorts of details about how you should fix sshd_config to let them in ... just ignore all of this. They can fix their own mangled sshd.  Fixing this is very simple.  First, edit the /etc/inetd.conf on their system and uncomment the telnet line:&lt;br /&gt;
&lt;br /&gt;
 telnet stream  tcp     nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
 #telnet stream  tcp6    nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
&lt;br /&gt;
(just leave the tcp6 version of telnet commented)&lt;br /&gt;
&lt;br /&gt;
Then, use jailps to list the processes on their system, and find their inetd process.  Then simply:&lt;br /&gt;
&lt;br /&gt;
 kill -HUP (pid)&lt;br /&gt;
&lt;br /&gt;
where (pid) is the PID of their inetd process.  Now they have telnet running on their system and they can log in and do whatever they need to do.&lt;br /&gt;
&lt;br /&gt;
The only complications that could occur are:&lt;br /&gt;
&lt;br /&gt;
a) their firewall config on our firewall has port 23 blocked, in which case you will need to open that - will be covered in a different lesson.&lt;br /&gt;
&lt;br /&gt;
b) they are not running inetd, so you can&#039;t HUP it.  If this happens, edit their /etc/rc.conf, add the inetd_enable=&amp;quot;YES&amp;quot; line, and then kill&lt;br /&gt;
their jail with /tmp/jailkill.pl - then restart their jail with the jail line from their quad/safe file.  Easy.&lt;br /&gt;
&lt;br /&gt;
If they have forgotten a password,&lt;br /&gt;
&lt;br /&gt;
On 6.x+ you can reset their password with:&lt;br /&gt;
 jexec &amp;lt;jailID from jls&amp;gt; passwd root&lt;br /&gt;
&lt;br /&gt;
Note: the default password for 6.x jails is 8ico2987, for 4.x it is p455agfa&lt;br /&gt;
&lt;br /&gt;
On 4.x, you need to cd to their etc directory&lt;br /&gt;
... for instance:&lt;br /&gt;
&lt;br /&gt;
 cd /mnt/data2/198.78.65.136-col00261-DIR/etc&lt;br /&gt;
&lt;br /&gt;
and run:&lt;br /&gt;
&lt;br /&gt;
 vipw -d .&lt;br /&gt;
&lt;br /&gt;
Then paste in these two lines (there&#039;s a paste with these):&lt;br /&gt;
&lt;br /&gt;
 root:$1$krszPxhk$xkCepSnz3mIikT3vCtJCt0:0:0::0:0:Charlie &amp;amp;:/root:/bin/csh&lt;br /&gt;
 user:$1$Mx9p5Npk$QdMU6c8YQqp2FW2M3irEh/:1001:1001::0:0:User &amp;amp;:/home/user:/bin/sh&lt;br /&gt;
&lt;br /&gt;
overwriting the lines they already have for &amp;quot;user&amp;quot; and &amp;quot;root&amp;quot; - then just tell them that both user and root have been reset to the default password of p455agfa.&lt;br /&gt;
&lt;br /&gt;
For linux, just passwd inside shell or &lt;br /&gt;
 vzctl set &amp;lt;veid&amp;gt; --userpasswd root:p455agfa --save&lt;br /&gt;
&lt;br /&gt;
Starting in 2009 we began giving out randomized passwords for FreeBSD and Linux as the default password. That is stored with each system in Mgmt. You should look for and reset the password to that password in the event of a reset and refer the customer to use their original password from their welcome email- this way we don’t have to send the password again via email (in clear text).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== sendmail can’t be contacted from ext ip (only locally) ==&lt;br /&gt;
&lt;br /&gt;
By default redhat puts this line in sendmail.mc:&lt;br /&gt;
&lt;br /&gt;
 DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
which makes it only answer on localhost.  Comment it out like:&lt;br /&gt;
&lt;br /&gt;
 dnl DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
and then rebuild sendmail.cf with:&lt;br /&gt;
&lt;br /&gt;
 m4 /etc/mail/sendmail.mc &amp;gt; /etc/sendmail.cf&lt;br /&gt;
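&lt;br /&gt;
The new config won&#039;t take effect until sendmail is restarted; afterwards you can confirm it&#039;s listening on all interfaces (the init script path may vary by release):&lt;br /&gt;
 /etc/init.d/sendmail restart&lt;br /&gt;
 netstat -an | grep :25&lt;br /&gt;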
&lt;br /&gt;
== virt doesn’t properly let go of ve’s ip(s) when moved to another system ==&lt;br /&gt;
&lt;br /&gt;
On virtuozzo 2.6 systems, it&#039;s been observed that when moving ips from one virt to another that sometimes the routing table will not get updated to reflect the removal of the ip addresses.&lt;br /&gt;
&lt;br /&gt;
A recent example was a customer that was moving to a new ve on a new virt and the ip addresses were traded between the two ve&#039;s.  After the trade the two systems were not able to talk to each other.  When looking at the routing table for the old system all the ip addresses were still in the routing table as being local, like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;netstat -rn | grep 69.55.225.149&lt;br /&gt;
69.55.225.149   0.0.0.0         255.255.255.255 UH       40 0          0 venet0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was preventing traffic to the other system from being routed properly.&lt;br /&gt;
The solution is to manually delete the route:&lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.225.149 gw 0.0.0.0&lt;br /&gt;
&lt;br /&gt;
Supposedly, this was fixed in 2.6.1&lt;br /&gt;
&lt;br /&gt;
== sshd on FreeBSD 6.2 segfaults ==&lt;br /&gt;
&lt;br /&gt;
First try to reinstall ssh&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/secure&lt;br /&gt;
cd lib/libssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.sbin/sshd&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.bin/ssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Failing that, find the library that’s messed up:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;ldd /usr/sbin/sshd&lt;br /&gt;
         libssh.so.3 =&amp;gt; /usr/lib/libssh.so.3 (0x280a3000) &lt;br /&gt;
         libutil.so.5 =&amp;gt; /lib/libutil.so.5 (0x280d8000) &lt;br /&gt;
         libz.so.3 =&amp;gt; /lib/libz.so.3 (0x280e4000) &lt;br /&gt;
         libwrap.so.4 =&amp;gt; /usr/lib/libwrap.so.4 (0x280f5000) &lt;br /&gt;
         libpam.so.3 =&amp;gt; /usr/lib/libpam.so.3 (0x280fc000) &lt;br /&gt;
         libbsm.so.1 =&amp;gt; /usr/lib/libbsm.so.1 (0x28103000) &lt;br /&gt;
         libgssapi.so.8 =&amp;gt; /usr/lib/libgssapi.so.8 (0x28112000) &lt;br /&gt;
         libkrb5.so.8 =&amp;gt; /usr/lib/libkrb5.so.8 (0x28120000) &lt;br /&gt;
         libasn1.so.8 =&amp;gt; /usr/lib/libasn1.so.8 (0x28154000) &lt;br /&gt;
         libcom_err.so.3 =&amp;gt; /usr/lib/libcom_err.so.3 (0x28175000) &lt;br /&gt;
         libroken.so.8 =&amp;gt; /usr/lib/libroken.so.8 (0x28177000) &lt;br /&gt;
         libcrypto.so.4 =&amp;gt; /lib/libcrypto.so.4 (0x28183000) &lt;br /&gt;
         libcrypt.so.3 =&amp;gt; /lib/libcrypt.so.3 (0x28276000) &lt;br /&gt;
         libc.so.6 =&amp;gt; /lib/libc.so.6 (0x2828e000) &lt;br /&gt;
         libmd.so.3 =&amp;gt; /lib/libmd.so.3 (0x28373000)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
md5 them and compare to other jail hosts or jails running on host&lt;br /&gt;
&lt;br /&gt;
for libcrypto reinstall:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/crypto&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Moving a server to a different account in the management system ==&lt;br /&gt;
&lt;br /&gt;
This is a tough one. You have to touch a bunch of tables in the mysql database.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  systems&lt;br /&gt;
  subs&lt;br /&gt;
  ipmap&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
basically, get the sysid and go off that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
update systems set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update subs set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update ipmap set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You then need to rename the server disk space to change the old ID to the new one.&lt;br /&gt;
&lt;br /&gt;
For a jail -&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stopjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;     69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;-DIR 69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;-DIR&lt;br /&gt;
vi /usr/local/jail/bin/quad1       [Update the ID in the quad file]&lt;br /&gt;
startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a virt -&lt;br /&gt;
You will have to &lt;br /&gt;
* stop the virt &lt;br /&gt;
* rename the directories &lt;br /&gt;
* modify the config file /vzconf/&amp;lt;veid&amp;gt;.conf &lt;br /&gt;
* rename the virt conf file &lt;br /&gt;
* restart the new virt.&lt;br /&gt;
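&lt;br /&gt;
A sketch of those steps (the /vz2 paths and &amp;lt;old&amp;gt;/&amp;lt;new&amp;gt; placeholders here are illustrative - adjust to wherever the ve actually lives, and update any old-ID references inside the conf):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl stop &amp;lt;veid&amp;gt;&lt;br /&gt;
mv /vz2/private/&amp;lt;old&amp;gt; /vz2/private/&amp;lt;new&amp;gt;&lt;br /&gt;
vi /vzconf/&amp;lt;veid&amp;gt;.conf               [update the old ID in the config]&lt;br /&gt;
mv /vzconf/&amp;lt;old&amp;gt;.conf /vzconf/&amp;lt;new&amp;gt;.conf&lt;br /&gt;
vzctl start &amp;lt;veid&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;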
&lt;br /&gt;
that ought to do it..&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Re-enable a customer account in the management system ==&lt;br /&gt;
&lt;br /&gt;
For example to re-instate col03438, In mysql:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
show tables ;&lt;br /&gt;
select * from customers where cid=&#039;col03438&#039; ;&lt;br /&gt;
update customers set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from  systems where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set cancel_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from subs where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ganeti VPS =&lt;br /&gt;
&lt;br /&gt;
This is how to create a VPS on the Lamphost ganeti cluster.&lt;br /&gt;
I have created one script for creating a FreeBSD VPS and another for&lt;br /&gt;
creating a Linux VPS.&lt;br /&gt;
&lt;br /&gt;
== Ganeti Install a VPS ==&lt;br /&gt;
&lt;br /&gt;
To create a linux VPS use&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/new &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
( for example /root/new vps.example.com 30 2 69.55.238.10 centos-8 )&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To create a FreeBSD VPS, we have to do a full install. The following will create the&lt;br /&gt;
new VPS:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/newfreebsd12 &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will then have to do the install via VNC.&lt;br /&gt;
&lt;br /&gt;
You need to customize the install by adding a &amp;quot;jci&amp;quot; account and a customer&lt;br /&gt;
account (col0XXXX) as well.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
You will need to install sudo.&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
pkg install sudo    # install sudo package&lt;br /&gt;
visudo              # to add wheel account to sudoers&lt;br /&gt;
vi /etc/group       # add jci and user acct to wheel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You will also have to set up a console in /etc/ttys.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vi /etc/ttys&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
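The entry needed is a login getty on the serial console. A sketch, assuming the console lands on ttyu0 (verify which tty the Ganeti serial console actually uses on your install):&lt;br /&gt;

```
# /etc/ttys -- enable a login getty on the serial console
ttyu0   "/usr/libexec/getty 3wire"   vt100   onifconsole secure
```

After editing, `kill -HUP 1` makes init re-read /etc/ttys without a reboot.&lt;br /&gt;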
&lt;br /&gt;
== Ganeti Re-Install a VPS ==&lt;br /&gt;
&lt;br /&gt;
Re-installing the OS will destroy all data on the VPS.&lt;br /&gt;
Login to the Ganeti master node (currently gn1).&lt;br /&gt;
&lt;br /&gt;
To reinstall the OS on a VPS, the VPS must first be shut down.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance shutdown &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Do the reinstall.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance reinstall &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance startup &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
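The three steps above can be chained in one helper; a sketch with a dry-run guard, since the reinstall destroys all data (the function name and DRYRUN convention are hypothetical, not an existing script):&lt;br /&gt;

```shell
#!/bin/sh
# vps_reinstall: shutdown, reinstall, and restart a Ganeti instance.
# DESTROYS all data on the VPS. Set DRYRUN=1 to only print the commands.
vps_reinstall() {
    vps="$1"
    for cmd in "gnt-instance shutdown $vps" \
               "gnt-instance reinstall -f $vps" \
               "gnt-instance startup $vps"; do
        if [ "${DRYRUN:-0}" = "1" ]; then
            echo "$cmd"       # dry run: show what would be executed
        else
            $cmd || return 1  # stop the chain if any step fails
        fi
    done
}
```

The -f on reinstall answers the confirmation prompt; drop it if you want to be asked before the disk is wiped.&lt;br /&gt;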
&lt;br /&gt;
Now you need to do the customization for the VPS.  This is&lt;br /&gt;
basically the same as the Dedicated server customization.&lt;br /&gt;
&lt;br /&gt;
 https://69.55.229.8/index.php/New_Signups#Dedicated_Server_Install_Checklist&lt;br /&gt;
&lt;br /&gt;
= FreeBSD 10.1 VPS (jail5 and jail6) =&lt;br /&gt;
&lt;br /&gt;
This is information on how to manage the FreeBSD 10.1 VPS servers on Jail5.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 virtmake IP[,IP] &amp;lt;CID&amp;gt; hostname ipfw# email {FSP|FB|FS|FM|FP} [size in GB]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
For now, the last two digits of the CID must be unique on this server.&lt;br /&gt;
&lt;br /&gt;
Must do a &amp;quot;vm start &amp;lt;CID&amp;gt;&amp;quot; after creating a VPS.&lt;br /&gt;
&lt;br /&gt;
== Starting VPS ==&lt;br /&gt;
To start all vms on the system&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To start a single VPS &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== List running VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm status&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Attach to a user VPS console ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm attach &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To detach from a console, type tilde control-d (~ ctl-d).&lt;br /&gt;
&lt;br /&gt;
== Stop a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Restart a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm restart &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Modify a VPS ==&lt;br /&gt;
&lt;br /&gt;
To modify a VPS config, stop the VPS&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Mount the VPS data file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm mount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
cd /data/vm/private/&amp;lt;CID&amp;gt;/mount&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Modify the user files necessary.&lt;br /&gt;
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm umount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
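The stop/mount/edit/umount/start cycle can be wrapped in a helper; a sketch only (the function name and DRYRUN convention are hypothetical), which drops you into a shell inside the mounted tree and cleans up when you exit:&lt;br /&gt;

```shell
#!/bin/sh
# vm_edit: stop a VPS, mount its data file, open a shell in the mounted
# tree for edits, then unmount and restart. Set DRYRUN=1 to print the steps.
vm_edit() {
    cid="$1"
    if [ "${DRYRUN:-0}" = "1" ]; then
        printf '%s\n' "vm stop $cid" "vm mount $cid" \
            "cd /data/vm/private/$cid/mount" \
            "vm umount $cid" "vm start $cid"
        return 0
    fi
    vm stop "$cid"  || return 1
    vm mount "$cid" || return 1
    ( cd "/data/vm/private/$cid/mount" && "${SHELL:-/bin/sh}" )
    vm umount "$cid" && vm start "$cid"
}
```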
&lt;br /&gt;
== Notes on a new FBSD 10.x VPS (jail6) ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#! /bin/sh&lt;br /&gt;
#&lt;br /&gt;
#  new  &amp;lt;colo#&amp;gt;  &amp;lt;disk&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
memory=1024&lt;br /&gt;
&lt;br /&gt;
x=&amp;quot;$1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
disk=&amp;quot;$2&amp;quot;&lt;br /&gt;
if [ &amp;quot;y&amp;quot; = &amp;quot;y$disk&amp;quot; ] ; then   # default the disk size if none given&lt;br /&gt;
    disk=60&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
disk2=`expr $disk - 4`&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vm image provision 04dcdf96-d843-11e6-b131-001e67fcc64f col0$x&lt;br /&gt;
# vm configure col0$x&lt;br /&gt;
sed -e &amp;quot;s/memory=256M/memory=${memory}M/&amp;quot; /vms/col01114/col01114.conf &amp;gt;/vms/col0$x/col0$x.conf&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
# vm start col0$x&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
exit&lt;br /&gt;
 &lt;br /&gt;
vm console col0$x&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
# to increase the customer disk&lt;br /&gt;
#&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
&lt;br /&gt;
##  on the customer VPS&lt;br /&gt;
&lt;br /&gt;
gpart recover vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapoff /dev/vtbd0p3&lt;br /&gt;
gpart delete -i 3 vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
sysctl kern.geom.debugflags=16&lt;br /&gt;
gpart resize -i 2 -a 4k -s ${disk2}G vtbd0&lt;br /&gt;
&lt;br /&gt;
growfs /dev/vtbd0p2&lt;br /&gt;
&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
gpart add -t freebsd-swap -a 4k vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapon /dev/vtbd0p3&lt;br /&gt;
&lt;br /&gt;
df -h&lt;br /&gt;
reboot&lt;br /&gt;
&lt;br /&gt;
vi /etc/rc.conf&lt;br /&gt;
  hostname&lt;br /&gt;
  IP Addr&lt;br /&gt;
passwd root&lt;br /&gt;
passwd user&lt;br /&gt;
clear history&lt;br /&gt;
  cp /dev/null /var/log/utx.log&lt;br /&gt;
  cp /dev/null ~root/.history&lt;br /&gt;
  history -c&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= FreeBSD Jails =&lt;br /&gt;
&lt;br /&gt;
== Starting jails: Quad/Safe Files ==&lt;br /&gt;
&lt;br /&gt;
FreeBSD customer systems do not start up automatically at boot time.  When one of our FreeBSD machines boots up, it boots the base OS and does nothing else. To start jails, we put the commands to start each jail into one or more shell scripts and run them. Jail startup is something that needs to be actively monitored, which is why we don’t just run the scripts automatically. More on monitoring later.&lt;br /&gt;
&lt;br /&gt;
NOTE: &amp;gt;=7.x we have moved to 1 quad file: &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. Startups are not done by running each quad, but rather [[#startalljails|startalljails]] which relies on the contents of &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. The specifics of this are lower in this article. What follows here applies for pre 7.x systems.&lt;br /&gt;
&lt;br /&gt;
There are eight files in &amp;lt;tt&amp;gt;/usr/local/jail/rc.d&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail3# ls /usr/local/jail/rc.d/&lt;br /&gt;
quad1   quad2   quad3   quad4   safe1   safe2   safe3   safe4&lt;br /&gt;
jail3#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Four quad files and four safe files.&lt;br /&gt;
&lt;br /&gt;
Each file contains a roughly equal number of system startup blocks (the total number of jails divided by four).&lt;br /&gt;
 &lt;br /&gt;
The reason for this is that if we make one large script to start up all the systems at boot time, it will take too long - the first system in the script will start up right after system boot, which is great, but the last system may not start for another 20 minutes.&lt;br /&gt;
&lt;br /&gt;
Since there is no way to parallelize this during the startup procedure, we simply open four terminals (in screen window 9) and run each script, one in each terminal. This way they all run simultaneously, and the very last system in each startup script gets started in a quarter of the time it would take with one large file.&lt;br /&gt;
&lt;br /&gt;
The files are generally organized so that quad/safe 1&amp;amp;2 have only jails from disk 1, and quad/safe 3&amp;amp;4 have jails from disk 2. This helps ensure that only 2 fscks on any disk are going on at once. Further, they are balanced so that all quad/safe’s finish executing around the same time. We do this by making sure each quad/safe has a similar number of jails  and represents a similar number of inodes (see js).&lt;br /&gt;
&lt;br /&gt;
The other, very important reason we do it this way, and this is the reason there are quad files and safe files, is that in the event of a system crash, every single vn-backed filesystem that was mounted at the time of system crash needs to be fsck&#039;d.  However, fsck&#039;ing takes time, so if we shut the system down gracefully, we don&#039;t want to fsck.&lt;br /&gt;
&lt;br /&gt;
Therefore, we have two sets of scripts - the four quad scripts are identical to the four safe scripts except for the fact that the quad scripts contain fsck commands for each filesystem.&lt;br /&gt;
&lt;br /&gt;
So, if you shut a system down gracefully, start four terminals and run safe1 in window one, and safe2 in window 2, and so on.&lt;br /&gt;
 &lt;br /&gt;
If you crash, start four terminals (or go to screen window 9) and run quad1 in window one, and quad2 in window 2, and so on.&lt;br /&gt;
&lt;br /&gt;
Here is a snip of (a 4.x version) quad2 from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
fsck -y /dev/vn16&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#fsck -y /dev/vn28&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
fsck -y /dev/vn22&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#fsck -y /dev/vn15&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, two of the systems specified are commented out - presumably those customers cancelled, or were moved to new servers.&lt;br /&gt;
&lt;br /&gt;
Note that the vnconfig line is the simpler command line, not the longer one that was used when the filesystem was first configured.  All that is done is: vnconfig the filesystem, then fsck it, then mount it. The fourth command is the `jail` command used to start the system – but that will be covered later.&lt;br /&gt;
&lt;br /&gt;
Here is the safe2 file from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, it is exactly the same, but it does not have the fsck lines.&lt;br /&gt;
&lt;br /&gt;
Take a look at the last entry - note that the file is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
&lt;br /&gt;
and the mount point is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
&lt;br /&gt;
This is the general format on all the FreeBSD systems.  The file is always named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber&lt;br /&gt;
&lt;br /&gt;
and the directory is named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber-DIR&lt;br /&gt;
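Because the convention is rigid, shell helpers can derive the mount point from the file name and split a name back into its parts; a sketch:&lt;br /&gt;

```shell
#!/bin/sh
# Given a vn-file path like /mnt/data2/69.55.238.5-col00106,
# derive the mount point and recover the IP and customer number.
jail_dir()  { echo "$1-DIR"; }
jail_ip()   { basename "$1" | sed 's/-col[0-9]*$//'; }
jail_cust() { basename "$1" | sed 's/^.*-\(col[0-9]*\)$/\1/'; }
```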
&lt;br /&gt;
If you run safe when you need a fsck, the mount will fail and jail will fail:&lt;br /&gt;
&lt;br /&gt;
 # mount /dev/vn1c /mnt/data2/jails/65.248.2.131-ns1.kozubik.com-DIR&lt;br /&gt;
 mount: /dev/vn1c: Operation not permitted&lt;br /&gt;
&lt;br /&gt;
No reboot is needed; just run the quad script instead.&lt;br /&gt;
&lt;br /&gt;
Starting with 6.x jails, we added block delimiters to the quad/safe files, the block looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: nuie.solaris.mu&#039;&lt;br /&gt;
fsck -y /dev/concat/v30v31a&lt;br /&gt;
mount /dev/concat/v30v31a /mnt/data1/69.55.228.218-col01441-DIR&lt;br /&gt;
mount_devfs devfs /mnt/data1/69.55.228.218-col01441-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.228.218-col01441-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.228.218-col01441-DIR nuie.solaris.mu 69.55.228.218 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: nuie.solaris.mu&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These are more than just informative when running quad/safe’s, the echo lines MUST be present for certain tools to work properly. So it’s important that any updates to the hostname also be updated on the 2 echo lines. For example, if you try to startjail a jail with a hostname which is on the jail line but not the echo lines, the command will return with host not found.&lt;br /&gt;
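For illustration, here is a sketch (not the actual tool) of how a jail&#039;s block can be pulled out of a quad file by hostname using those markers; a hostname that matches the jail line but not the echo lines finds nothing, which is exactly the failure mode described above:&lt;br /&gt;

```shell
#!/bin/sh
# jail_block: print everything between the '## begin ##:' and '## end ##:'
# echo lines for a given hostname in a quad file. If the echo lines were
# not updated with the hostname, the range never matches and nothing prints.
jail_block() {
    host="$1"; quad="$2"
    sed -n "/## begin ##: $host'/,/## end ##: $host'/p" "$quad"
}
```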
&lt;br /&gt;
=== FreeBSD 7.x+ notes ===&lt;br /&gt;
&lt;br /&gt;
Starting with the release of FreeBSD 7.x, we are doing jail startups in a slightly different way. First, there is only 1 file: &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/quad1&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are no other quads or corresponding safe files. The reason for this is twofold: 1. We can pass -C to fsck, which tells it to skip the fsck if the fs is clean (no more need for safe files); 2. We have a new startup script which can be launched multiple times, running in parallel to start jails, where quad1 is the master jail file. &lt;br /&gt;
Quad1 could still be run as a shell script, but it would take a very long time to run completely, so it’s not advisable; or you should break it down into smaller chunks (like quad1, quad2, quad3, etc.)&lt;br /&gt;
&lt;br /&gt;
Here is a snip of (a 7.x version) quad1 from jail2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: projects.tw.com&#039;&lt;br /&gt;
mdconfig -a -t vnode -f /mnt/data1/69.55.230.46-col01213 -u 50&lt;br /&gt;
fsck -Cy /dev/md50c&lt;br /&gt;
mount /dev/md50c /mnt/data1/69.55.230.46-col01213-DIR&lt;br /&gt;
mount -t devfs devfs /mnt/data1/69.55.230.46-col01213-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.230.46-col01213-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.230.46-col01213-DIR projects.tw.com 69.55.230.46 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: projects.tw.com&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cancelled jails are no longer commented out and stored in quad1, rather they’re moved to &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/deprecated&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
To start these jails, start the 4 ssh sessions as you would for a normal crash, and then instead of running quad1-4, run startalljails in each window. IMPORTANT: before running startalljails, make sure you have run preboot once, as it clears out all the lockfiles and enables startalljails to work properly.&lt;br /&gt;
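The reason startalljails can run in all 4 windows at once is that each jail gets claimed exactly once. A minimal sketch of that pattern, using an atomic mkdir as a per-jail lockfile (function and path names are hypothetical; the real script differs):&lt;br /&gt;

```shell
#!/bin/sh
# start_all_jails: several copies can run in parallel against the same
# quad file; mkdir is atomic, so each jail is claimed by exactly one
# runner. Clearing $lockdir beforehand is the analogue of running preboot.
start_all_jails() {
    quad="$1"; lockdir="$2"
    mkdir -p "$lockdir"
    # pull each hostname out of the '## begin ##:' marker lines
    sed -n "s/^echo '## begin ##: \(.*\)'$/\1/p" "$quad" | while read host; do
        if mkdir "$lockdir/$host" 2>/dev/null; then
            echo "starting $host"   # the real script runs the jail's block here
        fi
    done
}
```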
&lt;br /&gt;
== Problems with the quad/safe files ==&lt;br /&gt;
&lt;br /&gt;
When you run the quad/safe files, there are two problems that can occur - either a particular system will hang during initialization, OR a system will spit out output to the screen, impeding your ability to do anything.  Or both.&lt;br /&gt;
&lt;br /&gt;
First off, when you start a jail, you see output like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;Skipping disk checks ...&lt;br /&gt;
adjkerntz[25285]: sysctl(put_wallclock): Operation not permitted&lt;br /&gt;
Doing initial network setup:.&lt;br /&gt;
ifconfig: ioctl (SIOCDIFADDR): permission denied&lt;br /&gt;
lo0: flags=8049&amp;lt;UP,LOOPBACK,RUNNING,MULTICAST&amp;gt; mtu 16384&lt;br /&gt;
Additional routing options: TCP keepalive=YESsysctl:&lt;br /&gt;
net.inet.tcp.always_keepalive: Operation not permitted.&lt;br /&gt;
Routing daemons:.&lt;br /&gt;
Additional daemons: syslogd.&lt;br /&gt;
Doing additional network setup:.&lt;br /&gt;
Starting final network daemons:.&lt;br /&gt;
ELF ldconfig path: /usr/lib /usr/lib/compat /usr/X11R6/lib /usr/local/lib&lt;br /&gt;
a.out ldconfig path: /usr/lib/aout /usr/lib/compat/aout /usr/X11R6/lib/aout&lt;br /&gt;
Starting standard daemons: inetd cron sshd sendmail sendmail-clientmqueue.&lt;br /&gt;
Initial rc.i386 initialization:.&lt;br /&gt;
Configuring syscons: blanktime.&lt;br /&gt;
Additional ABI support:.&lt;br /&gt;
Local package initialization:.&lt;br /&gt;
Additional TCP options:.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, let&#039;s look at this line, near the end:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization:.&lt;br /&gt;
&lt;br /&gt;
This is where a list of daemons that are set to start at boot time will show up.  You might see something like:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache sendmail sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
Or something like this:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: postgres postfix apache&lt;br /&gt;
&lt;br /&gt;
The problem is that many systems (about 4-5 per machine) will hang on that line.  Basically it will get partway through the list of daemons to be started:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache&lt;br /&gt;
&lt;br /&gt;
and will just sit there.  Forever.&lt;br /&gt;
&lt;br /&gt;
Fortunately, pressing ctrl-c will break out of it.  Not only will it break out of it, but it will also continue on that same line and start the other daemons:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache ^C sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
and then continue on to finish the startup, and then move to the next system to be started.&lt;br /&gt;
&lt;br /&gt;
So what does this mean?  It means that if a machine crashes, and you start four screen-windows to run four quads or four safes, you need to periodically cycle between them and see if any systems are stuck at that point, causing their quad/safe file to hang.  A good rule of thumb is: if you see a system at that point in the startup, give it another 100 seconds - if it is still at the exact same spot, hit ctrl-c. It&#039;s also a good idea to go back into the quad file (just before the first command in the jail startup block) and note that this jail tends to need a control-c or more time, as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;### NOTE ### slow sendmail&#039;&lt;br /&gt;
echo &#039;### NOTE ###: ^C @ Starting sendmail.&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NEVER&#039;&#039;&#039; hit ctrl-c repeatedly if you don&#039;t get an immediate response - that will cause the following jail’s startup commands to be aborted.&lt;br /&gt;
&lt;br /&gt;
A second problem that can occur is that a jail - maybe the first one in that particular quad/safe, maybe the last one, or maybe one in the middle, will start spitting out status or error messages from one of its init scripts.  This is not a problem - basically, hit enter a few times and see if you get a prompt - if you do get a prompt, that means that the quad/safe script has already completed.  Therefore it is safe to log out (and log out of the user that you su&#039;d from) and then log back in (if necessary).&lt;br /&gt;
&lt;br /&gt;
The tricky thing is, if a system in the middle starts flooding with messages, and you hit enter a few times and don&#039;t get a prompt.  Are you not getting a prompt because some subsequent system is hanging at its initialization, as we discussed above?  Or are you not getting a prompt because that quad file is currently running an fsck?  Usually you can tell by scrolling back in screen’s history to see what it was doing before you started getting the messages.&lt;br /&gt;
&lt;br /&gt;
If you don’t get clues from history, you have to use your judgement - instead of giving it 100 seconds to respond, perhaps give it 2-3 mins ... if you still get no response (no prompt) when you hit enter, hit ctrl-c.  However, be aware that you might still be hitting ctrl-c in the middle of an fsck.  This means you will get an error like &amp;quot;filesystem still marked dirty&amp;quot; and then the vnconfig for it will fail and so will the jail command, and the next system in the quad file will then start starting up.&lt;br /&gt;
&lt;br /&gt;
If this happens, just wait until all the quad files have finished, and then start that system manually.&lt;br /&gt;
&lt;br /&gt;
If things really get weird, like a screen flooded with errors, and you can&#039;t get a prompt, and ctrl-c does nothing, then eventually (give it ten mins or so) just kill that window with ctrl-p, then k, then log in again, manually check which systems are now running and which aren&#039;t, and manually start up any that are not.&lt;br /&gt;
&lt;br /&gt;
Don&#039;t EVER risk running a particular quad/safe file a second time.&lt;br /&gt;
If the quad/safe script gets executed twice, reboot the machine immediately.&lt;br /&gt;
&lt;br /&gt;
So, for all the above reasons, anytime a machine crashes and you run all the quads or all the safes, &#039;&#039;&#039;always&#039;&#039;&#039; check every jail afterwards to make sure it is running - even if you have no hangs or complications at all.&lt;br /&gt;
Run this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailpsall|jailpsall]]&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Note: [[#postboot|postboot]] also populates ipfw counts, so it &#039;&#039;&#039;should not be run multiple times&#039;&#039;&#039;,  use &amp;lt;tt&amp;gt;jailpsall&amp;lt;/tt&amp;gt; for subsequent extensive ps’ing&lt;br /&gt;
&lt;br /&gt;
And make sure they all show as running.  If one does not show as running, check its /etc/rc.conf file to see if maybe it is using a different hostname before starting it manually.&lt;br /&gt;
&lt;br /&gt;
One thing we have implemented to alleviate these startup hangs and noisy jails is to put jail start blocks that are slow or prone to hanging at the bottom of the safe/quad file. Further, for each bad jail we note in each quad/safe, just before the start block, something like:&lt;br /&gt;
&lt;br /&gt;
 echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
&lt;br /&gt;
That way we’ll be prepared to ^C when we see that message appear during the quad/safe startup process. If you observe a new, undocumented hang, &#039;&#039;&#039;after&#039;&#039;&#039; the quad/safe has finished, place a line similar to the above in the quad file, move the jail start block to the end of the file, then run [[#buildsafe|buildsafe]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 4.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which partition, IP to put it on, find an unused vn, and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vnX hostname shorthost quad/safe# ipfw# email [gb disk, default 4]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail14# /tmp/jailmake 69.55.226.152 /mnt/data2/69.55.226.152-col00182 vn23 negev.cerastes.org negev 4 182 cerastes@cerastes.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 6.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which gvinum vols are free (and which mnt (disk) each gvinum goes with/to), IP to put it on and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vN[,vN] hostname shorthost quad/safe# ipfw# email&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail19# jailmake 69.55.236.17 /mnt/data1/69.55.236.17-col01574 v10 uncle-leo.commadev.com uncle-leo 1 1574 lee@commadev.com,lee@gmail.com&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Changing an ip for freebsd VPS ==&lt;br /&gt;
&lt;br /&gt;
*stopjail (hostname)&lt;br /&gt;
*on systems using vnfile or mdfile: rename vnfile/mdfile, using new ip&lt;br /&gt;
*rename dir using new ip&lt;br /&gt;
*Edit quad (make sure to update all lines: directory, vn/mdfile)&lt;br /&gt;
*buildsafe (not necessary on systems having only quad1)&lt;br /&gt;
*if you&#039;re adding an ip not already on the host machine: &amp;lt;tt&amp;gt;ipadd x.x.x.x&amp;lt;/tt&amp;gt;&lt;br /&gt;
*startjail (hostname)&lt;br /&gt;
if backups: &amp;lt;tt&amp;gt;mvbackups&amp;lt;/tt&amp;gt; ...&lt;br /&gt;
*Edit dir/ip in mgmt&lt;br /&gt;
*if any rules, update any firewall rules to use the new ip (be careful: make sure there aren&#039;t multiple rules per ipfw# - search by colo#)&lt;br /&gt;
*(if customer asks and has domains) update their domains on ns1c&lt;br /&gt;
*optional: update ipfw counters on the host server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Rename a gconcat vol ==&lt;br /&gt;
&lt;br /&gt;
*stop jail, unmount everything&lt;br /&gt;
*gconcat stop vol&lt;br /&gt;
*gconcat clear /dev/gvinum/vN&lt;br /&gt;
*gconcat clear /dev/gvinum/vN+1&lt;br /&gt;
*gconcat label -v newname /dev/gvinum/vN /dev/gvinum/vN+1&lt;br /&gt;
*bsdlabel /dev/concat/newname&lt;br /&gt;
(make sure a partition is not fstype unused, if so change to: 4.2BSD)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Remaking a VPS (on same jail) ==&lt;br /&gt;
&lt;br /&gt;
# take him down (stopjail)&lt;br /&gt;
# make note of host, vn/gvin/md, ip&lt;br /&gt;
# remove from quad/safe&lt;br /&gt;
# move vnfile (rename)&lt;br /&gt;
# remove dir (jailmake will make a new one)&lt;br /&gt;
# remove ipfw counts&lt;br /&gt;
# jailmake&lt;br /&gt;
# update backup script&lt;br /&gt;
# update db with new dir/ip (use the “move” link, mark old as stopped today)&lt;br /&gt;
# update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (FreeBSD) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all jails back up as soon as possible. Note the time; you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there’s any kernel error message output. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log. If there are no messages, the machine may just be really busy - wait a bit (5-10min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. Note, if you see messages about swap space exhausted, the server is obviously out of memory; however, it may recover briefly enough for you to get a jtop in to see who&#039;s launched a ton of procs (most likely) and then issue a quick jailkill to get it back under control.&lt;br /&gt;
&lt;br /&gt;
If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. if you can’t ssh into the DRAC card (as root, using the standard root pass) and issue&lt;br /&gt;
 racadm serveraction hardreset&lt;br /&gt;
), then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console:&lt;br /&gt;
 tip jailX&lt;br /&gt;
immediately after power down.&lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts- look out for the RAID card output- does everything look healthy? The output may be scrambled, look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple times during the boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly then you won’t get any console output, 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
IMPORTANT NOTE: on some older FreeBSD systems, there will be no output to the video (KVM) console as it boots up. The console output is redirected to the serial port ... so if a jail crashes, and you attach a kvm, the output during the bootup procedure will not be shown on the screen. However, when the bootup is done, you will get a login prompt on the screen and will be able to log in as normal.  &amp;lt;tt&amp;gt;/boot/loader.conf&amp;lt;/tt&amp;gt; is where serial console redirect output lives, so comment that if you want to catch output on kvm.&lt;br /&gt;
On newer systems it sends most output to both locations. &lt;br /&gt;
&lt;br /&gt;
=== Assess the health of the server ===&lt;br /&gt;
Once the server boots up fully, you should be able to ssh in. Look around- make sure all the mounts are there and reporting the correct size/usage (i.e. /mnt/data1 /mnt/data2 /mnt/data3 - look in /etc/fstab to determine which mount points should be there), check to see if RAID mirrors are healthy. See [[RAID_Cards#Common_CLI_commands_.28megacli.29|megacli]], [[#aaccheck|aaccheck]]&lt;br /&gt;
&lt;br /&gt;
Before you start the jails, you need to run [[#preboot|preboot]]. This will do some assurance checks to make sure things are prepped to start the jails. Any issues that come out of preboot need to be addressed before starting jails.&lt;br /&gt;
&lt;br /&gt;
=== Start jails ===&lt;br /&gt;
[[#Starting_jails:_Quad.2FSafe_Files|More on starting jails]]&lt;br /&gt;
Customer jails (the VPSs) do not start up automatically at boot time. When a FreeBSD machine boots up, it starts only the base system- no jails. To start jails, we put the commands to start each jail into a shell script(s) and run the script(s). Jail startup is something that needs to be actively monitored, which is why we don’t just run the script automatically. &lt;br /&gt;
&lt;br /&gt;
In order to start jails, we run the quad files: quad1 quad2 quad3 and quad4 (on new systems there is only quad1). If the machine was cleanly rebooted (which wouldn&#039;t be the case if this was a crash), you may run the safe files (safe1 safe2 safe3 safe4) in lieu of the quads. &lt;br /&gt;
&lt;br /&gt;
Open up 4 logins to the server (use the windows in [[Screen#Screen_Organization|a9]])&lt;br /&gt;
In each of the 4 windows you will:&lt;br /&gt;
&lt;br /&gt;
If there is a [[#startalljails|startalljails]] script (and only quad1), run that command in each of the 4 windows. It will parse through the quad1 file and start each jail. Follow the instructions [[#Problems_with_the_quad.2Fsafe_files|here]] for monitoring startup. Note that you can be a little more lenient with jails that take a while to start- startalljails will work around the slow jails and start the rest. As long as there aren&#039;t 4 jails which are &amp;quot;hung&amp;quot; during startup, the rest will get started eventually.&lt;br /&gt;
	-or-&lt;br /&gt;
If there is no startalljails script, there will be multiple quad files. In each of the 4 windows, start one of the quads, i.e. start quad1 in window1, quad2 in window2 and so on. DO NOT start any quad twice- it will crash the server. If you accidentally do this, just jailkill all the jails which are in the quad and run the quad again. Follow the instructions [[#Problems_with_the_quad.2Fsafe_files|here]] for monitoring quad startup.&lt;br /&gt;
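Because starting a quad twice will crash the server, a simple lock can guard against it. This is a minimal sketch under stated assumptions (the `start_quad` wrapper and the lock directory are hypothetical, not an existing site tool; quad files are assumed to be plain shell scripts):

```shell
#!/bin/sh
# Refuse to run a quad script that appears to have been started already.
# LOCKDIR location is an assumption- adjust to taste.
LOCKDIR="${LOCKDIR:-/var/run/quadlocks}"

start_quad() {
    quad="$1"
    mkdir -p "$LOCKDIR"
    # mkdir is atomic: it fails if the lock already exists
    if ! mkdir "$LOCKDIR/$(basename "$quad").lock" 2>/dev/null; then
        echo "REFUSING: $quad appears to have been started already" >&2
        return 1
    fi
    sh "$quad"
}
```

If a quad does get started twice anyway, the recovery is as described above: jailkill all the jails in that quad and run the quad again.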
&lt;br /&gt;
Note the time the last jail boots- this is what you will enter in the crash log.&lt;br /&gt;
&lt;br /&gt;
Save the crash log.&lt;br /&gt;
&lt;br /&gt;
=== Check to make sure all jails have started ===&lt;br /&gt;
There&#039;s a simple script which will make sure all jails have started, and enter the ipfw counter rules: [[#postboot|postboot]] &lt;br /&gt;
Run postboot, which will do a jailps on each jail it finds (excluding commented out jails) in the quad file(s). We&#039;re looking for 2 things:&lt;br /&gt;
# systems spawning out of control or too many procs&lt;br /&gt;
# jails which haven&#039;t started&lt;br /&gt;
On 7.x and newer systems it will print out the problems (which jails haven&#039;t started) at the conclusion of postboot. &lt;br /&gt;
On older systems you will need to watch closely to see if/when there&#039;s a problem, namely:&lt;br /&gt;
 &lt;br /&gt;
 [hostname] doesnt exist on this server&lt;br /&gt;
&lt;br /&gt;
When you get this message, it means one of 2 things:&lt;br /&gt;
1. the jail really didn&#039;t start:&lt;br /&gt;
When a jail doesn&#039;t start it usually boils down to a problem in the quad file. Perhaps the path name is wrong (data1 vs data2) or the name of the vn/mdfile is wrong. Once this is corrected, you will need to run the commands from the quad file manually, or you may use &amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. the customer has changed their hostname (and not told us) so their jail &#039;&#039;is&#039;&#039; running, just under a different hostname:&lt;br /&gt;
On systems with jls, this is easy to rectify. First, get the customer info: &amp;lt;tt&amp;gt;g &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
Then look for the customer in jls: &amp;lt;tt&amp;gt;jls | grep &amp;lt;col0XXXX&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
From there you will see their new hostname- you should update that hostname in the quad file: don&#039;t forget to edit it on the &amp;lt;tt&amp;gt;## begin ##&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;## end ##&amp;lt;/tt&amp;gt; lines, and in mgmt. &lt;br /&gt;
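The jls lookup can be sketched as a small filter. The column layout (JID, IP, Hostname, Path) and the sample rows in the usage below are illustrative assumptions- verify field positions against real jls output on the system:

```shell
# Find the hostname a customer's jail is actually running under by
# matching the customer number (col0XXXX) against the jail's path
# column. Column layout is an assumption- check your jls output.
find_jail_hostname() {
    cust="$1"
    jls_output="$2"
    printf '%s\n' "$jls_output" | awk -v c="$cust" '$4 ~ c { print $3 }'
}
```

On a live system this would be invoked as, e.g., `find_jail_hostname col01334 "$(jls)"`.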
On older systems without jls, this will be harder: you will need to look further to find their hostname- perhaps it&#039;s in their /etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once all jails are started, do some spot checks- try to ssh or browse to some customers, just to make sure things are really ok.&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 7.x/8.x jail ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (md) ==&lt;br /&gt;
&lt;br /&gt;
NOTE: this doesn’t apply to mx2 which uses gvinum. Use same procedure as 6.x&lt;br /&gt;
NOTE: if you unmount before mdconfig, re-mdconfig (attach) then unmount then mdconfig -u again &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
(parts to change/customize are &amp;lt;tt&amp;gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from &amp;lt;tt&amp;gt;js&amp;lt;/tt&amp;gt;. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new md on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. Unmount it (including their devfs) but leave the md config’d (if you use stopjail instead, you will have to re-mdconfig it)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mdfile and mount name, and to see the current md device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. When there&#039;s enough room to place new system on an alternate, or the same drive:&lt;br /&gt;
USE CAUTION not to overwrite (touch, mdconfig) existing md!!&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s 10g -f /mnt/data3/69.55.234.66-col01334 -u 97&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md97&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new space is on a different drive, move the mount point directory AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm the restored data size matches the original usage figure&lt;br /&gt;
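The df comparison can be scripted as a rough sanity check. This helper and its default 10% tolerance are assumptions, not an existing site tool:

```shell
# Rough check that the restored usage (in KB) is within N percent of
# the original usage noted earlier. Pure arithmetic; feed it the two
# "Used" figures from df -k.
sizes_match() {
    orig=$1; restored=$2; tol=${3:-10}
    diff=$((orig - restored))
    [ "$diff" -lt 0 ] && diff=$((-diff))
    # allow up to tol percent drift (fs overhead, sparse files, etc.)
    [ $((diff * 100)) -le $((orig * tol)) ]
}
```

For example: `sizes_match 5242880 5100000 && echo OK || echo "MISMATCH- investigate before removing the old mdfile"`.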
&lt;br /&gt;
&lt;br /&gt;
md-unconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
archive old mdfile. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/old-col00241-mdfile-noarchive-20091211&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (vq1) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
(run &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, &amp;lt;tt&amp;gt;du&amp;lt;/tt&amp;gt; the dump file to confirm its size roughly matches the usage figure)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure and remove old mdfile&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
rm /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(there should now be enough space to recreate your bigger system. If not, run sync a couple times)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new system (ok to reuse old mdfile and md#):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;10&amp;lt;/span&amp;gt;g -f /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt; -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
umount nfs:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbu&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If md# changed (or mount point), edit quad (&amp;lt;tt&amp;gt;vq1&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. Optional: archive old mdfile&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;br&amp;gt;&lt;br /&gt;
gzip -c old-col01588-mdfile-noarchive-20120329 &amp;gt; /deprecated/old-col01588-mdfile-noarchive-20120329.gz&amp;lt;br&amp;gt;&lt;br /&gt;
mbu&amp;lt;br&amp;gt;&lt;br /&gt;
rm  old-col01588-mdfile-noarchive-20120329&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 6.x jail (gvinum/gconcat) ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (gvinum/gconcat) ==&lt;br /&gt;
&lt;br /&gt;
(parts to change are &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;highlighted&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new volume on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;[[#stopjail|stopjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;[[#g|g]] &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mount name and existing volume/device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5a. When there&#039;s enough room to place new system on an alternate, or the same drive (using only UNUSED - including if it&#039;s in use by the system in question - gvinum volumes):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure the new device:&amp;lt;br&amp;gt;&lt;br /&gt;
A. for a 2G system (single gvinum volume):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;bsdlabel -r -w /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
-or- &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
B. for a &amp;gt;2G system (create a gconcat volume):&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Other valid gconcat examples:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label v82-v84v109v112 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v109 /dev/gvinum/v112&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat label v82v83 /dev/gvinum/v82 /dev/gvinum/v83&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Note: long names will be truncated: v144v145v148-v115 will truncate to v144v145v148-v1 (so you will refer to it as v144v145v148-v1 thereafter)&lt;br /&gt;
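Judging from the v144v145v148-v115 example, geom names appear to cap at 15 characters. A tiny sketch (hypothetical helper, assuming a 15-character limit) predicts the truncated name ahead of time:

```shell
# Predict how a long gconcat label will come out, assuming the
# 15-character limit inferred from the v144v145v148-v115 example.
gconcat_name() {
    printf '%.15s\n' "$1"
}
```

Picking a label that already fits (15 characters or fewer) avoids having to remember the truncated form at all.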
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; OR &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
A. &amp;lt;tt&amp;gt;mount /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
-or-&amp;lt;br&amp;gt;&lt;br /&gt;
B. &amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;) , run &amp;lt;tt&amp;gt;buildsafe&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space (i.e. if you want/need to reuse the existing gvinum volumes and add on more):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
(run df to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;gconcat/v106-v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, du the dump file to confirm its size roughly matches the usage figure)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure the old gconcat volume&amp;lt;br&amp;gt;&lt;br /&gt;
list member gvinum volumes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat list &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Output will resemble:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;Geom name: v106v107&lt;br /&gt;
State: UP&lt;br /&gt;
Status: Total=2, Online=2&lt;br /&gt;
Type: AUTOMATIC&lt;br /&gt;
ID: 3530663882&lt;br /&gt;
Providers:&lt;br /&gt;
1. Name: concat/v106v107&lt;br /&gt;
   Mediasize: 4294966272 (4.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e2&lt;br /&gt;
Consumers:&lt;br /&gt;
1. Name: gvinum/sd/v106.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 0&lt;br /&gt;
   End: 2147483136&lt;br /&gt;
2. Name: gvinum/sd/v107.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 2147483136&lt;br /&gt;
   End: 4294966272&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
stop volume and clear members&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat stop &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat clear &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;gvinum/sd/v106.p0.s0 gvinum/sd/v107.p0.s0&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create new device- and its ok to reuse old/former members&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v106 /dev/gvinum/v107&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;), run buildsafe&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
TODO: clean up/clear old gvin/gconcat vol&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - steps to tack a new gvinum volume onto an existing gconcat- this leads to a corrupted fs&lt;br /&gt;
bsdlabel -e /dev/concat/v82-v84&lt;br /&gt;
&lt;br /&gt;
To figure out new size of the c partition, multiply 4194304 by the # of 2G gvinum volumes and subtract the # of 2G volumes:&lt;br /&gt;
10G: 4194304 * 5 - 5 = 20971515&lt;br /&gt;
8G: 4194304 * 4 - 4 = 16777212&lt;br /&gt;
6G: 4194304 * 3 - 3 = 12582909&lt;br /&gt;
4G: 4194304 * 2 - 2 = 8388606&lt;br /&gt;
&lt;br /&gt;
To figure out the new size of the a partition, subtract 16 from the c partition:&lt;br /&gt;
10G: 20971515 - 16 = 20971499&lt;br /&gt;
8G: 16777212 - 16 = 16777196&lt;br /&gt;
6G: 12582909 - 16 = 12582893&lt;br /&gt;
4G: 8388606 - 16 = 8388590&lt;br /&gt;
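The tables above can be computed instead of memorized. A sketch of the arithmetic, with the constants taken directly from the formulas above (the helper names are hypothetical):

```shell
# bsdlabel sizes for a gconcat of N 2G gvinum volumes:
#   c partition = 4194304 sectors per volume, minus one sector per volume
#   a partition = c minus the 16-sector label offset
c_size() { echo $((4194304 * $1 - $1)); }
a_size() { echo $(($(c_size "$1") - 16)); }
```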
&lt;br /&gt;
Orig:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a:  8388590       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c:  8388606        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
New:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a: 12582893       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c: 12582909        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
sync; sync&lt;br /&gt;
&lt;br /&gt;
growfs /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
sync&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
(keep running fsck’s till NO errors)&lt;br /&gt;
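"Keep running fsck’s till NO errors" can be expressed as a retry loop. This sketch assumes the command exits nonzero while problems remain, which may not hold for every fsck variant- checking the output for corrections is the conservative approach:

```shell
# Re-run a command until it succeeds (e.g. repeated fsck passes).
# Assumes a nonzero exit status while problems remain- verify against
# your fsck's actual exit-status behavior before relying on it.
run_until_clean() {
    until "$@"; do
        echo "still dirty, re-running: $*" >&2
    done
}
```

For example: `run_until_clean fsck -fy /dev/concat/v82-v84a`.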
&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 4.x jail ==&lt;br /&gt;
&lt;br /&gt;
NOTE: if you unmount before vnconfig, re-vnconfig then unmount then vnunconfig&lt;br /&gt;
 &lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them.&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to dump to a different partition and create the new vn on the current partition. If there is no space to make the dump, then move old vn’s to backup or, worse, dump to backup.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. make vnfile, newfs and mount. USE CAUTION not to overwrite existing vn. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;vnconfig -T -S 9g -s labels -c /dev/vn30 /mnt/data2/65.214.160.117-col00496&amp;lt;br&amp;gt;&lt;br /&gt;
disklabel -r -w vn30 auto&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/vn30c&amp;lt;br&amp;gt;&lt;br /&gt;
mkdir /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/vn30c /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;dump -0a -f - /dev/vn1 | restore -r -f -&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. &amp;lt;tt&amp;gt;rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. unmount and unvnconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;umount /dev/vnNNc &amp;lt;br&amp;gt;&lt;br /&gt;
vnconfig -u /dev/vnNN&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. move vnfile. Ex: &amp;lt;tt&amp;gt;mv /mnt/data1/69.55.237.26-col00241 /mnt/data1/old-col00241-vnfile-20110312-noarchive&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove old dir: &amp;lt;tt&amp;gt;rmdir /mnt/data1/69.55.237.26-col00241-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. edit quad/safe to point to new location, run &amp;lt;tt&amp;gt;[[#buildsafe|buildsafe]]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start jail: &amp;lt;tt&amp;gt;[[#startjail|startjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
13. mv backups if necessary&lt;br /&gt;
 &lt;br /&gt;
WARNING: if you restore to an unmounted vn directory, you’re actually restoring to the underlying /mnt/data1 filesystem&lt;br /&gt;
if that happens, you can tar up the restored files and follow the steps above to recover&lt;br /&gt;
 &lt;br /&gt;
NOTE: you can’t move a customer to a system whose base system differs from the one they came from (e.g. BSD 4.8 to 4.5 won’t work)&lt;br /&gt;
 &lt;br /&gt;
NOTE: we don’t mount procfs on anyone’s system by default.&lt;br /&gt;
&lt;br /&gt;
NOTE: mount_nulls: &lt;br /&gt;
goofy-looking mount_nulls may also be seen in df -k on jail2&amp;lt;br&amp;gt;&lt;br /&gt;
null mounts also need to be added in the safe and quad files&amp;lt;br&amp;gt;&lt;br /&gt;
if a customer wants another null mount, use the existing safe/quad entries as an example and add it to the file; also exec the mount command at the prompt to add the mount live (no reboot, remount or rejail necessary)&lt;br /&gt;
&lt;br /&gt;
== Moving customer to another jail machine ==&lt;br /&gt;
&lt;br /&gt;
Systems should only be moved between similarly-versioned jails. If you cannot get a match, you should go to freebsd.org and see what the differences were between the version you&#039;re moving from and the new version. Many of the updates will be to drivers and kernels - these don&#039;t affect the customer. Perhaps they updated traceroute or a library...you should copy the new/changed files into the VPS/jail on the new host. An alternate method for moving to a different version would be to buildworld to bring their full distribution up to date. Optional steps for this kind of upgrade are included below.&lt;br /&gt;
&lt;br /&gt;
1. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;[[#jailkill|jailkill]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. create new device on target system&amp;lt;br&amp;gt;&lt;br /&gt;
4.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig -T -S 4g -s labels -c /dev/vn1 /mnt/data1/69.55.22x.x-col00XXX&lt;br /&gt;
disklabel -r -w vn1 auto&lt;br /&gt;
newfs /dev/vn1c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;bsdlabel -r -w /dev/gvinum/v1&lt;br /&gt;
newfs /dev/gvinum/v1a&lt;br /&gt;
 or &lt;br /&gt;
gconcat label v1-v3 /dev/gvinum/v1 /dev/gvinum/v2 /dev/gvinum/v3&lt;br /&gt;
bsdlabel -r -w /dev/concat/v1-v3&lt;br /&gt;
newfs /dev/concat/v1-v3a&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7.x+:&amp;lt;br&amp;gt;&lt;br /&gt;
Run jailmakeempty, then skip steps 4 &amp;amp; 7 below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. make and mount the dir on new system. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;pre&amp;gt;mkdir /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
chmod 755 /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
mount [device] /mnt/data2/69.55.230.3-col00123-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
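The mkdir/mount example above follows the mount-point naming convention used throughout this runbook: /mnt/dataN/IP-CID-DIR. A tiny illustrative helper (the values are the example’s; the helper itself is not a real site script):&lt;br /&gt;

```shell
# Build the conventional jail mount-point path from its parts.
# Values are taken from the example above; this is illustrative only.
ip=69.55.230.3
cid=col00123
disk=2
dir="/mnt/data${disk}/${ip}-${cid}-DIR"
echo "$dir"
```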
&lt;br /&gt;
&lt;br /&gt;
5. [[#stopjail|stopjail]] &amp;lt;hostname&amp;gt; 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. dump fs to new system. ex: &amp;lt;tt&amp;gt;[[#dumpremoterestore|dumpremoterestore]] /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&amp;lt;/tt&amp;gt; (make sure you can ssh as root on remote machine) &lt;br /&gt;
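dumpremoterestore is a local wrapper script, so its exact flags and behavior are an assumption here; the underlying mechanism is presumably dump(8) piped into restore(8) over ssh. A hedged sketch, printed as a dry run with the example’s device/host/path:&lt;br /&gt;

```shell
# Hedged sketch of a dump-over-ssh transfer; the real dumpremoterestore
# script may differ, so this only echoes the presumed pipeline (dry run).
dev=/dev/vn51
host=10.1.4.118
dir=/mnt/data2/69.55.239.45-col00688-DIR
echo "dump -0af - $dev | ssh root@$host 'cd $dir; restore -rf -'"
```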
&lt;br /&gt;
&lt;br /&gt;
6a. OPTIONAL BUILDWORLD:&amp;lt;br&amp;gt;&lt;br /&gt;
 cd /usr/src&lt;br /&gt;
 make world DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
 cd etc&lt;br /&gt;
 make distribution DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
(you may have to rm an openssh file and re-make dist)&lt;br /&gt;
&lt;br /&gt;
 rm -rf /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/periodic/daily/400.status-disks&lt;br /&gt;
 vi /etc/periodic/security/100.chksetuid&lt;br /&gt;
replace: &amp;lt;tt&amp;gt;MP=`mount -t ufs | grep -v &amp;quot; nosuid&amp;quot; | awk &#039;{ print $3 }&#039; | sort`&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
with: &amp;lt;tt&amp;gt;MP=&#039;/&#039; (use single quotes)&amp;lt;/tt&amp;gt;&lt;br /&gt;
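The MP= edit above can be done non-interactively with sed. A hedged sketch demonstrated on a scratch file (on the live FreeBSD file the in-place form would be sed -i &#039;&#039; against /etc/periodic/security/100.chksetuid):&lt;br /&gt;

```shell
# Replace the MP= line with MP='/' (single quotes, as the steps above say).
# Demonstrated on a scratch file so nothing real is touched.
f=$(mktemp)
echo 'MP=`mount -t ufs | grep -v nosuid | awk "{ print \$3 }" | sort`' > "$f"
sed "s|^MP=.*|MP='/'|" "$f"
rm -f "$f"
```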
&lt;br /&gt;
Ask user if they want ports overwritten with current, if yes:&lt;br /&gt;
 cp -r /usr/ports /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/usr&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d .&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d . &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cat /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target:&lt;br /&gt;
 cat &amp;gt; /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. edit quad on the source system and copy the jail’s entries over to the target system &lt;br /&gt;
take care that the vn/gvinum/gconcat/md devices on the target system aren’t in use and that the /mnt/dataN path doesn’t need changing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. run [[#buildsafe|buildsafe]] (if this is &amp;lt;=6.x) on target system (copies changes made to quad into safe file)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove ip from source system. Ex: &amp;lt;tt&amp;gt;ipdel 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. add ip to target system. Ex: &amp;lt;tt&amp;gt;ipadd 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start new system, with [[#startjail|startjail]] or manually, pasting entries found from running &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; on the new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. run &amp;lt;tt&amp;gt;[[#canceljail|canceljail]] col0xxxx&amp;lt;/tt&amp;gt; on source system. This should prompt you to remove backups if any existed. Do not let it do this, rather:&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
13. if backups existed move them to the new host via the &amp;lt;tt&amp;gt;[[#mvbackups_.28freebsd.29|mvbackups]] script&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
14. edit quad on source system, edit comment to reflect a move rather than a cancel, ex: &amp;lt;tt&amp;gt;# moved to jail2 col00241&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
15. edit mgmt to reflect new host and dir for new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
16. optional add ipfw rules on new system. Ex:&lt;br /&gt;
&amp;lt;tt&amp;gt;ipfw add 01231 count ip from 69.55.230.2 to any&lt;br /&gt;
ipfw add 01232 count ip from any to 69.55.230.2&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Increasing inodes for a VPS ==&lt;br /&gt;
&lt;br /&gt;
when doing the newfs: &lt;br /&gt;
 newfs -i  4096 ...&lt;br /&gt;
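The -i flag to newfs is bytes of data space per inode, so the inode count scales inversely with it. Rough arithmetic for a 4G vn (ignoring filesystem overhead):&lt;br /&gt;

```shell
# newfs -i N gives roughly (fs size / N) inodes; halving N doubles inodes.
size=$((4 * 1024 * 1024 * 1024))
echo "default -i 8192: $((size / 8192)) inodes"
echo "with    -i 4096: $((size / 4096)) inodes"
```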
&lt;br /&gt;
&lt;br /&gt;
== Jail services unable to contact services within same jail ==&lt;br /&gt;
&lt;br /&gt;
This is due to a messed up routing table (only seen it on jail2 before). &lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.228.65/32&lt;br /&gt;
 route add -net 69.55.228.65/32 -iface fxp0 -nostatic -cloning&lt;br /&gt;
 route add 69.55.228.65 -iface lo0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== /dev/null permission resets == &lt;br /&gt;
&lt;br /&gt;
Applies to 4.x jails only&lt;br /&gt;
&lt;br /&gt;
For some reason that I do not understand at all, the /dev/null node in customer jails, after the system is restarted, often reverts permissions to 0600 ... which is bad, because non-root processes cannot redirect to /dev/null anymore.  In fact, a fair number of server daemons will complain and/or fail if this is the case.&lt;br /&gt;
&lt;br /&gt;
So, after a reboot, once all systems have restarted, run postboot, which does something like:&lt;br /&gt;
 &lt;br /&gt;
 for f in `df -k | grep /dev/vn | awk &#039;{print $6}&#039;` ; do chmod 0666 $f/dev/null ; done&lt;br /&gt;
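The postboot loop’s effect can be demonstrated on scratch dirs standing in for jail roots (the df/vn devices are faked here; the real loop iterates df output as shown above):&lt;br /&gt;

```shell
# Demo of the /dev/null permission fix: start at 0600 (the bad post-reboot
# state), chmod every jail's dev/null to 0666, then show the result.
base=$(mktemp -d)
for j in a b; do
    mkdir -p "$base/$j/dev"
    : > "$base/$j/dev/null"
    chmod 0600 "$base/$j/dev/null"
done
for f in "$base"/*; do chmod 0666 "$f/dev/null"; done
ls -l "$base/a/dev/null" | cut -c1-10
rm -rf "$base"
```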
&lt;br /&gt;
Later, after the system is up, if anyone ever complains about their /dev/null permissions for any reason ... really if anyone mentions&lt;br /&gt;
/dev/null in any way, tell them to run:&lt;br /&gt;
&lt;br /&gt;
 chmod 0666 /dev/null&lt;br /&gt;
&lt;br /&gt;
We have ultimately fixed this by adding chmods to the quad/safe&#039;s&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== mknod in a jail == &lt;br /&gt;
&lt;br /&gt;
For a customer who wants to run a chroot name server (named) in a jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /mnt/data1/&amp;lt;user directory&amp;gt;/var/named&lt;br /&gt;
mknod -c 1 1 null&lt;br /&gt;
mknod -c 2 3 random&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Postfix Problems ==&lt;br /&gt;
&lt;br /&gt;
This hasn&#039;t happened in a long time, probably no longer valid with modern OS&#039;s&lt;br /&gt;
&lt;br /&gt;
Postfix is an alternate MTA - people replace sendmail with it in much the same way that many people also replace sendmail with qmail.  The problem is that if you install postfix inside of a jail, by default it will not work properly - the user will see error messages like this:&lt;br /&gt;
&lt;br /&gt;
incoming mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 mail_queue_enter: create file incoming/298266.55902: File too large&lt;br /&gt;
&lt;br /&gt;
and further, sending mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 postdrop: warning: mail_queue_enter: create file maildrop/627930.56676: File too large&lt;br /&gt;
&lt;br /&gt;
This is very easy to solve.  In fact, the FreeBSD welcome email that is sent out by `jailmake` now contains this block of text:&lt;br /&gt;
&lt;br /&gt;
- if you plan on installing postfix, email us first and ask for a necessary patch.&lt;br /&gt;
&lt;br /&gt;
So, anytime you see a support email that says anything about postfix not working, paste that line from the welcome email into your response to them, and right away, forward the postfix patch to them in an email.&lt;br /&gt;
&lt;br /&gt;
Here are the instructions:&lt;br /&gt;
&lt;br /&gt;
Simply copy the patch into /usr/ports/mail/postfix/files, rename the file to ‘patch-file_limit.c’ and then cd to /usr/ports/mail/postfix and run `make install`.&lt;br /&gt;
&lt;br /&gt;
The postfix patch is the very first email in the support email box, and we never delete or save it because we always want it there to forward to people.  The instructions in the email are very clear, and it solves their problem 100% of the time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems with `find` ==&lt;br /&gt;
&lt;br /&gt;
There was a bug in the FreeBSD vn-filesystem code.  The bug causes the system to crash or hang when someone runs the `find` command inside their vn-backed filesystem (and even though the bug may have been fixed, due to the i/o overhead, we still don&#039;t like find).&lt;br /&gt;
&lt;br /&gt;
Now, this does not always happen - however, any jail machine with more than 20 systems on it will crash every night if every system on it runs the daily periodic script out of their crontab, and updates their locate database with the find command.&lt;br /&gt;
&lt;br /&gt;
(the locate database is a small, FreeBSD specific database that is populated nightly from a cron job, and is then used to provide fast answers to the `locate` command)&lt;br /&gt;
&lt;br /&gt;
In normal systems (32-40 jails on them), all that has to be done is make sure nobody runs the daily periodic.  This means that the file /etc/crontab, instead of looking like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# do daily/weekly/monthly maintenance&lt;br /&gt;
1      3       *       *       *       root    periodic daily&lt;br /&gt;
15     4       *       *       6       root    periodic weekly&lt;br /&gt;
30     5       1       *       *       root    periodic monthly&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Needs to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# DO NOT UNCOMMENT THESE - contact support@johncompanies.com for details&lt;br /&gt;
#1      3       *       *       *       root    periodic daily&lt;br /&gt;
#15     4       *       *       6       root    periodic weekly&lt;br /&gt;
#30     5       1       *       *       root    periodic monthly&lt;br /&gt;
## DO NOT UNCOMMENT THESE ^^^^^^^^&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the problem will generally not occur at all.  New filesystem images are always altered in this way, so all systems are set like this.&lt;br /&gt;
&lt;br /&gt;
However, if you have enough systems running, even casual uses of find, outside of the periodic scripts, can cause the system to crash.  We are seeing this on jail13, which crashes about once per week.&lt;br /&gt;
&lt;br /&gt;
There is not yet any resolution to this problem.&lt;br /&gt;
&lt;br /&gt;
So, if you have a freebsd system crash, see if anyone has uncommented their daily periodic line - you can see them for everyone by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;for f in `df -k | grep vn | awk &#039;{print $6}&#039;` ; do echo `cat $f/etc/crontab | grep &amp;quot;periodic daily&amp;quot;` $f ; done | more&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(all one line, one command)&lt;br /&gt;
&lt;br /&gt;
And then edit those /etc/crontabs and comment them out again.&lt;br /&gt;
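The scan’s logic can be demoed against fake jail roots (paths made up with mktemp; the real loop iterates df output as shown above):&lt;br /&gt;

```shell
# Flag any jail crontab whose "periodic daily" line is NOT commented out.
base=$(mktemp -d)
mkdir -p "$base/jailA/etc" "$base/jailB/etc"
echo '#1 3 * * * root periodic daily' > "$base/jailA/etc/crontab"
echo '1 3 * * * root periodic daily'  > "$base/jailB/etc/crontab"
for f in "$base"/jail*; do
    # ^[^#] anchors to line start, so a leading # means "commented, ok"
    if grep -q '^[^#].*periodic daily' "$f/etc/crontab"; then
        echo "UNCOMMENTED: $f"
    fi
done
rm -rf "$base"
```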
&lt;br /&gt;
&lt;br /&gt;
== Problems un-mounting - and with mount_null’s ==&lt;br /&gt;
&lt;br /&gt;
If you cannot unmount a filesystem because it says the filesystem is busy, it is usually due to one of the following:&lt;br /&gt;
&lt;br /&gt;
a) the jail is still running&lt;br /&gt;
&lt;br /&gt;
b) you are actually in that directory, even though the jail is stopped&lt;br /&gt;
&lt;br /&gt;
c) there are still dev, null_mount or linprocfs mount points mounted inside that directory.&lt;br /&gt;
&lt;br /&gt;
d) when trying to umount null_mounts whose paths are really long, you may get an error like “No such file or directory”; it’s an OS bug where the dir name is truncated. No known fix&lt;br /&gt;
&lt;br /&gt;
e) there are still files open somewhere inside the dir. Use &amp;lt;tt&amp;gt;fstat | grep &amp;lt;cid&amp;gt;&amp;lt;/tt&amp;gt; to find the process that has files open&lt;br /&gt;
&lt;br /&gt;
f) Starting with 6.x, the jail mechanism does a poor job of keeping track of processes running in a jail, and if it thinks there are still procs running, it will refuse to umount the disk. If this is happening you should see a low number in the #REF column when you run jls. In this case you &#039;&#039;can&#039;&#039; safely &amp;lt;tt&amp;gt;umount -f&amp;lt;/tt&amp;gt; the mount. &lt;br /&gt;
&lt;br /&gt;
Please note: if you forcibly unmount a (4.x) filesystem that has null_mounts&lt;br /&gt;
still mounted in it, the system &#039;&#039;&#039;will crash&#039;&#039;&#039; within 10-15 mins.&lt;br /&gt;
&lt;br /&gt;
== Misc jail Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space on jail2, jail8, jail9, a couple jails on jail17, jail4, jail12 and jail18.&lt;br /&gt;
Even though the vn file shows 4G size, it doesn’t actually occupy that amount of space on the disk. So be careful not to fill up drives where we’re overselling – use oversellcheck to confirm you’re not oversold by more than 10G.&lt;br /&gt;
There are other truncated jails; they are generally noted in the file /root/truncated on the root system&lt;br /&gt;
&lt;br /&gt;
The act of moving a truncated vn to another system undoes the truncation: the truncated vn is filled with 0’s and occupies the full physical disk space for which it’s configured. So, you should use dumpremote to preserve the truncation.&lt;br /&gt;
&lt;br /&gt;
* if you are getting disk full messages for a BSD customer, it&#039;s fairly safe to clear out their /usr/ports/distfiles dir&lt;br /&gt;
* 4.x: ps and top can only be run by root in these jails.  Done on purpose:&lt;br /&gt;
As for `ps` and `top` - non-root users can run them, just not successfully because we have locked the permissions on /dev/mem and /dev/kmem to be root-readable only.  That is why non-root users cannot successfully run `ps` and `top`.&lt;br /&gt;
* user quotas do not work on freebsd jails - you cannot set up quotas at all, and that&#039;s that.&lt;br /&gt;
* You cannot inject a process into a 4.x jail, only HUP running processes. 6.x and onward you can with jexec&lt;br /&gt;
* jails see base machine’s uptime/load when running top/w&lt;br /&gt;
* if someone is unable to get in (can’t ping, etc), see if they were blocked by castle (DoS), and see if their IP is on the system (after a reboot it can be lost if it wasn’t in rc.conf). preboot should catch that.&lt;br /&gt;
* in FreeBSD you can’t su to root unless you belong to the wheel group – so if you remove your acct and set up a new one, we have to add it to the wheel group (add to /etc/group)&lt;br /&gt;
* dmesg from the underlying system is visible in a customer’s dmesg inside the jail&lt;br /&gt;
* a popper process going crazy @ 40% for 10 min usually means someone who leaves mail on the server&lt;br /&gt;
* Don’t force umounts on 4.x jails – it crashes the machine; generally ok on newer machines (running md)&lt;br /&gt;
* Good book for admin http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?userid=t824VyRAYz&amp;amp;isbn=0596005164&amp;amp;itm=2&lt;br /&gt;
* Self-sign ssl cert http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#selfcert&lt;br /&gt;
* conversation with Glenn about semaphores and pgsql&lt;br /&gt;
&amp;lt;pre&amp;gt;SDBoody: hey, these are valid amounts/figures/increments right:&lt;br /&gt;
 kern.ipc.semmni=1280&lt;br /&gt;
 kern.ipc.semmns=1280&lt;br /&gt;
 &lt;br /&gt;
gr8feen: probably... I always forget exactly what those are, so I usually have to look them up&lt;br /&gt;
SDBoody: semaphores&lt;br /&gt;
SDBoody: i took the current 1024 and added 256 to them&lt;br /&gt;
SDBoody: need more for pgsql&lt;br /&gt;
gr8feen: I meant the mni and mns parts...&lt;br /&gt;
gr8feen: some of those are not ones you just want to add to...   hang on a sec and I&#039;ll look them up..&lt;br /&gt;
gr8feen: what&#039;s semmsl set to?&lt;br /&gt;
SDBoody: kern.ipc.semmsl: 1024&lt;br /&gt;
 &lt;br /&gt;
SDBoody: kern.ipc.msgseg: 2048 &lt;br /&gt;
 kern.ipc.msgssz: 8 &lt;br /&gt;
 kern.ipc.msgtql: 40 &lt;br /&gt;
 kern.ipc.msgmnb: 2048 &lt;br /&gt;
 kern.ipc.msgmni: 40 &lt;br /&gt;
 kern.ipc.msgmax: 16384 &lt;br /&gt;
 kern.ipc.semaem: 16384 &lt;br /&gt;
 kern.ipc.semvmx: 65534 &lt;br /&gt;
 kern.ipc.semusz: 152 &lt;br /&gt;
 kern.ipc.semume: 10 &lt;br /&gt;
 kern.ipc.semopm: 100 &lt;br /&gt;
 kern.ipc.semmsl: 1024 &lt;br /&gt;
 kern.ipc.semmnu: 512 &lt;br /&gt;
 kern.ipc.semmns: 1024 &lt;br /&gt;
 kern.ipc.semmni: 1024 &lt;br /&gt;
 kern.ipc.semmap: 768 &lt;br /&gt;
 kern.ipc.shm_allow_removed: 0 &lt;br /&gt;
 kern.ipc.shm_use_phys: 1 &lt;br /&gt;
 kern.ipc.shmall: 262144 &lt;br /&gt;
 kern.ipc.shmseg: 256 &lt;br /&gt;
 kern.ipc.shmmni: 784 &lt;br /&gt;
 kern.ipc.shmmin: 1 &lt;br /&gt;
 kern.ipc.shmmax: 536870912 &lt;br /&gt;
 kern.ipc.maxsockets: 25600 &lt;br /&gt;
 &lt;br /&gt;
gr8feen: ok...msl is max per id, mni is max ids, mns is max number of semaphores... so you probably want something like mns = mni * msl&lt;br /&gt;
gr8feen: which one did you run out of?&lt;br /&gt;
SDBoody: not sure how to tell- ipcs shows the sems in use add up to 1024&lt;br /&gt;
SDBoody: there are 59 entries&lt;br /&gt;
gr8feen: I&#039;m assuming you tried to start postgres and it failed?&lt;br /&gt;
SDBoody: yes&lt;br /&gt;
gr8feen: it should have logged why, somewhere..&lt;br /&gt;
gr8feen: if I recall, it&#039;ll tell you which one it ran out of&lt;br /&gt;
SDBoody: &amp;gt; DETAIL:  Failed system call was semget(1, 17, 03600).&lt;br /&gt;
 &lt;br /&gt;
gr8feen: so it wanted an id with 17 semaphores...I&#039;d start by making mns = 17*mni and leave mni and mnl set to what they are now and see what it does&lt;br /&gt;
SDBoody: i think mni is plenty high&lt;br /&gt;
SDBoody: ok, more reasonable 17408&lt;br /&gt;
gr8feen: yeah...just change that one and see how it goes..&lt;br /&gt;
SDBoody: and leave mni alone at 1024?&lt;br /&gt;
gr8feen: yeah...mni id the max number of ids...but if it&#039;s trying to get something like 17 per id, your going to hit mns before you hit anything else&lt;br /&gt;
SDBoody: right, but doesn&#039;t hurt to have it that high (assuming)&lt;br /&gt;
&lt;br /&gt;
gr8feen: not really...  I think those get allocated out of ram that you cant page out, but it&#039;s still such a small amount that it really doesn&#039;t matter&lt;br /&gt;
SDBoody: looks like that worked, thx!&lt;br /&gt;
gr8feen: cool&lt;br /&gt;
&lt;br /&gt;
see semaphores: ipcs -a -s&amp;lt;/pre&amp;gt;&lt;br /&gt;
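The sizing arithmetic from the conversation above: postgres asked for ids of 17 semaphores (the &amp;lt;tt&amp;gt;semget(1, 17, 03600)&amp;lt;/tt&amp;gt; call), so semmns was sized as 17 * semmni:&lt;br /&gt;

```shell
# semmns = semaphores-per-id * semmni; matches the 17408 chosen in the chat.
semmni=1024
per_id=17
echo $((per_id * semmni))
```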
&lt;br /&gt;
= FreeBSD VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
These files are located in /usr/local/jail/rc.d and /usr/local/jail/bin&lt;br /&gt;
&lt;br /&gt;
== jailmake ==&lt;br /&gt;
&lt;br /&gt;
Applies to 7.x+ &lt;br /&gt;
On older systems syntax differs, run jailmake once to see.&lt;br /&gt;
&lt;br /&gt;
Note: this procedure differs on mx2 which is 7.x but still uses gvinum&lt;br /&gt;
&lt;br /&gt;
#	run js to figure out which md’s are in use, which disk has enough space, IP to put it on&lt;br /&gt;
#	use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
#	copy over dir, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;Usage: jailmake IP[,IP] CID disk[1|2|3] md# hostname shorthost ipfw# email [size in GB]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex: &lt;br /&gt;
&lt;br /&gt;
 Jail2# jailmake 69.55.234.66 col01334 3 97 vps.bsd.it vps 1334 fb@bsd.it&lt;br /&gt;
&lt;br /&gt;
== jailps ==&lt;br /&gt;
 jailps [hostname]&lt;br /&gt;
DEPRECATED FOR jps: displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname of the jail you wish to query. If you don’t &lt;br /&gt;
supply an argument, all processes on the machine are listed and grouped by jail. &lt;br /&gt;
&lt;br /&gt;
== jps ==&lt;br /&gt;
 jps [hostname]&lt;br /&gt;
displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname or ID of the jail you wish to query. &lt;br /&gt;
&lt;br /&gt;
== jailkill ==&lt;br /&gt;
 jailkill &amp;lt;hostname&amp;gt;&lt;br /&gt;
stops all processes running in a jail.&lt;br /&gt;
&lt;br /&gt;
You can also run:&lt;br /&gt;
 jailkill &amp;lt;JID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== problems ===&lt;br /&gt;
Occasionally you will hit an issue where a jail will not kill off:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailkill www.domain.com&lt;br /&gt;
www.domain.com .. killed: none&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
Because no processes are running under that hostname.  You cannot use jailps.pl either:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailps www.domain.com&lt;br /&gt;
www.domain.com doesn’t exist on this server&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reasons for this are usually:&lt;br /&gt;
* the jail is no longer running&lt;br /&gt;
&lt;br /&gt;
* the jail&#039;s hostname has changed&lt;br /&gt;
In this case, &lt;br /&gt;
&lt;br /&gt;
&amp;gt;=6.x: run a &amp;lt;tt&amp;gt;jls|grep &amp;lt;jail&#039;s IP&amp;gt;&amp;lt;/tt&amp;gt; to find the correct hostname, then update the quad file, then kill the jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;6.x: the first step is to cat their /etc/rc.conf file to see if you can tell what they set the new hostname to.  This very often works.  For example:&lt;br /&gt;
&lt;br /&gt;
 cat /mnt/data2/198.78.65.136-col00261-DIR/etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
But maybe they set the hostname with the hostname command, and the original hostname is still in /etc/rc.conf.&lt;br /&gt;
&lt;br /&gt;
The welcome email clearly states that they should tell us if they change their hostname, so there is no problem in just emailing them and asking them what they set the new hostname to.&lt;br /&gt;
&lt;br /&gt;
Once you know the new hostname OR if a customer simply emails to inform you that they have set the hostname to something different, you need to edit the quad and safe files that their system is in to input the new hostname.&lt;br /&gt;
&lt;br /&gt;
However, if push comes to shove and you cannot find out the hostname from them or from their system, then you need to start doing some detective work.&lt;br /&gt;
&lt;br /&gt;
The easiest thing to do is run jailps looking for a hostname similar to their original hostname. Or you could get into the /bin/sh shell by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then looking at every hostname of every process:&lt;br /&gt;
&lt;br /&gt;
 for f in `ls /proc` ; do cat /proc/$f/status ; done&lt;br /&gt;
&lt;br /&gt;
and scanning for a hostname that is either similar to their original hostname, or that you don&#039;t see in any of the quad safe files.&lt;br /&gt;
&lt;br /&gt;
This is very brute force though, and it is possible that catting every file in /proc is dangerous - I don&#039;t recommend it.  A better thing would be to identify any processes that you know belong to this system – perhaps the reason you are trying to find this system is because they are running something bad - and just catting the status from only that PID.&lt;br /&gt;
&lt;br /&gt;
Somewhere there’s a jail machine where there may be 2 systems named www. Look at each /etc/rc.conf and make sure they’re both really www. If they are, jailkill www, then jailps www to make sure it’s not running. Then immediately restart the other one under its fqdn (as found from a reverse nslookup)&lt;br /&gt;
&lt;br /&gt;
* on &amp;gt;=6.x the hostname may not yet be hashed:&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9 /# jls&lt;br /&gt;
 JID Hostname                    Path                                  IP Address(es)&lt;br /&gt;
   1 bitnet.dgate.org            /mnt/data1/69.55.232.50-col02094-DIR  69.55.232.50&lt;br /&gt;
   2 ns3.hctc.net                /mnt/data1/69.55.234.52-col01925-DIR  69.55.234.52&lt;br /&gt;
   3 bsd1                        /mnt/data1/69.55.232.44-col00155-DIR  69.55.232.44&lt;br /&gt;
   4 let2.bbag.org               /mnt/data1/69.55.230.92-col00202-DIR  69.55.230.92&lt;br /&gt;
   5 post.org                    /mnt/data2/69.55.232.51-col02095-DIR  69.55.232.51 ...&lt;br /&gt;
   6 ns2                         /mnt/data1/69.55.232.47-col01506-DIR  69.55.232.47 ...&lt;br /&gt;
   7 arlen.server.net            /mnt/data1/69.55.232.52-col01171-DIR  69.55.232.52&lt;br /&gt;
   8 deskfood.com                /mnt/data1/69.55.232.71-col00419-DIR  69.55.232.71&lt;br /&gt;
   9 mirage.confluentforms.com   /mnt/data1/69.55.232.54-col02105-DIR  69.55.232.54 ...&lt;br /&gt;
  10 beachmember.com             /mnt/data1/69.55.232.59-col02107-DIR  69.55.232.59&lt;br /&gt;
  11 www.agottem.com             /mnt/data1/69.55.232.60-col02109-DIR  69.55.232.60&lt;br /&gt;
  12 sdhobbit.myglance.org       /mnt/data1/69.55.236.82-col01708-DIR  69.55.236.82&lt;br /&gt;
  13 ns1.jnielsen.net            /mnt/data1/69.55.234.48-col00204-DIR  69.55.234.48 ...&lt;br /&gt;
  14 ymt.rollingegg.net          /mnt/data2/69.55.236.71-col01678-DIR  69.55.236.71&lt;br /&gt;
  15 verse.unixlore.net          /mnt/data1/69.55.232.58-col02131-DIR  69.55.232.58&lt;br /&gt;
  16 smcc-mail.org               /mnt/data2/69.55.232.68-col02144-DIR  69.55.232.68&lt;br /&gt;
  17 kasoutsuki.w4jdh.net        /mnt/data2/69.55.232.46-col02147-DIR  69.55.232.46&lt;br /&gt;
  18 dili.thium.net              /mnt/data2/69.55.232.80-col01901-DIR  69.55.232.80&lt;br /&gt;
  20 www.tekmarsis.com           /mnt/data2/69.55.232.66-col02155-DIR  69.55.232.66&lt;br /&gt;
  21 vps.yoxel.net               /mnt/data2/69.55.236.67-col01673-DIR  69.55.236.67&lt;br /&gt;
  22 smitty.twitalertz.com       /mnt/data2/69.55.232.84-col02153-DIR  69.55.232.84&lt;br /&gt;
  23 deliver4.klatha.com         /mnt/data2/69.55.232.67-col02160-DIR  69.55.232.67&lt;br /&gt;
  24 nideffer.com                /mnt/data2/69.55.232.65-col00412-DIR  69.55.232.65&lt;br /&gt;
  25 usa.hanyuan.com             /mnt/data2/69.55.232.57-col02163-DIR  69.55.232.57&lt;br /&gt;
  26 daifuku.ppbh.com            /mnt/data2/69.55.236.91-col01720-DIR  69.55.236.91&lt;br /&gt;
  27 collins.greencape.net       /mnt/data2/69.55.232.83-col01294-DIR  69.55.232.83&lt;br /&gt;
  28 ragebox.com                 /mnt/data2/69.55.230.104-col01278-DIR 69.55.230.104&lt;br /&gt;
  29 outside.mt.net              /mnt/data2/69.55.232.72-col02166-DIR  69.55.232.72&lt;br /&gt;
  30 vps.payneful.ca             /mnt/data2/69.55.234.98-col01999-DIR  69.55.234.98&lt;br /&gt;
  31 higgins                     /mnt/data2/69.55.232.87-col02165-DIR  69.55.232.87 ...&lt;br /&gt;
  32 ozymandius                  /mnt/data2/69.55.228.96-col01233-DIR  69.55.228.96&lt;br /&gt;
  33 trusted.realtors.org        /mnt/data2/69.55.238.72-col02170-DIR  69.55.238.72&lt;br /&gt;
  34 jc1.flanderous.com          /mnt/data2/69.55.239.22-col01504-DIR  69.55.239.22&lt;br /&gt;
  36 guppylog.com                /mnt/data2/69.55.238.73-col00036-DIR  69.55.238.73&lt;br /&gt;
  40 haliohost.com               /mnt/data2/69.55.234.41-col01916-DIR  69.55.234.41 ...&lt;br /&gt;
  41 satyr.jorge.cc              /mnt/data1/69.55.232.70-col01963-DIR  69.55.232.70&lt;br /&gt;
jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
ERROR: jail_: jail &amp;quot;satyr,jorge,cc&amp;quot; not found&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note how it&#039;s saying &amp;lt;tt&amp;gt;satyr,jorge,cc&amp;lt;/tt&amp;gt; is not found, and not &amp;lt;tt&amp;gt;satyr.jorge.cc&amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The jail subsystem tracks jails by comma-delimited hostnames. That mapping is rebuilt every few hours by cron:&lt;br /&gt;
&lt;br /&gt;
 jail9 /# crontab -l&lt;br /&gt;
 0 0,6,12,18 * * * /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
So if we run this manually:&lt;br /&gt;
 jail9 /# /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
Then kill the jail:&lt;br /&gt;
 jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
 successfully killed: satyr,jorge,cc&lt;br /&gt;
&lt;br /&gt;
It worked.&lt;br /&gt;
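The name mangling is just dots turned into commas (hence &amp;lt;tt&amp;gt;satyr,jorge,cc&amp;lt;/tt&amp;gt; above); tr reproduces the mapping:&lt;br /&gt;

```shell
# The jail bookkeeping stores hostnames with '.' replaced by ','.
echo 'satyr.jorge.cc' | tr '.' ','
```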
&lt;br /&gt;
&lt;br /&gt;
If you ever see this when trying to kill a jail:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# jailkill e-scribe.com&lt;br /&gt;
killing JID: 6 hostname: e-scribe.com&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailkill|jailkill]]&amp;lt;/tt&amp;gt; probably got lost trying to kill off the jail. Just ctrl-c the jailkill process, then run a jailps on the hostname, and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; any process which is still running. Keep running jailps and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; till all processes are gone.&lt;br /&gt;
&lt;br /&gt;
== jailpsall ==&lt;br /&gt;
 jailpsall&lt;br /&gt;
will run a jailps on all jails configured in the quad files (this is different from&lt;br /&gt;
jailps with no arguments as it won’t help you find a “hidden” system)&lt;br /&gt;
&lt;br /&gt;
== jailpsw ==&lt;br /&gt;
 jailpsw&lt;br /&gt;
will run a jailps with an extra -w to provide wider output&lt;br /&gt;
&lt;br /&gt;
== jt (&amp;gt;=7.x) ==&lt;br /&gt;
 jt&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;gt;=7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
a wrapper for top displaying processes on the server and which jail owns them. Constantly updates, like top. &lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;lt;7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== stopjail ==&lt;br /&gt;
 stopjail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will jailkill, umount and vnconfig -u a jail. If passed an optional 2nd&lt;br /&gt;
argument, it will not exit before umounting and un-vnconfig’ing in the event&lt;br /&gt;
jailkill returns no processes killed. This is useful if you just want to umount&lt;br /&gt;
and vnconfig -u a jail you’ve already killed. It is intelligent in that it won’t &lt;br /&gt;
try to umount or vnconfig -u if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== startjail ==&lt;br /&gt;
 startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
this will vnconfig, mount (including linprocfs and null-mounts), and start a jail.&lt;br /&gt;
Essentially, it reads the jail’s relevant block from the right quad file and executes it.&lt;br /&gt;
It is intelligent in that it won’t try to mount or vnconfig if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== jpid ==&lt;br /&gt;
 jpid &amp;lt;pid&amp;gt;&lt;br /&gt;
displays information about a process – including which jail owns it.&lt;br /&gt;
It’s the equivalent of running cat /proc/&amp;lt;pid&amp;gt;/status&lt;br /&gt;
&lt;br /&gt;
== canceljail ==&lt;br /&gt;
 canceljail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will stop a jail (the equivalent of stopjail), check for backups (offer to remove them &lt;br /&gt;
from the backup server and the backup.config), rename the vnfile, remove the dir, and &lt;br /&gt;
edit quad/safe. If passed an optional 2nd argument, it will not exit upon failing to kill&lt;br /&gt;
any processes owned by the jail. This is useful if you just want to cancel a jail which &lt;br /&gt;
is already stopped.&lt;br /&gt;
&lt;br /&gt;
== jls ==&lt;br /&gt;
 jls [-v]&lt;br /&gt;
Lists all jails running:&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF IP Address      Hostname                     Path&lt;br /&gt;
 101  135 69.55.224.148   mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#REF is the number of references or procs(?) running&lt;br /&gt;
&lt;br /&gt;
Running with -v will give you all IPs assigned to each jail (7.2 up)&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF Hostname                     Path                                  IP Address(es)&lt;br /&gt;
 101  139 mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR 69.55.224.148 69.55.234.85&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== startalljails ==&lt;br /&gt;
 startalljails&lt;br /&gt;
7.2+ only. This will parse through quad1 and start all jails. It utilizes lockfiles so it won’t try to start a jail more than once; therefore multiple instances can run in parallel without fear of starting a jail twice. If a jail startup gets stuck, you can ^C without fear of killing the script. IMPORTANT: before running startalljails, make sure you have run preboot once, as preboot clears out all the lockfiles and enables startalljails to work properly.&lt;br /&gt;
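The lockfile trick can be sketched in a few lines of sh (a hypothetical illustration of the idea, not the actual script; names and paths are made up). mkdir is atomic, so two concurrent instances can never both claim the same jail:&lt;br /&gt;

```shell
# Illustrative sketch of startalljails-style locking (names are made up).
# mkdir is atomic: only one concurrent instance can create a given lock
# directory, so each jail is claimed, and started, at most once.
LOCKDIR=/tmp/startlocks.$$
mkdir -p "$LOCKDIR"

start_one() {
    jail=$1
    if mkdir "$LOCKDIR/$jail" 2>/dev/null; then
        echo "starting $jail"      # stand-in for the real startjail call
    else
        echo "skipping $jail (already claimed)"
    fi
}

start_one mail.pc9.org
start_one mail.pc9.org             # second claim is refused by the lock
```
This is also why preboot must clear the lock directory first: stale locks from a previous run would make every jail look already claimed.&lt;br /&gt;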
&lt;br /&gt;
== aaccheck.sh ==&lt;br /&gt;
 aaccheck.sh&lt;br /&gt;
displays the output of container list and task list from aaccli&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update jail scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== buildsafe ==&lt;br /&gt;
 buildsafe&lt;br /&gt;
creates safe files based on quads (automatically removing the fsck’s). This will destructively overwrite safe files&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be setup as a cronjob to watch processes on a jail when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== checkprio.pl ==&lt;br /&gt;
 checkprio.pl&lt;br /&gt;
will look for any process (other than the current shell’s csh, sh, sshd procs) with a non-normal priority and normalize it&lt;br /&gt;
&lt;br /&gt;
== diskusagemon == &lt;br /&gt;
 diskusagemon &amp;lt;mount point&amp;gt; &amp;lt;1k blocks&amp;gt;&lt;br /&gt;
watches a mount point’s disk use, when it reaches the level specified in the 2nd argument,&lt;br /&gt;
it exits. This is useful when doing a restore and you want to be paged as it’s nearing completion.&lt;br /&gt;
Best used as: &amp;lt;tt&amp;gt;diskusagemon /asd/asd 1234; pagexxx&amp;lt;/tt&amp;gt;&lt;br /&gt;
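The core of such a watcher might look like this (an assumed sketch, not the real script; it presumes standard df output with the used-blocks figure in column 3):&lt;br /&gt;

```shell
# Assumed sketch of a diskusagemon-style loop: poll `df -k` until the
# mount point's used 1K-blocks reach the threshold, then fall through.
used_blocks() {
    # print the "Used" column (1K blocks) for the given mount point
    df -k "$1" | awk 'NR==2 {print $3}'
}

watch_usage() {
    mount=$1; limit=$2
    while [ "$(used_blocks "$mount")" -lt "$limit" ]; do
        sleep 60
    done
    echo "$mount reached $limit 1K blocks"
}
```
So something like &amp;lt;tt&amp;gt;watch_usage /mnt/data1 1234; pagexxx&amp;lt;/tt&amp;gt; would mirror the usage shown above.&lt;br /&gt;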
&lt;br /&gt;
== dumprestore ==&lt;br /&gt;
 dumprestore &amp;lt;dumpfile&amp;gt;&lt;br /&gt;
this is a perl expect script which automatically enters ‘1’ and ‘y’. It seems to cause restore to fail&lt;br /&gt;
to set owner permissions on large restores.&lt;br /&gt;
&lt;br /&gt;
== g ==&lt;br /&gt;
 g &amp;lt;search&amp;gt;&lt;br /&gt;
greps the quad/safe files for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gather.pl ==&lt;br /&gt;
 gather.pl&lt;br /&gt;
gathers up data about jails configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== ipfwbackup ==&lt;br /&gt;
 ipfwbackup&lt;br /&gt;
writes ipfw traffic count data to a logfile&lt;br /&gt;
&lt;br /&gt;
== ipfwreset ==&lt;br /&gt;
 ipfwreset&lt;br /&gt;
writes ipfw traffic count data to a logfile and resets counters to 0&lt;br /&gt;
&lt;br /&gt;
== js ==&lt;br /&gt;
 js&lt;br /&gt;
output varies by OS version, but generally provides information about the base jail:&lt;br /&gt;
- which vn’s are in use&lt;br /&gt;
- disk usage&lt;br /&gt;
- info about the contents of quads&lt;br /&gt;
- the # of inodes represented by the jails contained in the group (133.2 in the example below), how many jails per data mount, and subtotals&lt;br /&gt;
- IPs bound to the base machine but not in use by a jail&lt;br /&gt;
- free gvinum volumes, unused vn’s, or used md’s&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/local/jail/rc.d/quad1:&lt;br /&gt;
        /mnt/data1 133.2 (1)&lt;br /&gt;
        /mnt/data2 1040.5 (7)&lt;br /&gt;
        total 1173.7 (8)&lt;br /&gt;
/usr/local/jail/rc.d/quad2:&lt;br /&gt;
        /mnt/data1 983.4 (6)&lt;br /&gt;
        total 983.4 (6)&lt;br /&gt;
/usr/local/jail/rc.d/quad3:&lt;br /&gt;
        /mnt/data1 693.4 (4)&lt;br /&gt;
        /mnt/data2 371.6 (3)&lt;br /&gt;
        total 1065 (7)&lt;br /&gt;
/usr/local/jail/rc.d/quad4:&lt;br /&gt;
        /mnt/data1 466.6 (3)&lt;br /&gt;
        /mnt/data2 882.2 (5)&lt;br /&gt;
        total 1348.8 (8)&lt;br /&gt;
/mnt/data1: 2276.6 (14)&lt;br /&gt;
/mnt/data2: 2294.3 (15)&lt;br /&gt;
&lt;br /&gt;
Available IPs:&lt;br /&gt;
69.55.230.11 69.55.230.13 69.55.228.200&lt;br /&gt;
&lt;br /&gt;
Available volumes:&lt;br /&gt;
v78 /mnt/data2 2G&lt;br /&gt;
v79 /mnt/data2 2G&lt;br /&gt;
v80 /mnt/data2 2G&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to the load MRTG; executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== makevirginjail ==&lt;br /&gt;
 makevirginjail&lt;br /&gt;
Only on some systems, makes an empty jail (doesn&#039;t do restore step)&lt;br /&gt;
&lt;br /&gt;
== mb == &lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== notify.sh ==&lt;br /&gt;
 notify.sh&lt;br /&gt;
emails reboot@johncompanies.com – intended to be called at boot time to alert us to a machine which panics and reboots and isn’t caught by bb or castle.&lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatch ==&lt;br /&gt;
 orphanedbackupwatch&lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to delete them&lt;br /&gt;
&lt;br /&gt;
== postboot ==&lt;br /&gt;
 postboot&lt;br /&gt;
to be run after a machine reboot and quad/safe’s are done executing. It will:&lt;br /&gt;
* do chmod 666 on each jail’s /dev/null&lt;br /&gt;
* add ipfw counts&lt;br /&gt;
* run jailpsall (so you can see if a configured jail isn’t running)&lt;br /&gt;
&lt;br /&gt;
== preboot ==&lt;br /&gt;
 preboot&lt;br /&gt;
to be run before running quad/safe – checks for misconfigurations: &lt;br /&gt;
* a jail configured in a quad but not a safe&lt;br /&gt;
* a jail is listed more than once in a quad&lt;br /&gt;
* the ip assigned to a jail isn’t configured on the machine&lt;br /&gt;
* alias numbering skips in the rc.conf (resulting in the above)&lt;br /&gt;
* orphaned vnfiles that aren&#039;t mentioned in a quad/safe&lt;br /&gt;
* ip mismatches between dir/vnfile name and the jail’s ip&lt;br /&gt;
* dirs/vnfiles in quad/safe that don’t exist &lt;br /&gt;
&lt;br /&gt;
== quadanalyze.pl ==&lt;br /&gt;
 quadanalyze.pl&lt;br /&gt;
called by js, produces the info (seen above with js explanation) about the contents of quad (inode count, # of jails, etc.)&lt;br /&gt;
&lt;br /&gt;
== rsync.backup ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== taskdone ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was executed as the subject&lt;br /&gt;
&lt;br /&gt;
== topten ==&lt;br /&gt;
 topten&lt;br /&gt;
summarizes the top 10 traffic users (called by ipfwreset)&lt;br /&gt;
&lt;br /&gt;
== trafficgather.pl ==&lt;br /&gt;
 trafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by jail to support@johncompanies.com and payments@johncompanies.com. Optional arguments are year and month (must be in the past). If not passed, assumes last month. Relies on traffic logs created by ipfwreset and ipfwbackup&lt;br /&gt;
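The "last month" default can be computed in portable sh along these lines (a sketch of what the script presumably does, not its actual code):&lt;br /&gt;

```shell
# Sketch of the "last month" default in portable sh: step the numeric
# month back by one, borrowing from the year across the January boundary.
y=$(date +%Y)
m=$(date +%m); m=${m#0}        # strip leading zero to avoid octal parsing
m=$((m - 1))
if [ "$m" -eq 0 ]; then m=12; y=$((y - 1)); fi
last_month=$(printf '%02d-%02d' $((y % 100)) "$m")
echo "$last_month"
```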
&lt;br /&gt;
== trafficwatch.pl ==&lt;br /&gt;
 trafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a jail reaches the warning level (35G) and the limit (40G). We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== trafstats ==&lt;br /&gt;
 trafstats&lt;br /&gt;
writes ipfw traffic usage info by jail to a file called jc_traffic_dump in each jail’s / dir&lt;br /&gt;
&lt;br /&gt;
== truncate_jailmake ==&lt;br /&gt;
 truncate_jailmake&lt;br /&gt;
a version of jailmake which creates truncated vnfiles.&lt;br /&gt;
&lt;br /&gt;
== vb ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/bin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs (freebsd) ==&lt;br /&gt;
 vs&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/safe&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 vq&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/quad&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dumpremote ==&lt;br /&gt;
 dumpremote &amp;lt;user@machine&amp;gt; &amp;lt;/remote/location/file-dump&amp;gt; &amp;lt;vnX&amp;gt;&lt;br /&gt;
ex: dumpremote user@10.1.4.117 /mnt/data3/remote.echoditto.com-dump 7&lt;br /&gt;
this will dump a vn filesystem to a remote machine and location&lt;br /&gt;
&lt;br /&gt;
== oversellcheck ==&lt;br /&gt;
 oversellcheck&lt;br /&gt;
displays how much a disk is oversold or undersold taking into account truncated vn files. Only for use on 4.x systems&lt;br /&gt;
&lt;br /&gt;
== mvbackups (freebsd) ==&lt;br /&gt;
 mvbackups &amp;lt;dir&amp;gt; (1.1.1.1-col00001-DIR) &amp;lt;target_machine&amp;gt; (jail1) &amp;lt;target_dir&amp;gt; (data1)&lt;br /&gt;
moves backups from one location to another on the backup server, gives you the option to remove entries from the current backup.config, and prints a simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== jailnice ==&lt;br /&gt;
 jailnice &amp;lt;hostname&amp;gt;&lt;br /&gt;
applies &amp;lt;tt&amp;gt;renice 19 [PID]&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;rtprio 31 -[PID]&amp;lt;/tt&amp;gt; to each process in the given jail&lt;br /&gt;
&lt;br /&gt;
== dumpremoterestore ==&lt;br /&gt;
 dumpremoterestore &amp;lt;device&amp;gt; &amp;lt;ip of target machine&amp;gt; &amp;lt;dir on target machine&amp;gt;&lt;br /&gt;
ex: dumpremoterestore /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&lt;br /&gt;
dumps a device and restores it to a directory on a remote machine. Requires that you enable root ssh on the &lt;br /&gt;
remote machine.&lt;br /&gt;
&lt;br /&gt;
== psj ==&lt;br /&gt;
 psj&lt;br /&gt;
shows just the procs running on the base system – a ps auxw but without jail’d procs present&lt;br /&gt;
&lt;br /&gt;
== perc5iraidchk ==&lt;br /&gt;
 perc5iraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2950 systems with Perc5/6 controllers&lt;br /&gt;
&lt;br /&gt;
== perc4eraidchk ==&lt;br /&gt;
 perc4eraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2850 systems with Perc4e/Di controllers&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS =&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vm) ==&lt;br /&gt;
&lt;br /&gt;
This applies only to new virts &amp;gt;= 4.x&lt;br /&gt;
&lt;br /&gt;
grab ip from ipmap (if opened from the pending cust screen it should take you to the right block). You can also run vzlist -a to see what block is in use, generally. Try to find an IP that&#039;s in the same block of class C IP&#039;s already on the box.&lt;br /&gt;
&lt;br /&gt;
1. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
  also, ping the IP address.&lt;br /&gt;
&lt;br /&gt;
2. put CT on whichever partition has more space&lt;br /&gt;
  df -h&lt;br /&gt;
&lt;br /&gt;
3.  vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; ) &lt;br /&gt;
 vm col00009 69.55.230.238 centos.testdave.com /vz1 dsmith@johncompanies.com centos-6-x86_64 LM&lt;br /&gt;
&lt;br /&gt;
4. copy veid, dir, ip and password to pending customer screen. activate customer&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vemakexxx) ==&lt;br /&gt;
&lt;br /&gt;
This applies to older virts with old templates. This should probably not be used at all anymore.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. look thru hist for ip&lt;br /&gt;
&lt;br /&gt;
2. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
&lt;br /&gt;
3. put ve on whichever partition has more space&lt;br /&gt;
 vemakerh9 &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; &amp;lt;hostname&amp;gt; &amp;lt;mount&amp;gt; &amp;lt;email&amp;gt; [gb disk]; &amp;lt;256|384|512&amp;gt; &amp;lt;veid&amp;gt;&lt;br /&gt;
 vemakerh9 866 69.55.226.109 ngentu.com /vz1 ayo@ngantu.com,asd@asd.com 5; 256 866&lt;br /&gt;
&lt;br /&gt;
4. copy (veid), dir, and ip to pending customer screen (pass set to p455agfa)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: We use VEID (Virtual Environment ID) and CTID (Container ID) interchangeably. Similarly, VE and CT. They mean the same thing.&lt;br /&gt;
VZPP = VirtuoZzo Power Panel (the control panel for each CT)&lt;br /&gt;
&lt;br /&gt;
All linux systems exist in /vz, /vz1 or /vz2 - since each linux machine holds roughly 60-90 customers, there will be roughly 30-45 in each partition.&lt;br /&gt;
&lt;br /&gt;
The actual filesystem of the system in question is in:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/private/(VEID)&lt;br /&gt;
&lt;br /&gt;
Where VEID is the identifier for that system - an all-numeric string larger than 100.&lt;br /&gt;
&lt;br /&gt;
The actual mounted and running systems are in the corresponding:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/root/(VEID)&lt;br /&gt;
&lt;br /&gt;
But we rarely interact with any system from this mount point.&lt;br /&gt;
&lt;br /&gt;
You should never need to touch the root portion of their system – however you can traverse their filesystem by going to &amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/root&amp;lt;/tt&amp;gt; (&amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/fs/root&amp;lt;/tt&amp;gt; on 4.x systems). The root of their filesystem is in that directory, and their entire system is underneath it.&lt;br /&gt;
&lt;br /&gt;
Every VE has a startup script in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts&amp;lt;/tt&amp;gt;  (which is symlinked as &amp;lt;tt&amp;gt;/vzconf&amp;lt;/tt&amp;gt; on all systems) - the VE startup script is simply named &amp;lt;tt&amp;gt;(VEID).conf&amp;lt;/tt&amp;gt; - it contains all the system parameters for that VE:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Configuration file generated by vzsplit for 60 VE&lt;br /&gt;
# on HN with total amount of physical mem 2011 Mb&lt;br /&gt;
&lt;br /&gt;
VERSION=&amp;quot;2&amp;quot;&lt;br /&gt;
CLASSID=&amp;quot;2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ONBOOT=&amp;quot;yes&amp;quot;&lt;br /&gt;
&lt;br /&gt;
KMEMSIZE=&amp;quot;8100000:8200000&amp;quot;&lt;br /&gt;
LOCKEDPAGES=&amp;quot;322:322&amp;quot;&lt;br /&gt;
PRIVVMPAGES=&amp;quot;610000:615000&amp;quot;&lt;br /&gt;
SHMPAGES=&amp;quot;33000:34500&amp;quot;&lt;br /&gt;
NUMPROC=&amp;quot;410:415&amp;quot;&lt;br /&gt;
PHYSPAGES=&amp;quot;0:2147483647&amp;quot;&lt;br /&gt;
VMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
OOMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
NUMTCPSOCK=&amp;quot;1210:1215&amp;quot;&lt;br /&gt;
NUMFLOCK=&amp;quot;107:117&amp;quot;&lt;br /&gt;
NUMPTY=&amp;quot;19:19&amp;quot;&lt;br /&gt;
NUMSIGINFO=&amp;quot;274:274&amp;quot;&lt;br /&gt;
TCPSNDBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
TCPRCVBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
OTHERSOCKBUF=&amp;quot;900000:950000&amp;quot;&lt;br /&gt;
DGRAMRCVBUF=&amp;quot;200000:200000&amp;quot;&lt;br /&gt;
NUMOTHERSOCK=&amp;quot;650:660&amp;quot;&lt;br /&gt;
DCACHE=&amp;quot;786432:818029&amp;quot;&lt;br /&gt;
NUMFILE=&amp;quot;7500:7600&amp;quot;&lt;br /&gt;
AVNUMPROC=&amp;quot;51:51&amp;quot;&lt;br /&gt;
IPTENTRIES=&amp;quot;155:155&amp;quot;&lt;br /&gt;
DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
DISKINODES=&amp;quot;400000:420000&amp;quot;&lt;br /&gt;
CPUUNITS=&amp;quot;1412&amp;quot;&lt;br /&gt;
QUOTAUGIDLIMIT=&amp;quot;2000&amp;quot;&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/636&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/636&amp;quot;&lt;br /&gt;
NAMESERVER=&amp;quot;69.55.225.225 69.55.230.3&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;vzredhat-7.3/20030305&amp;quot;&lt;br /&gt;
VE_TYPE=&amp;quot;regular&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.225.229&amp;quot;&lt;br /&gt;
HOSTNAME=&amp;quot;textengine.net&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the hostname is set here, the disk space is set here, the number of inodes, the number of files that can be open, the number of tcp sockets, etc. - all are set here.&lt;br /&gt;
&lt;br /&gt;
In fact, everything that can be set on this customer system is set in this conf file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All interaction with the customer system is done with the VEID.  You start the system by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl start 999&lt;br /&gt;
&lt;br /&gt;
You stop it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl stop 999&lt;br /&gt;
&lt;br /&gt;
You execute commands in it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl exec 999 df -k&lt;br /&gt;
&lt;br /&gt;
You enter into it, via a root-shell backdoor with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and you set parameters for the system, while it is still running, with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; is the most commonly used command - we have aliased &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; since we use it so often. We’ll continue to use &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; in our examples, but feel free to use just &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say the user wants more diskspace.  You can cat their conf file and see:&lt;br /&gt;
&lt;br /&gt;
 DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So right now they have 4gigs of space.  You can then change it to 6 with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
IMPORTANT:  all issuances of the vzctl set command need to end with &amp;lt;tt&amp;gt;--save&amp;lt;/tt&amp;gt; - if they don&#039;t, the setting will be set, but it will not be saved to the conf file, and they will not have those settings next time they boot.&lt;br /&gt;
&lt;br /&gt;
All of the tunables in the conf file can be set with the vzctl set command.  Note that in the conf file, and on the vzctl set command line, we always issue two numbers separated by a colon - that is because we are setting the soft and hard limits.  Always set the hard limit slightly above the soft limit, as you see it is in the conf file for all those settings.&lt;br /&gt;
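As a sanity check on those numbers: the values are 1K blocks, so the DISKSPACE soft limit 4194304 = 4 x 1024 x 1024 is exactly 4 GB. A quick sketch for composing and splitting soft:hard pairs (the ~2% headroom is an illustrative policy mirroring the conf example, not a documented rule):&lt;br /&gt;

```shell
# Compose: GB -> --diskspace soft:hard in 1K blocks, hard ~2% above soft
# (the headroom ratio here is illustrative, not a documented policy).
gb=6
soft=$((gb * 1024 * 1024))
hard=$((soft + soft * 2 / 100))
echo "--diskspace ${soft}:${hard}"   # --diskspace 6291456:6417285

# Split: shell parameter expansion on a conf value (example from above).
val="4194304:4613734"
echo "soft=${val%%:*} hard=${val##*:}"
```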
&lt;br /&gt;
There are also things you can set with &amp;lt;tt&amp;gt;vzctl set&amp;lt;/tt&amp;gt; that are not in the conf file as settings, per se.  For instance, you can add IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --save&lt;br /&gt;
&lt;br /&gt;
or multiple IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --ipadd 10.10.20.30 --save&lt;br /&gt;
&lt;br /&gt;
or change the hostname:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --hostname www.example.com --save&lt;br /&gt;
&lt;br /&gt;
You can even set the nameservers:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --nameserver 198.78.66.4 --nameserver 198.78.70.180 --save&lt;br /&gt;
&lt;br /&gt;
Although you probably will never do that.&lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --disabled yes --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (&amp;lt;=3.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --onboot=no --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from using its control panel:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --offline_management=no --save &lt;br /&gt;
&lt;br /&gt;
You can suspend a VPS, so it can be resumed in the same state it was in when it was stopped (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl suspend 999&lt;br /&gt;
&lt;br /&gt;
and to resume it:&lt;br /&gt;
&lt;br /&gt;
 vzctl resume 999&lt;br /&gt;
&lt;br /&gt;
to see who owns a process:&lt;br /&gt;
 vzpid &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to mount up an unmounted ve:&lt;br /&gt;
 vzctl mount 827&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see network stats for CT&#039;s:&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One thing that sometimes comes up on older systems that we created with smaller defaults is that the system would run out of inodes.  The user will email and say they cannot create any more files or grow any files larger, but they will also say that they are not out of diskspace ... they are running:&lt;br /&gt;
&lt;br /&gt;
 df -k&lt;br /&gt;
&lt;br /&gt;
and seeing how much space is free - and they are not out of space.  They are most likely out of inodes - which they would see by running:&lt;br /&gt;
&lt;br /&gt;
 df -i&lt;br /&gt;
&lt;br /&gt;
So, the first thing you should do is enter their system with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and run:  &amp;lt;tt&amp;gt;df -i&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to confirm your theory.  Then exit their system.  Then simply cat their conf file and see what their inodes are set to (probably 200000:200000, since that was the old default on the older systems) and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskinodes 400000:400000 --save&lt;br /&gt;
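For the triage step, comparing block and inode use side by side makes the diagnosis immediate; a small helper along these lines (assumes standard single-line df output, where column 5 is Use%/IUse%):&lt;br /&gt;

```shell
# Print block vs inode use% for a path: blocks free but inodes at 100%
# means the inode limit is the culprit. Assumes df prints one data line,
# with Use%/IUse% in column 5 (standard Linux df layout).
usage_pct() {
    df "$1" "$2" | awk 'NR==2 {gsub(/%/, "", $5); print $5}'
}
echo "blocks: $(usage_pct -k /)%  inodes: $(usage_pct -i /)%"
```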
&lt;br /&gt;
If they are not out of inodes, then a good possibility is that they have maxed out their numfile configuration variable, which controls how many files they can have in their system.  The current default is 7500 (which nobody has ever hit), but the old default was as low as 2000, so you would run something like:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --numfile 7500:7500 --save&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You cannot start or stop a VE if your pwd is its private (/vz/private/999) or root (/vz/root/999) directories, or anywhere below them.&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (linux) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all ve’s back up as soon as possible. Note the time; you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages output. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log – these will also likely need to be sent to virtuozzo for interpretation. If the messages are spewing too fast, hit ^O + H to start a screen log dump which you can observe after the machine is rebooted. Additionally, if the machine is responsive, you can get a trace to send to virtuozzo by hooking up a kvm and entering these 3 sequences:&lt;br /&gt;
&amp;lt;pre&amp;gt;alt+print screen+m&lt;br /&gt;
alt+print screen+p&lt;br /&gt;
alt+print screen+t&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If there are no messages, the machine may just be really busy- wait a bit (5-10min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. you can’t ssh into the DRAC card and issue &amp;lt;tt&amp;gt;racadm serveraction hardreset&amp;lt;/tt&amp;gt;), then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console (&amp;lt;tt&amp;gt;tip virtxx&amp;lt;/tt&amp;gt;) immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts- look out for the RAID card output- does everything look healthy? The output may be scrambled, look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple times during the boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly then you won’t get any console output, 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
=== Start containers/VE&#039;s/VPSs ===&lt;br /&gt;
When the machine begins to start VE’s, it’s safe to leave the console and login via ssh. All virts should be set to auto start all the VEs after a crash. Further, most (newer) virts are set to “fastboot” their VE’s (to find out, do:&lt;br /&gt;
 grep -i fast /etc/sysconfig/vz &lt;br /&gt;
and look for &amp;lt;tt&amp;gt;VZFASTBOOT=yes&amp;lt;/tt&amp;gt;). If this was set prior to the machine’s crash (setting it after the machine boots has no effect until the vz service is restarted), it will start each ve as fast as possible, in serial, then go thru each VE (serially), shutting it down, running a vzquota (disk usage) check, and bringing it back up. The benefit is that all VE’s are brought up quickly (within 15min or so depending on the #); the downside is that a customer watching closely will notice 2 outages – 1st the machine crash, 2nd their quota check (which will be a much shorter downtime- on the order of a few minutes). &lt;br /&gt;
&lt;br /&gt;
Where “fastboot” is not set to yes (i.e. on quar1), vz will start them consecutively, checking the quotas one at a time, and the 60th VE may not start until an hour or two later - this is not acceptable.&lt;br /&gt;
&lt;br /&gt;
The good news is, if you run vzctl start for a VE that is already started, you will simply get an error: &amp;lt;tt&amp;gt;VE is already started&amp;lt;/tt&amp;gt;.  Further, if you attempt to vzctl start a VE that is in the process of being started, you will simply get an error: unable to lock VE.  So, there is no danger in simply running scripts to start smaller sets of VEs.  If the system is not autostarting, then there is no issue, and even if it does, when it conflicts, one process (yours or the autostart) will lose, and just move on to the next one.&lt;br /&gt;
&lt;br /&gt;
A script has been written to assist with ve starts: [[#startvirt.pl|startvirt.pl]], which will start 6 ve’s at once until there are no more left.  If startvirt.pl is used on a system where “fastboot” was on, it will circumvent the fastboot for ve’s started by startvirt.pl – they will go through the complete quota check before starting; therefore this is not advisable when a system has crashed. When a system is booted cleanly, and there&#039;s no need for vzquota checks, startvirt.pl is safe and advisable to run.&lt;br /&gt;
&lt;br /&gt;
=== Make sure all containers are running ===&lt;br /&gt;
You can quickly get a feel for how many ve’s are started by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt4 log]# vs&lt;br /&gt;
VEID 16066 exist mounted running&lt;br /&gt;
VEID 16067 exist mounted running&lt;br /&gt;
VEID 4102 exist mounted running&lt;br /&gt;
VEID 4112 exist mounted running&lt;br /&gt;
VEID 4116 exist mounted running&lt;br /&gt;
VEID 4122 exist mounted running&lt;br /&gt;
VEID 4123 exist mounted running&lt;br /&gt;
VEID 4124 exist mounted running&lt;br /&gt;
VEID 4132 exist mounted running&lt;br /&gt;
VEID 4148 exist mounted running&lt;br /&gt;
VEID 4151 exist mounted running&lt;br /&gt;
VEID 4155 exist mounted running&lt;br /&gt;
VEID 42 exist mounted running&lt;br /&gt;
VEID 432 exist mounted running&lt;br /&gt;
VEID 434 exist mounted running&lt;br /&gt;
VEID 442 exist mounted running&lt;br /&gt;
VEID 450 exist mounted running&lt;br /&gt;
VEID 452 exist mounted running&lt;br /&gt;
VEID 453 exist mounted running&lt;br /&gt;
VEID 454 exist mounted running&lt;br /&gt;
VEID 462 exist mounted running&lt;br /&gt;
VEID 463 exist mounted running&lt;br /&gt;
VEID 464 exist mounted running&lt;br /&gt;
VEID 465 exist mounted running&lt;br /&gt;
VEID 477 exist mounted running&lt;br /&gt;
VEID 484 exist mounted running&lt;br /&gt;
VEID 486 exist mounted running&lt;br /&gt;
VEID 490 exist mounted running&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So to see how many ve’s have started:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep running | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And to see how many haven’t:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep down | wc -l&lt;br /&gt;
     0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And how many we should have running:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
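Those three counts can also be taken in a single pass; a sketch run here over canned sample output (on a real system you would feed it vs instead of the here-doc):&lt;br /&gt;

```shell
# One-pass version of the three counts above, run over canned sample
# output; on a real virt, pipe `vs` in instead of the here-doc.
counts=$(awk '{total++} /running/{run++} /down/{dn++}
    END {printf "running=%d down=%d total=%d", run+0, dn+0, total+0}' <<'EOF'
VEID 42 exist mounted running
VEID 432 exist mounted running
VEID 999 exist unmounted down
EOF
)
echo "$counts"
```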
&lt;br /&gt;
Another tool you can use to see which ve’s have started, among other things is [[#vzstat|vzstat]]. It will give you CPU, memory, and other  stats on each ve and the overall system. It’s a good thing to watch as ve’s are starting (note the VENum parameter, it will tell you how many have started):&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;4:37pm, up 3 days,  5:31,  1 user, load average: 1.57, 1.68, 1.79&lt;br /&gt;
VENum 40, procs 1705: running 2, sleeping 1694, unint 0, zombie 9, stopped 0&lt;br /&gt;
CPU [ OK ]: VEs  57%, VE0   0%, user   8%, sys   7%, idle  85%, lat(ms) 412/2&lt;br /&gt;
Mem [ OK ]: total 6057MB, free 9MB/54MB (low/high), lat(ms) 0/0&lt;br /&gt;
Swap [ OK ]: tot 6142MB, free 4953MB, in 0.000MB/s, out 0.000MB/s&lt;br /&gt;
Net [ OK ]: tot: in  0.043MB/s  402pkt/s, out  0.382MB/s 4116pkt/s&lt;br /&gt;
Disks [ OK ]: in 0.002MB/s, out 0.000MB/s&lt;br /&gt;
&lt;br /&gt;
  VEID ST    %VM     %KM         PROC    CPU     SOCK FCNT MLAT IP&lt;br /&gt;
     1 OK 1.0/17  0.0/0.4    0/32/256 0.0/0.5 39/1256    0    9 69.55.227.152&lt;br /&gt;
    21 OK 1.3/39  0.1/0.2    0/46/410 0.2/2.8 23/1860    0    6 69.55.239.60&lt;br /&gt;
   133 OK 3.1/39  0.1/0.3    1/34/410 6.3/2.8 98/1860    0    0 69.55.227.147&lt;br /&gt;
   263 OK 2.3/39  0.1/0.2    0/56/410 0.3/2.8 34/1860    0    1 69.55.237.74&lt;br /&gt;
   456 OK  17/39  0.1/0.2   0/100/410 0.1/2.8 48/1860    0   11 69.55.236.65&lt;br /&gt;
   476 OK 0.6/39  0.0/0.2    0/33/410 0.1/2.8 96/1860    0   10 69.55.227.151&lt;br /&gt;
   524 OK 1.8/39  0.1/0.2    0/33/410 0.0/2.8 28/1860    0    0 69.55.227.153&lt;br /&gt;
   594 OK 3.1/39  0.1/0.2    0/45/410 0.0/2.8 87/1860    0    1 69.55.239.40&lt;br /&gt;
   670 OK 7.7/39  0.2/0.3    0/98/410 0.0/2.8 64/1860    0  216 69.55.225.136&lt;br /&gt;
   691 OK 2.0/39  0.1/0.2    0/31/410 0.0/0.7 25/1860    0    1 69.55.234.96&lt;br /&gt;
   744 OK 0.1/17  0.0/0.5    0/10/410 0.0/0.7  7/1860    0    6 69.55.224.253&lt;br /&gt;
   755 OK 1.1/39  0.0/0.2    0/27/410 0.0/2.8 33/1860    0    0 192.168.1.4&lt;br /&gt;
   835 OK 1.1/39  0.0/0.2    0/19/410 0.0/2.8  5/1860    0    0 69.55.227.134&lt;br /&gt;
   856 OK 0.3/39  0.0/0.2    0/13/410 0.0/2.8 16/1860    0    0 69.55.227.137&lt;br /&gt;
   936 OK 3.2/52  0.2/0.4    0/75/410 0.2/0.7 69/1910    0    8 69.55.224.181&lt;br /&gt;
  1020 OK 3.9/39  0.1/0.2    0/60/410 0.1/0.7 55/1860    0    8 69.55.227.52&lt;br /&gt;
  1027 OK 0.3/39  0.0/0.2    0/14/410 0.0/2.8 17/1860    0    0 69.55.227.83&lt;br /&gt;
  1029 OK 1.9/39  0.1/0.2    0/48/410 0.2/2.8 25/1860    0    5 69.55.227.85&lt;br /&gt;
  1032 OK  12/39  0.1/0.4    0/80/410 0.0/2.8 41/1860    0    8 69.55.227.90&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you are all done, run vs one more time to make sure that all the VEs really did get started.&lt;br /&gt;
&lt;br /&gt;
Note the time all ve’s are back up, enter it into the crash log entry, and save it.&lt;br /&gt;
&lt;br /&gt;
Occasionally, a ve will not start automatically. The most common reason for a ve not to come up normally is that it was at its disk limit before the crash, and will not start since it’s over the limit. To overcome this, set the disk space to the current usage level (the system will give you this number when the ve fails to start), start the ve, then re-set the disk space back to the prior level. Lastly, contact the customer to let them know they’re out of disk (or allocate more disk if they&#039;re entitled to more).&lt;br /&gt;
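That set-start-reset dance can be sketched as a helper that only prints the commands for review rather than running them (the VEID and block counts are made-up examples; the soft:hard form for --diskspace is standard vzctl syntax, but verify against the virt’s vzctl before pasting):&lt;br /&gt;

```shell
# Sketch: print the recovery commands for a ve stuck over its disk limit.
# Nothing is executed; review the output and paste the lines by hand.
# usage: recover_disk_limited_ve <veid> <current_usage_1k> <original_limit_1k>
recover_disk_limited_ve() {
    veid=$1; current=$2; original=$3
    # temporarily raise the limit to current usage so the ve can start
    echo "vzctl set $veid --diskspace $current:$current --save"
    echo "vzctl start $veid"
    # then put the prior limit back
    echo "vzctl set $veid --diskspace $original:$original --save"
}
```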
&lt;br /&gt;
== Hitting performance barriers and fixing them ==&lt;br /&gt;
&lt;br /&gt;
There are multiple modes virtuozzo offers to allocate resources to a ve. We utilize 2: SLM and UBC parameters.&lt;br /&gt;
On our 4.x systems, we use SLM exclusively – it’s simpler to manage and understand. There are also a few systems on virt19/18 that may use SLM. Everything else uses UBC. &lt;br /&gt;
You can tell a SLM ve by:&lt;br /&gt;
&lt;br /&gt;
 SLMMODE=&amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in their conf file. &lt;br /&gt;
&lt;br /&gt;
TODO: detail SLM modes and parameters.&lt;br /&gt;
&lt;br /&gt;
If someone is in SLM mode and they hit memory resource limits, they simply need to upgrade to more memory.&lt;br /&gt;
&lt;br /&gt;
The following applies to everyone else (UBC).&lt;br /&gt;
&lt;br /&gt;
Customers will often email and say that they are getting out of memory errors - a common one is &amp;quot;cannot fork&amp;quot; ... basically, anytime you see something odd like this, it means they are hitting one of the limits in place in their conf file.&lt;br /&gt;
&lt;br /&gt;
The conf file, however, simply shows their limits - how do we know what they are currently at?&lt;br /&gt;
&lt;br /&gt;
The answer is /proc/user_beancounters - this file contains the current status (and peaks) of their performance settings, and also counts how many times they have hit the barrier. The output of the file looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;764: kmemsize         384113     898185    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          0        322        322          0&lt;br /&gt;
     privvmpages        1292       7108     610000     615000          0&lt;br /&gt;
     shmpages            270        528      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc               8         23        410        415          0&lt;br /&gt;
     physpages            48       5624          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages        641       6389      13019 2147483647          0&lt;br /&gt;
     numtcpsock            3         21       1210       1215          0&lt;br /&gt;
     numflock              1          3        107        117          0&lt;br /&gt;
     numpty                0          2         19         19          0&lt;br /&gt;
     numsiginfo            0          4        274        274          0&lt;br /&gt;
     tcpsndbuf             0      80928    1800000    1900000          0 &lt;br /&gt;
     tcprcvbuf             0     108976    1800000    1900000          0&lt;br /&gt;
     othersockbuf       2224      37568     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4272     200000     200000          0&lt;br /&gt;
     numothersock          3          9        650        660          0&lt;br /&gt;
     dcachesize        53922     100320     786432     818029          0&lt;br /&gt;
     numfile             161        382       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first column is the name of the counter in question - the same names we saw in the system’s conf file. The second column is the _current_ value of that counter, the third column is the max that the counter has ever risen to, the fourth column is the soft limit, and the fifth column is the hard limit (which is the same as the numbers in that system’s conf file).&lt;br /&gt;
&lt;br /&gt;
The sixth number is the failcount - how many times the current usage has risen to hit the barrier.  It will increase as soon as the current usage hits the soft limit.&lt;br /&gt;
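A quick way to spot the troubled counters is to scan for non-zero failcounts. A minimal sketch, assuming the column layout shown above (failing_counters is a hypothetical helper name), fed with the relevant chunk of /proc/user_beancounters:&lt;br /&gt;

```shell
# Sketch: print any counter whose failcnt (last column) is non-zero.
# Input is beancounters-style text: "name held maxheld barrier limit failcnt",
# with the VEID prefixed to the first row (e.g. "765: kmemsize ...").
failing_counters() {
    awk '$NF ~ /^[0-9]+$/ && $NF > 0 {
        # the counter name is field 1, or field 2 on the row carrying the VEID
        name = ($1 ~ /:$/) ? $2 : $1
        print name, $NF
    }'
}
```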
&lt;br /&gt;
The problem with /proc/user_beancounters is that it actually contains that set of data for every running VE - so you can&#039;t just cat /proc/user_beancounters - it is too long and you get info for every other running system.&lt;br /&gt;
&lt;br /&gt;
You can vzctl enter the system and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 9999&lt;br /&gt;
 cat /proc/user_beancounters&lt;br /&gt;
&lt;br /&gt;
inside their system, and you will just see the stats for their particular system, but entering their system every time you want to see it is cumbersome.&lt;br /&gt;
&lt;br /&gt;
So, I wrote a simple script called &amp;quot;vzs&amp;quot; which simply greps for the VEID, and spits out the next 20 or so lines (however many lines there are in the output, I forget) after it.  For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzs 765:&lt;br /&gt;
765: kmemsize        2007936    2562780    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          8        322        322          0&lt;br /&gt;
     privvmpages       26925      71126     610000     615000          0&lt;br /&gt;
     shmpages          16654      16750      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc              41         57        410        415          0&lt;br /&gt;
     physpages          1794      49160          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages       4780      51270      13019 2147483647          0&lt;br /&gt;
     numtcpsock           23         37       1210       1215          0&lt;br /&gt;
     numflock             17         39        107        117          0&lt;br /&gt;
     numpty                1          3         19         19          0&lt;br /&gt;
     numsiginfo            0          6        274        274          0&lt;br /&gt;
     tcpsndbuf         22240     333600    1800000    1900000          0&lt;br /&gt;
     tcprcvbuf             0     222656    1800000    1900000          0&lt;br /&gt;
     othersockbuf     104528     414944     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4448     200000     200000          0&lt;br /&gt;
     numothersock         73        105        650        660          0&lt;br /&gt;
     dcachesize       247038     309111     786432     818029          0&lt;br /&gt;
     numfile             904       1231       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That showed us just the portion of /proc/user_beancounters for system 765.&lt;br /&gt;
&lt;br /&gt;
When you run the vzs command, always add a : after the VEID.&lt;br /&gt;
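If the vzs script ever needs to be recreated, something like this awk sketch reproduces the behavior described above (this is an assumption about how vzs works, based on its description, not the actual script from the virts):&lt;br /&gt;

```shell
# Sketch of a vzs-alike: print the /proc/user_beancounters block for one VE,
# i.e. the line starting with "<veid>:" plus every line up to the next VE's
# header row.
# usage: vzs_sketch 765: [file]   (file defaults to /proc/user_beancounters)
vzs_sketch() {
    awk -v ve="$1" '
        $1 == ve          { show = 1; print; next }  # found our VE header
        show && $1 ~ /:$/ { exit }                   # next VE header ends block
        show              { print }
    ' "${2:-/proc/user_beancounters}"
}
```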
&lt;br /&gt;
So, if a customer complains about some out of memory errors, or no more files, or no more ptys, or just has an unspecific complaint about processes dying, etc., the very first thing you need to do is check their beancounters with vzs.  Usually you will spot an item that has a high failcount and needs to be upped.&lt;br /&gt;
&lt;br /&gt;
At that point you could simply up the counter with `vzctl set`. Generally pick a number 10-20% higher than the old one, and make the hard limit slightly larger than the soft limit. However, our systems now come in several levels and those levels have more/different memory allocations. If someone is complaining about something other than a memory limit (pty, numiptent, numflock), it’s generally safe to increase it, at least to the same level as what’s in the /vzconf/4unlimited file on the newest virt. If someone is hitting a memory limit, first make sure they are given what they deserve:&lt;br /&gt;
&lt;br /&gt;
(refer to mgmt -&amp;gt; payments -&amp;gt; packages)&lt;br /&gt;
&lt;br /&gt;
To set those levels, you use the [[#setmem|setmem]] command. &lt;br /&gt;
&lt;br /&gt;
The alternate (DEPRECATED) method would be to use one of 4 commands:&lt;br /&gt;
 256 &amp;lt;veid&amp;gt;&lt;br /&gt;
 300 &amp;lt;veid&amp;gt;&lt;br /&gt;
 384 &amp;lt;veid&amp;gt;&lt;br /&gt;
 512 &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the levels were not right (you’d run vzs &amp;lt;veid&amp;gt; before and after to see the effect), tell the customer they’ve been adjusted and be done with it. If the levels were right, tell the customer they must upgrade to a higher package, tell them how to see their level (control panel), and that they can reboot their system to escape this lockup condition.&lt;br /&gt;
&lt;br /&gt;
Customers can also complain that their site is totally unreachable, or complain that it is down ... if the underlying machine is up, and all seems well, you may notice in the beancounters that network-specific counters are failing - such as numtcpsock, tcpsndbuf or tcprcvbuf.  This will keep them from talking on the network and make it seem like their system is down.  Again, just up the limits and things should be fine.&lt;br /&gt;
&lt;br /&gt;
On virts 1-4, you should first look at the default settings for that item on a later virt, such as virt 8 - we have increased the defaults a lot since the early machines.  So, if you are going to up a counter on virt2, instead of upping it by 10-20%, instead up it to the new default that you see on virt8.&lt;br /&gt;
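The 10-20% rule can be sketched as a small calculator (bump_limits is a hypothetical helper; 15% is just a midpoint of the suggested range, and the old soft/hard spread is kept as the new margin):&lt;br /&gt;

```shell
# Sketch: given the current soft (barrier) and hard limit, print a new pair
# with the soft raised 15% and the hard kept the same distance above it.
# Integer math (soft*115/100) avoids float rounding surprises.
# usage: bump_limits <soft> <hard>
bump_limits() {
    awk -v soft="$1" -v hard="$2" 'BEGIN {
        margin   = hard - soft
        new_soft = int(soft * 115 / 100)
        printf "%d %d\n", new_soft, new_soft + margin
    }'
}
```

You would then apply the new pair with something like vzctl set &amp;lt;veid&amp;gt; --privvmpages &amp;lt;soft&amp;gt;:&amp;lt;hard&amp;gt; --save (substituting whichever counter failed), and re-check with vzs.&lt;br /&gt;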
&lt;br /&gt;
== Moving a VE to another virt (migrate/migrateonline) ==&lt;br /&gt;
&lt;br /&gt;
This will take a while to complete - and it is best to do this at night when the load is light on both machines.&lt;br /&gt;
&lt;br /&gt;
There are different methods for this, depending on which version of virtuozzo is installed on the src. and dst. virt. &lt;br /&gt;
To check which version is running: &lt;br /&gt;
 [root@virt12 private]# cat /etc/virtuozzo-release&lt;br /&gt;
 Virtuozzo release 2.6.0&lt;br /&gt;
&lt;br /&gt;
Ok, let&#039;s say that the VE is 1212, and vital stats are:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1212&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/1212&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/1212&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;fedora-core-2/20040903&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.229.84&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;devel-fc2/20040903 php-fc2/20040813 mysql-fc2/20040812 postgresql-fc2/20040813 mod_perl-fc2/20040812 mod_ssl-fc2/20040811 jre-fc2/20040823 jdk-fc2/20040823 mailman-fc2/20040823 analog-fc2/20040824 proftpd-fc2/20040818 tomcat-fc2/20040823 usermin-fc2/20040909 webmin-fc2/20040909 uw-imap-fc2/20040830 phpBB-fc2/20040831 spamassassin-fc2/20040910 PostNuke-fc2/20040824 sl-webalizer-fc2/20040&lt;br /&gt;
818&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# vzctl exec 1212 df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  405M  3.7G  10% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From this you can see that he’s using (and will minimally need free on the dst server) ~400MB, and he’s running on a Fedora 2 template, version 20040903. He’s also got a bunch of other templates installed. It is &#039;&#039;&#039;vital&#039;&#039;&#039; that &#039;&#039;&#039;all&#039;&#039;&#039; these templates exist on the dst system. To confirm that, on the dst system run:&lt;br /&gt;
&lt;br /&gt;
For &amp;lt; 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt14 private]# vzpkgls | grep fc2&lt;br /&gt;
devel-fc2 20040903&lt;br /&gt;
PostNuke-fc2 20040824&lt;br /&gt;
analog-fc2 20040824&lt;br /&gt;
awstats-fc2 20040824&lt;br /&gt;
bbClone-fc2 20040824&lt;br /&gt;
jdk-fc2 20040823&lt;br /&gt;
jre-fc2 20040823&lt;br /&gt;
mailman-fc2 20040823&lt;br /&gt;
mod_frontpage-fc2 20040816&lt;br /&gt;
mod_perl-fc2 20040812&lt;br /&gt;
mod_ssl-fc2 20040811&lt;br /&gt;
mysql-fc2 20040812&lt;br /&gt;
openwebmail-fc2 20040817&lt;br /&gt;
php-fc2 20040813&lt;br /&gt;
phpBB-fc2 20040831&lt;br /&gt;
postgresql-fc2 20040813&lt;br /&gt;
proftpd-fc2 20040818&lt;br /&gt;
sl-webalizer-fc2 20040818&lt;br /&gt;
spamassassin-fc2 20040910&lt;br /&gt;
tomcat-fc2 20040823&lt;br /&gt;
usermin-fc2 20040909&lt;br /&gt;
uw-imap-fc2 20040830&lt;br /&gt;
webmin-fc2 20040909&lt;br /&gt;
[root@virt14 private]# vzpkgls | grep fedora&lt;br /&gt;
fedora-core-1 20040121 20040818&lt;br /&gt;
fedora-core-devel-1 20040121 20040818&lt;br /&gt;
fedora-core-2 20040903&lt;br /&gt;
[root@virt14 private]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For these older systems, you can simply match up the date on the template. &lt;br /&gt;
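Rather than eyeballing the two vzpkgls listings, you can diff them. A sketch (missing_templates is a hypothetical helper; the dst hostname is an example, and comm requires both lists to be sorted):&lt;br /&gt;

```shell
# Sketch: report templates present on the src but missing on the dst.
# Each argument is a file holding sorted `vzpkgls` output, captured e.g. with:
#   vzpkgls | sort > /tmp/src.txt
#   ssh root@10.1.4.64 vzpkgls | sort > /tmp/dst.txt
# usage: missing_templates <src_list> <dst_list>
missing_templates() {
    comm -23 "$1" "$2"    # -23: suppress dst-only and common lines
}
```

Empty output means every src template name/date appears on the dst.&lt;br /&gt;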
&lt;br /&gt;
For &amp;gt;= 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt19 /vz2/private]# vzpkg list&lt;br /&gt;
centos-5-x86                    2008-01-07 22:05:57&lt;br /&gt;
centos-5-x86    devel&lt;br /&gt;
centos-5-x86    jre&lt;br /&gt;
centos-5-x86    jsdk&lt;br /&gt;
centos-5-x86    mod_perl&lt;br /&gt;
centos-5-x86    mod_ssl&lt;br /&gt;
centos-5-x86    mysql&lt;br /&gt;
centos-5-x86    php&lt;br /&gt;
centos-5-x86    plesk9&lt;br /&gt;
centos-5-x86    plesk9-antivirus&lt;br /&gt;
centos-5-x86    plesk9-api&lt;br /&gt;
centos-5-x86    plesk9-atmail&lt;br /&gt;
centos-5-x86    plesk9-backup&lt;br /&gt;
centos-5-x86    plesk9-horde&lt;br /&gt;
centos-5-x86    plesk9-mailman&lt;br /&gt;
centos-5-x86    plesk9-mod-bw&lt;br /&gt;
centos-5-x86    plesk9-postfix&lt;br /&gt;
centos-5-x86    plesk9-ppwse&lt;br /&gt;
centos-5-x86    plesk9-psa-firewall&lt;br /&gt;
centos-5-x86    plesk9-psa-vpn&lt;br /&gt;
centos-5-x86    plesk9-psa-fileserver&lt;br /&gt;
centos-5-x86    plesk9-qmail&lt;br /&gt;
centos-5-x86    plesk9-sb-publish&lt;br /&gt;
centos-5-x86    plesk9-vault&lt;br /&gt;
centos-5-x86    plesk9-vault-most-popular&lt;br /&gt;
centos-5-x86    plesk9-watchdog&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On these newer systems, it&#039;s difficult to tell whether the template on the dst matches the src exactly. Just because a centos-5-x86 is listed on both servers doesn&#039;t mean all the same packages are there on the dst. To truly know, you must perform a sample rsync:&lt;br /&gt;
&lt;br /&gt;
 rsync -avn /vz/template/centos/5/x86/ root@10.1.4.61:/vz/template/centos/5/x86/&lt;br /&gt;
&lt;br /&gt;
If you see a ton of output from the dry run command, then clearly there are some differences. You may opt to let the rsync complete (without running in dry run mode); the only downside is that you&#039;ve now used up more space on the dst, and the centos template will be a mess of old and new data - it will be difficult if not impossible to undo (if someday we wanted to reclaim the space).&lt;br /&gt;
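To turn &amp;quot;a ton of output&amp;quot; into a number, you can filter the dry-run output down to the files rsync would actually transfer. This is a rough heuristic over typical rsync -avn output, not a parser for every rsync version (count_pending is a hypothetical helper name):&lt;br /&gt;

```shell
# Sketch: count files an `rsync -avn ...` dry run would transfer, dropping
# rsync's own chatter (header/summary lines), blanks, and bare directories.
# usage: rsync -avn SRC DST | count_pending
count_pending() {
    grep -cv -e '^$' -e '^sending ' -e '^sent ' -e '^total size' -e '/$'
}
```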
&lt;br /&gt;
If you choose to merge templates, you should closely inspect the dry run output. You should also take care to exclude anything in the /config directory. For example:&lt;br /&gt;
&lt;br /&gt;
 rsync -av -e ssh --stats --exclude=x86/config  /vz/template/ubuntu/10.04/ root@10.1.4.62:/vz/template/ubuntu/10.04/&lt;br /&gt;
&lt;br /&gt;
Which will avoid this directory and contents:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 /vz2/private]# ls /vz/template/ubuntu/10.04/x86/config*&lt;br /&gt;
app  os&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is important to avoid since the config may differ on the destination and we are really only interested in making sure the packages are there, not overwriting a newer config with an older one.&lt;br /&gt;
&lt;br /&gt;
If the dst system was missing a template, you have 2 choices: &lt;br /&gt;
# put the missing template on the dst system. 2 choices here: &lt;br /&gt;
## Install the template from rpm (found under backup2: /mnt/data4/vzrpms/distro/) or &lt;br /&gt;
## rsync over the template (found under /vz/template) - see above&lt;br /&gt;
# put the ve on a system which has all the proper templates&lt;br /&gt;
&lt;br /&gt;
=== pre-seeding a migration ===&lt;br /&gt;
&lt;br /&gt;
When migrating a customer (or when doing many) depending on how much data you have to transfer, it can take some time. Further, it can be difficult to gauge when a migration will complete or how long it will take. To help speed up the process and get a better idea about how long it will take you can pre-transfer a customer&#039;s data to the destination server. If done correctly, vzmigrate will see the pre-transferred data and pick up where you left off, having much less to transfer (just changed/new files). &lt;br /&gt;
&lt;br /&gt;
We believe vzmigrate uses rsync to do its transfer. Therefore not only can you use rsync to do a pre-seed, you can also run rsync to see what is causing a repeatedly-failing vzmigrate to fail. &lt;br /&gt;
&lt;br /&gt;
There&#039;s no magic to a pre-seed, you just need to make sure it&#039;s named correctly.&lt;br /&gt;
&lt;br /&gt;
Given:&lt;br /&gt;
&lt;br /&gt;
source: /vz1/private/1234&lt;br /&gt;
&lt;br /&gt;
and you want to migrate to /vz2 on the target system, your rsync would look like:&lt;br /&gt;
&lt;br /&gt;
 rsync -av /vz1/private/1234/ root@x.x.x.x:/vz2/private/1234.migrated/&lt;br /&gt;
&lt;br /&gt;
After running that successful rsync, the ensuing migrateonline (or migrate) will take much less time to complete- depending on the # of files to be analyzed and the # of changed files. In any case, it&#039;ll be much much faster than had you just started the migration from scratch.&lt;br /&gt;
&lt;br /&gt;
Further, as we discuss elsewhere in this topic, a failed migration can be moved from &amp;lt;tt&amp;gt;/vz/private/1234&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;/vz/private/1234.migrated&amp;lt;/tt&amp;gt; on the destination if you want to restart a failed migration. This should &#039;&#039;&#039;only&#039;&#039;&#039; be done if the migration failed and the CT is not running on the destination HN.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline instructions: src &amp;gt;=3.x -&amp;gt; dst&amp;gt;=3.x ===&lt;br /&gt;
&lt;br /&gt;
A script called [[#migrateonline|migrateonline]] was written to handle this kind of move. It is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; – vzmigrate is a util to seamlessly move a ve from one host to another (no reboot of the ve necessary). This wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from arp/route tables, causing problems when the ve was started up on the dst system. [[#migrate|migrate]] mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrateonline emails ve owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as it was on the src virt. Note: &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; is equivalent to &amp;lt;tt&amp;gt;migrateonline&amp;lt;/tt&amp;gt;, but will restart the ve in the process.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrateonline&lt;br /&gt;
usage: /usr/local/sbin/migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrateonline 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine.&lt;br /&gt;
&lt;br /&gt;
If they had backups, use the mvbackups command to move their backups to the new server:&lt;br /&gt;
&lt;br /&gt;
 mvbackups 1212 virt14 vz&lt;br /&gt;
&lt;br /&gt;
Rename the ve&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/migrated-1212&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/old-1212-migrated-20120404-noarchive&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems in mgmt to reflect the new path and server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If migrateonline does not work, you can try again using migrate - this will result in a brief reboot for the ve.&lt;br /&gt;
Before you try again, make sure of a few things:&lt;br /&gt;
&lt;br /&gt;
Depending on where the migration died, there may be partial data on the dst system in 1 of 2 places:&lt;br /&gt;
(given the example above)&lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
Before you run migrate again, you&#039;ll want to rename it so that all data is in 1212.migrated:&lt;br /&gt;
&lt;br /&gt;
 mv /vz/private/1212 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
this way, it will pick up where it left off and transfer only new files.&lt;br /&gt;
&lt;br /&gt;
Likewise, if you want to speed up a migration, you can pre-seed the dst as follows:&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# rsync -avSH /vz/private/1212/ root@10.1.4.64:/vz/private/1212.migrated/&lt;br /&gt;
&lt;br /&gt;
then when you run migrate or migrateonline, it will only need to move the changed files - the migration will complete quickly.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline/migrate failures (migrate manually) ===&lt;br /&gt;
&lt;br /&gt;
Lets say for whatever reason the migration fails. If it fails with [[#migrateonline|migrateonline]], you should try [[#migrate|migrate]] (which will reboot the customer, so notify them ahead of time).&lt;br /&gt;
&lt;br /&gt;
You may want to run a [[#pre-seeding_a_migration|pre-seed]] rsync to see if you can find the problem. On older virts, we&#039;ve seen this problem due to a large logfile (which you can find and encourage the customer to remove/compress):&lt;br /&gt;
 find / -size +1048576k -exec ls -lh {} \;&lt;br /&gt;
&lt;br /&gt;
You may also see migration failing due to quota issues.&lt;br /&gt;
&lt;br /&gt;
You can try to resolve by copying any quota file into the file you need:&lt;br /&gt;
&lt;br /&gt;
 cp /var/vzquota/quota.1 /var/vzquota/quota.xxx&lt;br /&gt;
&lt;br /&gt;
If it complains about quota running, you should then be able to stop it:&lt;br /&gt;
&lt;br /&gt;
 vzquota off xxxx&lt;br /&gt;
&lt;br /&gt;
If all else fails, migrate to a new VEID&lt;br /&gt;
i.e. 1234 becomes 12341&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the rsync or [[#migrate|migrate]] fails, you can always move someone manually:&lt;br /&gt;
&lt;br /&gt;
1. stop ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v stop 1234&lt;br /&gt;
&lt;br /&gt;
2. copy over data&amp;lt;br&amp;gt;&lt;br /&gt;
 rsync -avSH /vz/private/1234/ root@1.1.1.1:/vzX/private/1234/&lt;br /&gt;
&lt;br /&gt;
NOTE: if you&#039;ve previously seeded the data (run rsync while the VE was up/running), and this is a subsequent rsync, make sure the last rsync you do (while the VE is not running) has the --delete option&lt;br /&gt;
&lt;br /&gt;
3. copy over conf&amp;lt;br&amp;gt;&lt;br /&gt;
 scp /vzconf/1234.conf root@1.1.1.1:/vzconf&lt;br /&gt;
&lt;br /&gt;
4. on dst, edit the conf to reflect the right vzX dir&amp;lt;br&amp;gt;&lt;br /&gt;
 vi /vzconf/1234.conf&lt;br /&gt;
&lt;br /&gt;
5. on src remove the IPs&amp;lt;br&amp;gt;&lt;br /&gt;
 ipdel 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
6. on dst add IPs &amp;lt;br&amp;gt;&lt;br /&gt;
 ipadd 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
7. on dst, start ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v start 1234&lt;br /&gt;
&lt;br /&gt;
8. cancel, then archive the ve on the src per the instructions above.&lt;br /&gt;
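The eight steps above can be collected into one reviewable plan. This sketch only prints the commands (the VEID, IPs, and target /vzX dir are examples; manual_migrate_plan is a hypothetical helper), so nothing runs by accident - paste the lines out by hand, switching hosts where the comments say so:&lt;br /&gt;

```shell
# Sketch: print the manual-migration command sequence for review.
# usage: manual_migrate_plan <veid> <dst_ip> <vzX> <ip> [ip ...]
manual_migrate_plan() {
    veid=$1; dst=$2; vzdir=$3; shift 3; ips=$*
    echo "v stop $veid"
    # --delete is safe here because the VE is stopped (see the NOTE above)
    echo "rsync -avSH --delete /vz/private/$veid/ root@$dst:/$vzdir/private/$veid/"
    echo "scp /vzconf/$veid.conf root@$dst:/vzconf"
    echo "# on $dst: edit /vzconf/$veid.conf to point at /$vzdir"
    echo "ipdel $veid $ips"
    echo "# on $dst:"
    echo "ipadd $veid $ips"
    echo "v start $veid"
}
```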
&lt;br /&gt;
=== migrate src=2.6.0 -&amp;gt; dst&amp;gt;=2.6.0, or mass-migration with customer notify ===&lt;br /&gt;
&lt;br /&gt;
A script called &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; was written to handle this kind of move. It is basically a wrapper for vzmigrate – vzmigrate is a util to seamlessly move a ve from one host to another. This wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from arp/route tables, causing problems when the ve was started up on the dst system. migrate mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrate emails ve owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as it was on the src virt. Note: migrateonline is equivalent to migrate, but will migrate a ve from one 2.6 &#039;&#039;&#039;kernel&#039;&#039;&#039; machine to another 2.6 kernel machine without restarting the ve.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrate&lt;br /&gt;
usage: /usr/local/sbin/migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrate 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which migrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
&lt;br /&gt;
On 2.6.1 you’ll also have to move the private area:&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, [[#cancelve|cancelve]] would offer to remove them. You want to say &#039;&#039;&#039;no&#039;&#039;&#039; to this option – doing so would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across a filesystem as in the example above will create actual files and consume lots more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
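The hardlink point can be demonstrated with plain shell (a throwaway scratch directory; nothing backup2-specific – `cp` stands in for the copy that a cross-filesystem `mv` performs):&lt;br /&gt;

```shell
# Demo: hardlinks share disk blocks, so `du` counts the data once;
# copying (which a cross-filesystem mv must do) materializes a real,
# separate file and consumes extra space.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/a" bs=1024 count=100 2>/dev/null
ln "$tmp/a" "$tmp/b"                       # hardlink: same inode, no extra space
same_fs=$(du -sk "$tmp" | awk '{print $1}')
cp "$tmp/a" "$tmp/c"                       # what a cross-fs move effectively does
cross_fs=$(du -sk "$tmp" | awk '{print $1}')
# cross_fs is ~100K larger than same_fs: the copy used real blocks
```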
&lt;br /&gt;
To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, then update the system, the template (should be pre-selected as the same), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
=== vzmigrate: src=2.6.1 -&amp;gt; dst&amp;gt;=2.6.0 ===&lt;br /&gt;
&lt;br /&gt;
This version of vzmigrate works properly with regard to handling IPs. It will not notify ve owners of moves as in the above example. Other than that it’s essentially the same.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]#  vzmigrate 10.1.4.64 -r no 1212:1212:/vz/private/1212:/vz/root/1212&lt;br /&gt;
migrating on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which vzmigrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
[root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, &amp;lt;tt&amp;gt;cancelve&amp;lt;/tt&amp;gt; would offer to remove them. You want to say no to this option – removing them would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems as in the example above will create actual files and consume lots more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, then update the system, the template (should be pre-selected as the same), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
=== src=2.5.x ===&lt;br /&gt;
&lt;br /&gt;
First, go to the private dir:&lt;br /&gt;
&lt;br /&gt;
 cd /vz1/private/&lt;br /&gt;
&lt;br /&gt;
Stop the VE - make sure it stops totally cleanly.&lt;br /&gt;
 &lt;br /&gt;
 vzctl stop 1212&lt;br /&gt;
&lt;br /&gt;
Then you’d use vemove - a script written to copy over the config, create tarballs of the ve’s data on the destination virt, and cancel the ve on the source system (in this example we’re going to put a ve that was in /vz1/private on the src virt, in /vz/private on the dst virt):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vemove&lt;br /&gt;
ERROR: Usage: vemove veid target_ip target_path_dir&lt;br /&gt;
[root@virt12 sbin]# vemove 1212 10.1.4.64 /vz/private/1212&lt;br /&gt;
tar cfpP - 1212 --ignore-failed-read | (ssh -2 -c arcfour 10.1.4.64 &amp;quot;split - -b 1024m /vz/private/1212.tar&amp;quot; )&lt;br /&gt;
scp /vzconf/1212.conf 10.1.4.64:/vzconf&lt;br /&gt;
cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, cancelve would offer to remove them. You want to say no to this option – removing them would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems as in the example above will create actual files and consume lots more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
When you are done, go to /vz/private on the dst virt; you will have files like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;1212.taraa&lt;br /&gt;
1212.tarab&lt;br /&gt;
1212.tarac&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each one 1024m (or less, for the last one) in size.&lt;br /&gt;
&lt;br /&gt;
Then, still on the dst server, run:&lt;br /&gt;
&lt;br /&gt;
 cat 1212.tar?? | tar xpPBf -&lt;br /&gt;
&lt;br /&gt;
and after 20 mins or so it will be totally untarred.  Now since the conf&lt;br /&gt;
file is already there, you can go ahead and start the system.&lt;br /&gt;
&lt;br /&gt;
 vzctl start 1212&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, then update the system, the template (should be pre-selected as the same), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
NOTE: you MUST tar the system up using the virtuozzo version of tar that&lt;br /&gt;
is on all the virt systems, and further you MUST untar the tarball with&lt;br /&gt;
the virtuozzo tar, using these options:  `&amp;lt;tt&amp;gt;tar xpPBf -&amp;lt;/tt&amp;gt;`&lt;br /&gt;
&lt;br /&gt;
If you tar up an entire VE and move it to a non-virtuozzo machine, that is&lt;br /&gt;
ok, and you can untar it there with normal tar commands, but do not untar&lt;br /&gt;
it and then repack it with a normal tar and expect it to work - you need&lt;br /&gt;
to use virtuozzo tar commands on virtuozzo tarballs to make it work.&lt;br /&gt;
&lt;br /&gt;
The backups are sort of an exception, since we are just (usually)&lt;br /&gt;
restoring user data that was created after we gave them the system, and&lt;br /&gt;
therefore has nothing to do with magic symlinks or vz-rpms, etc.&lt;br /&gt;
&lt;br /&gt;
== Moving a VE on the same virt ==&lt;br /&gt;
&lt;br /&gt;
Easy way:&amp;lt;br&amp;gt;&lt;br /&gt;
Scenario 1: ve 123 is to be renamed 1231 and moved from vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:1231:/vz/private/1231:/vz/root/1231&lt;br /&gt;
&lt;br /&gt;
Scenario 2: ve 123 is to be moved vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:123:/vz/private/123:/vz/root/123&lt;br /&gt;
&lt;br /&gt;
vzmlocal will reboot the ve at the end of the move&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; don&#039;t forget to fix /usr/local/sbin/backup.config for new disk location&lt;br /&gt;
&lt;br /&gt;
Manual/old way:&lt;br /&gt;
&lt;br /&gt;
1) &amp;lt;tt&amp;gt;vzctl stop 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2) &amp;lt;tt&amp;gt;mv /vz1/private/123 /vz/private/.&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(or cp -a if you want to copy)&lt;br /&gt;
3) in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts/123.conf&amp;lt;/tt&amp;gt; change value&amp;lt;br&amp;gt;&lt;br /&gt;
of &#039;&amp;lt;tt&amp;gt;VE_PRIVATE&amp;lt;/tt&amp;gt;&#039; variable to point to a new private area location&lt;br /&gt;
4) &amp;lt;tt&amp;gt;vzctl start 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
5) update backups if needed: &amp;lt;tt&amp;gt;mvbackups 123 virtX virt1 vz&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
6) update management screens&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notes: a) absolute path to private area is stored in quota file &amp;lt;tt&amp;gt;/var/vzquota/quota.123&amp;lt;/tt&amp;gt; - so during first startup quota will be recalculated.&amp;lt;br&amp;gt;&lt;br /&gt;
b) if you&#039;re going to write some script to do the job, you MUST be sure that $VEID won&#039;t be expanded to &#039;&#039; in the ve config file - i.e. you need to escape &#039;$&#039;. Otherwise you might have:&lt;br /&gt;
&lt;br /&gt;
 VE_PRIVATE=&amp;quot;/vz/private/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in config, and &#039;vzctl destroy&#039; for this VE ID &#039;&#039;&#039;will remove everything under /vz/private/ directory&#039;&#039;&#039;.&lt;br /&gt;
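The expansion hazard can be reproduced in any shell (variable and string names here are only illustrative):&lt;br /&gt;

```shell
# Double quotes expand $VEID at script-writing time (here, to empty);
# single quotes keep the literal $VEID string that the ve config needs.
unset VEID
bad="VE_PRIVATE=\"/vz/private/$VEID\""    # oops: becomes /vz/private/
good='VE_PRIVATE="/vz/private/$VEID"'     # literal $VEID survives
```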
&lt;br /&gt;
== Adding a veth device to a VE ==&lt;br /&gt;
&lt;br /&gt;
Not totally sure what this is, but a customer asked for it and here&#039;s what we did (as instructed by vz support):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;v set 99 --netif_add eth99  --save&lt;br /&gt;
ipdel 99 69.55.230.58&lt;br /&gt;
v set 99 --ifname eth99 --ipadd 69.55.230.58 --save&lt;br /&gt;
v set 99 --ifname eth99 --gateway 69.55.230.1 --save&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active      eth0              veth77.77,veth99.99&lt;br /&gt;
veth_net          active&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
br99             bridge     net99&lt;br /&gt;
veth99.99        veth       net99&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        net99            69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br0 eth0&lt;br /&gt;
&lt;br /&gt;
(will remove 99 from orig net and move to veth_net)&lt;br /&gt;
vznetcfg net addif veth_net veth99.99&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
(delete the old crap)&lt;br /&gt;
vznetcfg net del net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then, to add another device in&lt;br /&gt;
&lt;br /&gt;
v set 77 --netif_add eth77  --save&lt;br /&gt;
ipdel 77 69.55.230.78&lt;br /&gt;
v set 77 --ifname eth77 --ipadd 69.55.230.78 --save&lt;br /&gt;
v set 77 --ifname eth77 --gateway 69.55.230.1 --save&lt;br /&gt;
v set 77 --save --ifname eth77 --network veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif veth_net veth77.77&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth       veth_net&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
another example&lt;br /&gt;
&lt;br /&gt;
v set 1182 --netif_add eth1182  --save&lt;br /&gt;
ipdel 1182 69.55.236.217&lt;br /&gt;
v set 1182 --ifname eth1182 --ipadd 69.55.236.217 --save&lt;br /&gt;
v set 1182 --ifname eth1182 --gateway 69.55.236.1 --save&lt;br /&gt;
vznetcfg net addif veth_net veth1182.1182&lt;br /&gt;
v set 1182 --save --ifname eth1182 --network veth_net&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unused/not working commands:&lt;br /&gt;
ifconfig veth99.0 0&lt;br /&gt;
vznetcfg net list&lt;br /&gt;
vznetcfg br new br99 net99&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg br show&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
&lt;br /&gt;
vznetcfg br new br1182 net1182&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif eth0 net1182&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&lt;br /&gt;
--- 8&amp;lt; ---&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new net&lt;br /&gt;
# vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vzctl set 99 --save --netif_add eth0 (at this stage veth99.0 interface have to appear&lt;br /&gt;
on node)&lt;br /&gt;
# vzctl set 99 --save --ifname eth0 --ipadd 69.55.230.58 (and probably few more arguments&lt;br /&gt;
here - see &#039;man vzctl&#039;)&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Assigning/removing IPs from a VE ==&lt;br /&gt;
&lt;br /&gt;
1. Add or remove IPs:&lt;br /&gt;
 ipdel 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
 ipadd 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
&lt;br /&gt;
2. update Mgmt screens&lt;br /&gt;
&lt;br /&gt;
3. offer to update any DNS we do for them&lt;br /&gt;
&lt;br /&gt;
4. check to see if we had rules for the old IP in the firewall&lt;br /&gt;
&lt;br /&gt;
== Enabling tun device for a ve ==&lt;br /&gt;
Note, there’s a command for this: [[#addtun|addtun]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
Make sure the tun.o module is already loaded before Virtuozzo is started: &lt;br /&gt;
 lsmod &lt;br /&gt;
Allow the VPS to use the TUN/TAP device: &lt;br /&gt;
 vzctl set 101 --devices c:10:200:rw --save &lt;br /&gt;
Create the corresponding device inside the VPS and set the proper permissions: &lt;br /&gt;
 vzctl exec 101 mkdir -p /dev/net &lt;br /&gt;
 vzctl exec 101 mknod /dev/net/tun c 10 200 &lt;br /&gt;
 vzctl exec 101 chmod 600 /dev/net/tun&lt;br /&gt;
&lt;br /&gt;
== Remaking a system (on same virt) ==&lt;br /&gt;
&lt;br /&gt;
1. [[#cancelve|cancelve]] (or v destroy x - ONLY if you&#039;re POSITIVE no data needs to be saved)&lt;br /&gt;
&lt;br /&gt;
2. [[#vemake|vemake]] using same veid&lt;br /&gt;
&lt;br /&gt;
3. [[#mvbackups|mvbackups]] or [[#vb|vb]] (if new mount point)&lt;br /&gt;
&lt;br /&gt;
4. update mgmt with new dir/ip &lt;br /&gt;
&lt;br /&gt;
5. update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Re-initialize quota for a VE ==&lt;br /&gt;
&lt;br /&gt;
There’s a command for this now: [[#clearquota|clearquota]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
&lt;br /&gt;
vzctl stop 1&lt;br /&gt;
vzquota drop 1&lt;br /&gt;
vzctl start 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Traffic accounting on linux ==&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - all tracking is done via bwdb now. This is how we used to track traffic.&lt;br /&gt;
&lt;br /&gt;
TODO: update for diff versions of vz&lt;br /&gt;
&lt;br /&gt;
Unlike FreeBSD, where we have to add firewall count rules to the system to count the traffic, Virtuozzo counts the traffic for us.  You can see the current traffic stats by running `vznetstat`:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see the VEID is on a line with the in and out bytes.  So, we simply run a cron job:&lt;br /&gt;
&lt;br /&gt;
 4,9,14,19,24,29,34,39,44,49,55,59 * * * * /root/vztrafdump.sh&lt;br /&gt;
&lt;br /&gt;
Just like we do on FreeBSD - this one goes through all the VEs in /vz/private and greps the line from vznetstat that matches them and dumps it in /jc_traffic_dump on their system.  Then it does it again for all the VEs in /vz1/private.  It is important to note that vznetstat runs only once, and the grepping is done from a temporary file that contains that output - we do this because running vznetstat once for each VE that we read out of /vz/private and /vz1/private would take way too long and be too intensive.&lt;br /&gt;
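A minimal sketch of that flow (the real /root/vztrafdump.sh may differ; the function is parameterized on the stats file and private dirs so the snippet is self-contained, and the exact dump path inside each VE&#039;s private area is an assumption for illustration):&lt;br /&gt;

```shell
# Sketch of the vztrafdump approach: capture vznetstat output ONCE
# into a file, then grep each VE's line out of it -- rather than
# running vznetstat per VE, which would be far too slow.
# Usage: dump_traffic STATSFILE PRIVATE_DIR...
# (the real script hardcodes /vz/private and /vz1/private)
dump_traffic() {
    stats=$1; shift
    for dir in "$@"; do
        for ve in "$dir"/*; do
            [ -d "$ve" ] || continue
            veid=$(basename "$ve")
            # match the VEID at the start of a vznetstat line
            grep "^$veid " "$stats" >> "$ve/jc_traffic_dump"
        done
    done
}
```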
&lt;br /&gt;
You do not need to do anything to facilitate this other than make sure that that cron job is running - the vznetstat counters are always running, and any new VEs that are added to the system will be accounted for automatically.&lt;br /&gt;
&lt;br /&gt;
Traffic resetting no longer works with vz 2.6, so we disable the vztrafdump.sh on those virts.&lt;br /&gt;
&lt;br /&gt;
== Watchdog script ==&lt;br /&gt;
&lt;br /&gt;
On some of the older virts, we have a watchdog running that kills procs that are deemed bad per the following:&lt;br /&gt;
&lt;br /&gt;
/root/watchdog from quar1&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [5-9]...&lt;br /&gt;
  then&lt;br /&gt;
# 50-90%&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;...:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;....m&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 1000min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
  if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [1-9]...&lt;br /&gt;
  then&lt;br /&gt;
# running for 10-90 percent&lt;br /&gt;
    if echo $line | awk &#039;{print $NF}&#039; | egrep &#039;cfusion|counter|vchkpw&#039;&lt;br /&gt;
    then&lt;br /&gt;
&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[2-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# between 2-9min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      elif echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[0-9][0-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# up to 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
    fi&lt;br /&gt;
  fi&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Misc Linux Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space ... when you configure a linux system with a certain amount of disk space (the default is 4gigs) you do not actually use up 4gigs of space on the system.  The diskspace setting for a user is simply a cap, and they only use up as much space on the actual disk drive as they are actually using.&lt;br /&gt;
&lt;br /&gt;
When you create a new linux system, even though there are some 300 RPMs or so installed, if you run `df -k` you will see that the entire 4gig partition is empty - no space is being used.  This is because the files in their system are &amp;quot;magic symlinks&amp;quot; to the template for their OS that is in /vz/template - however, any changes to any of those files will &amp;quot;disconnect&amp;quot; them and they will immediately begin using space in their system.  Further, any new files uploaded (even if those new files overwrite existing files) will take up space on the partition.&lt;br /&gt;
&lt;br /&gt;
=== Can&#039;t change file /etc/sysconfig/network ===&lt;br /&gt;
if you see this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt8 root]# vzctl stop 160 ; vzctl start 160&lt;br /&gt;
VE is not running&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is unmounted&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
bash ERROR: Can&#039;t change file /etc/sysconfig/network&lt;br /&gt;
Deleting IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
VE is unmounted&lt;br /&gt;
[root@virt8 root]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
it probably means they no longer have /bin/bash - copy one in for them&lt;br /&gt;
 &lt;br /&gt;
ALSO: another possibility is that they have removed the `ed` RPM from their system - it needs to be reinstalled into their system.  But since their system is down, this is tricky ...&lt;br /&gt;
&lt;br /&gt;
VE startup scripts used by &#039;vzctl&#039; want the package &#039;ed&#039; to be available inside the VE. So if the &#039;ed&#039; package is enabled in the OS template config, and in the OS template itself that VE #827 is based on, this error should be fixed.&lt;br /&gt;
&lt;br /&gt;
Yes, it is possible to add an RPM to a VE while it is not running.&lt;br /&gt;
Try the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# cd /vz/template/&amp;lt;OS_template_with_ed_package&amp;gt;/&lt;br /&gt;
# vzctl mount 827&lt;br /&gt;
# rpm -Uvh --root /vz/root/827 --veid 827 ed-0.2-25.i386.vz.rpm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usually there’s an error, but it’s OK.&lt;br /&gt;
&lt;br /&gt;
Note: replace &#039;ed-0.2-25.i386.vz.rpm&#039; in the last command with the actual&lt;br /&gt;
version of the &#039;ed&#039; package you have.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== What template is the user using ===&lt;br /&gt;
So how do I know what template the user has? cat their conf file and it is listed in there. For example, if the conf file has:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1103&lt;br /&gt;
…snip…&lt;br /&gt;
OSTEMPLATE=&amp;quot;debian-3.0/20030822&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;mod_perl-deb30/20030707 mod_ssl-deb30/20030703 mysql-deb30/20030707 proftpd-deb30/20030703 webmin-deb30/20030823 &amp;quot;&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then they are on Debian 3.0, all of their system RPMs are in /vz/template/debian-3.0, and they are using version 20030822 of that Debian 3.0 template. They’ve also got additional packages installed (mod_perl, mod_ssl, etc.). Those are also found under /vz/template.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Edits needed to run java ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When we first created the VEs, the default setting for privvmpages was 93000:94000 ... which was high enough that most people never had problems ... however, you can’t run java or the jdk or tomcat or anything java-related with that setting.  We have found that java runs just fine with privvmpages set to 610000:615000.  That is now the default setting. It is exceedingly rare that anyone needs it higher than that, although we have seen it once or twice.&lt;br /&gt;
&lt;br /&gt;
Any problems with java at all - the first thing you need to do is see if the failcnt has risen for privvmpages.&lt;br /&gt;
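On the virt, failcnt is the last column of the privvmpages row in /proc/user_beancounters; a quick check (parameterized on the file so the snippet is self-contained – on a real virt you’d point it at /proc/user_beancounters):&lt;br /&gt;

```shell
# Print the privvmpages failcnt (last column of its row in the
# beancounters table); a nonzero value means the VE has been hitting
# its privvmpages limit. The resource name is in column 1, or column 2
# on the first row of a VE block (where column 1 is the "uid:" field).
privvmpages_failcnt() {
    awk '$1 == "privvmpages" || $2 == "privvmpages" { print $NF }' "$1"
}
```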
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzctl start 160&lt;br /&gt;
Starting VE ...&lt;br /&gt;
vzquota : (error) Quota on syscall for 160: Device or resource busy&lt;br /&gt;
Running vzquota on failed for VE 160 [3]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is because my pwd is _in_ their private directory - you can&#039;t start it until you move out&lt;br /&gt;
&lt;br /&gt;
People seem to have trouble with php if they are clueless newbies.  Here are two common problems/solutions:&lt;br /&gt;
&lt;br /&gt;
no... but i figured it out myself. problem was the php.ini file that came&lt;br /&gt;
vanilla with the account was not configured to work with apache (the&lt;br /&gt;
ENGINE directive was set to off).&lt;br /&gt;
&lt;br /&gt;
everything else seems fine now.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
the problem was in the php.ini file.  I noticed that it wasn’t showing&lt;br /&gt;
the code when it was in an html file so I looked at the php.ini file&lt;br /&gt;
and had to change it so it recognized &amp;lt;? tags as well as &amp;lt;?php tags.&lt;br /&gt;
&lt;br /&gt;
Also, make sure this is added to httpd.conf:&lt;br /&gt;
    AddType application/x-httpd-php .php&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Set time zone ===&lt;br /&gt;
&lt;br /&gt;
You can change the time zone by doing this:&lt;br /&gt;
&lt;br /&gt;
 ln -sf /usr/share/zoneinfo/&amp;lt;zone&amp;gt; /etc/localtime&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;zone&amp;gt; is the zone you want in the /usr/share/zoneinfo/ directory.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Failing shm_open calls ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
First, check whether /dev/shm is mounted inside the VE.&lt;br /&gt;
&#039;cat /proc/mounts&#039; should show something like this:&lt;br /&gt;
 tmpfs /dev/shm tmpfs rw 0 0&lt;br /&gt;
&lt;br /&gt;
If /dev/shm is not mounted, you have two ways to solve the issue:&lt;br /&gt;
1. execute the following command inside the VE (doesn&#039;t require a VE reboot):&lt;br /&gt;
 mount -t tmpfs none /dev/shm&lt;br /&gt;
2. add the following line to /etc/fstab inside the VE and reboot it:&lt;br /&gt;
 tmpfs         /dev/shm        tmpfs           defaults        0 0&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You can have a mounted but not running VE. Just run:&lt;br /&gt;
 vzctl mount &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Debian can&#039;t get on the network ===&lt;br /&gt;
When a Debian system can’t get on the network, and you try:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set 1046 --ipadd 69.55.227.117&lt;br /&gt;
Adding IP address(es): 69.55.227.117&lt;br /&gt;
Failed to bring up lo.&lt;br /&gt;
Failed to bring up venet0.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They probably removed iproute package, which must be the one from swsoft. To restore:&lt;br /&gt;
&amp;lt;pre&amp;gt;# dpkg -i --veid=1046 --admindir=/vz1/private/1046/root/var/lib/dpkg --instdir=/vz1/private/1046/root/ /vz/template/debian-3.0/iproute_20010824-8_i386.vz.deb&lt;br /&gt;
(Reading database ... 16007 files and directories currently installed.)&lt;br /&gt;
Preparing to replace iproute 20010824-8 (using .../iproute_20010824-8_i386.vz.deb) ...&lt;br /&gt;
Unpacking replacement iproute ...&lt;br /&gt;
Setting up iproute (20010824-8) ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then restart their ve&lt;br /&gt;
&lt;br /&gt;
=== CentOS can&#039;t get on the network ===&lt;br /&gt;
&lt;br /&gt;
When a CentOS system was running but can&#039;t get on the network, they probably updated iproute via yum.&lt;br /&gt;
&lt;br /&gt;
Copy an older version of the iproute rpm to their VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cp -p /root/iproute/iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm /vz2/private/&amp;lt;1527&amp;gt;/fs/root&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enter their ve, install the rpm, restart the network, and prevent yum from updating iproute.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ve &amp;lt;1527&amp;gt;&lt;br /&gt;
rpm -i --force iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm&lt;br /&gt;
rpm -q iproute&lt;br /&gt;
rpm -e &amp;lt;new iproute rpm&amp;gt;&lt;br /&gt;
service network restart&lt;br /&gt;
echo &amp;quot;exclude=iproute&amp;quot; &amp;gt;&amp;gt;/etc/yum/ip-exclude&lt;br /&gt;
echo &amp;quot;include=/etc/yum/ip-exclude&amp;quot; &amp;gt;&amp;gt;/etc/yum.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
in a ve i do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /&lt;br /&gt;
du -h .&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and get: 483M    .&lt;br /&gt;
&lt;br /&gt;
i do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;bash-2.05a# df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  2.3G  1.7G  56% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
how can this be?&lt;br /&gt;
&lt;br /&gt;
Is it possible that the quota file was corrupted somehow? Please try:&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl stop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota drop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota init &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzctl start &amp;lt;VEID&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
How to stop vz from starting after reboot:&lt;br /&gt;
&lt;br /&gt;
 VIRTUOZZO=no &lt;br /&gt;
in &lt;br /&gt;
 /etc/sysconfig/vz&lt;br /&gt;
&lt;br /&gt;
To start: &lt;br /&gt;
 service vz start&lt;br /&gt;
(after setting VIRTUOZZO=yes in /etc/sysconfig/vz)&lt;br /&gt;
&lt;br /&gt;
service vz restart will do some kind of &#039;soft reboot&#039; -- restart all&lt;br /&gt;
VPSes and reload modules without rebooting the node&lt;br /&gt;
&lt;br /&gt;
if you need to shut down all VPSes really really fast, run killall -9 init&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Postfix tip:&lt;br /&gt;
&lt;br /&gt;
You may want to tweak settings: default_process_limit=10&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* uptime/load that vz users see isn’t what we see- it&#039;s their own load&lt;br /&gt;
* Vz overwrites resolv.conf on every boot with what’s in the ve config file. It also overwrites the hostname&lt;br /&gt;
* Quotas in vz: set Gid_limit in the ve config file and restart the ve. The telltale errors leading up to this are &#039;out of files&#039; failures with no corresponding inode or disk-space shortage. Set it to 2000 in the conf, then restart the ve&lt;br /&gt;
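A sketch of that edit, demonstrated against a scratch copy of a conf file (on a real node the file would be /vzconf/VEID.conf, and the parameter name/casing there may differ):

```shell
# Demo of bumping a per-ve GID limit in a config file, using a scratch
# file. GID_LIMIT is an assumed key name; on a real node you would follow
# the edit with a restart of the ve.
conf=$(mktemp)
echo 'GID_LIMIT="500"' > "$conf"

# rewrite the limit to 2000 (GNU sed in-place edit)
sed -i 's/^GID_LIMIT=.*/GID_LIMIT="2000"/' "$conf"

result=$(cat "$conf")
echo "$result"
rm -f "$conf"
```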
&lt;br /&gt;
= Virtuozzo VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
== vm ==&lt;br /&gt;
&lt;br /&gt;
To create a new Linux VPS&lt;br /&gt;
&lt;br /&gt;
* get an IP address from mgmt screen&lt;br /&gt;
* run df -h to figure out which disk has enough space&lt;br /&gt;
* run vm with no arguments to get syntax and exact template name&lt;br /&gt;
* use col0xxxx for hostname if they don’t give you a hostname&lt;br /&gt;
* copy over dir, veid, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
 vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; )&lt;br /&gt;
&lt;br /&gt;
== disable ==&lt;br /&gt;
To disable a customer&#039;s Virtuozzo VPS server.&lt;br /&gt;
&lt;br /&gt;
* stop the server&lt;br /&gt;
  vzctl stop &amp;lt;veid&amp;gt;&lt;br /&gt;
* prevent restart on server reboot and user restart via Control Panel&lt;br /&gt;
  vzctl set &amp;lt;veid&amp;gt; --disabled=yes --offline_management=no --save&lt;br /&gt;
&lt;br /&gt;
== cancelve ==&lt;br /&gt;
When a customer cancels all service.&lt;br /&gt;
&lt;br /&gt;
 cancelve &amp;lt;veid&amp;gt;&lt;br /&gt;
this will:&lt;br /&gt;
* stop a ve&lt;br /&gt;
* check for backups (offer to remove them from the backup server &lt;br /&gt;
and the backup.config)&lt;br /&gt;
* rename the private dir&lt;br /&gt;
* check for PTR, provide the commands to reset to default&lt;br /&gt;
* and rename the ve’s config&lt;br /&gt;
* remind you to remove firewall rules&lt;br /&gt;
* remind you to remove DNS entries&lt;br /&gt;
&lt;br /&gt;
== ipadd ==&lt;br /&gt;
 ipadd  &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
adds ip(s) to a ve&lt;br /&gt;
&lt;br /&gt;
== ipdel ==&lt;br /&gt;
 ipdel &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
removes ip(s) from a ve&lt;br /&gt;
&lt;br /&gt;
== vc ==&lt;br /&gt;
 vc &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
display the contents of the vz config file for the &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;cat /vzconf/&amp;lt;veid&amp;gt;.conf&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vl ==&lt;br /&gt;
 vl&lt;br /&gt;
&lt;br /&gt;
List all virtual environments running on this machine.&lt;br /&gt;
&lt;br /&gt;
displays a list of ve #’s, one per line (handy for use in a for loop)&lt;br /&gt;
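The for-loop pattern looks like this (vl is stubbed here with sample output, since the real command only exists on a virt):

```shell
# Stub standing in for the real vl, which prints one veid per line.
vl() { printf '134\n16027\n16055\n'; }

# Typical pattern: run something against every ve on the box.
count=0
for veid in $(vl); do
    echo "would run: vzctl status $veid"
    count=$((count + 1))
done
```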
&lt;br /&gt;
== vp ==&lt;br /&gt;
 vp &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
show processes running under this &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzps auxww -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vpe ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vpe &amp;lt;veid&amp;gt; &lt;br /&gt;
this allowed you to do a vp when a ve was running out of control; it is the equivalent of (deprecated since vp now operates outside the VPS): &lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set &amp;lt;veid&amp;gt; --kmemsize 2100000:2200000&lt;br /&gt;
vzctl exec &amp;lt;veid&amp;gt; ps auxw&lt;br /&gt;
vzctl set &amp;lt;veid&amp;gt; --kmemsize (ve’s orig lvalue):(ve’s orig hvalue)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vt ==&lt;br /&gt;
 vt &amp;lt;veid&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vztop -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vr ==&lt;br /&gt;
 vr &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart a &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl stop &amp;lt;veid&amp;gt;; vzctl start &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
You can run this even if the ve is down - the stop command will just fail&lt;br /&gt;
&lt;br /&gt;
== vs ==&lt;br /&gt;
 vs [veid]&lt;br /&gt;
displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve configured on the system (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;)&lt;br /&gt;
If passed an argument, gives the status for just that ve. &lt;br /&gt;
A running system looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 16066 exist mounted running&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running (but does exist) looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 9990 exist unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running and doesn’t exist looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 421 deleted unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
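Since the output is one fixed-format line per ve, it is easy to script against; a sketch using the sample lines above (inlined here rather than calling the real vs):

```shell
# Pick out only the running ve's from vs-style status output.
vs_sample='VEID 16066 exist mounted running
VEID 9990 exist unmounted down
VEID 421 deleted unmounted down'
running=$(echo "$vs_sample" | awk '$5 == "running" {print $2}')
echo "$running"
```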
&lt;br /&gt;
== vs2 ==&lt;br /&gt;
 vs2 [veid]&lt;br /&gt;
this is similar to vs in that it displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve,&lt;br /&gt;
but its list comes from doing an ls on the data dirs rather than reading the config files. This is meant to catch &lt;br /&gt;
the rare case where a ve exists on disk but is not configured. &lt;br /&gt;
&lt;br /&gt;
== vw ==&lt;br /&gt;
 vw [veid]&lt;br /&gt;
displays the output of ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;’ (the equivalent of &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; w&amp;lt;/tt&amp;gt;) for each configured ve (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;). Useful for determining which ve is contributing to a heavily-loaded system.&lt;br /&gt;
If passed an argument, gives ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;‘ output for just that ve. &lt;br /&gt;
Ex:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt2 etc]# vw&lt;br /&gt;
134&lt;br /&gt;
 10:52pm  up 79 days,  6:14,  0 users,  load average: 0.02, 0.02, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16027&lt;br /&gt;
  2:52pm  up 7 days, 19:54,  0 users,  load average: 0.00, 0.00, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16055&lt;br /&gt;
  2:52pm  up 79 days,  6:38,  0 users,  load average: 0.00, 0.04, 0.07&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vwe ==&lt;br /&gt;
 vwe [constraint]&lt;br /&gt;
just like &amp;lt;tt&amp;gt;vw&amp;lt;/tt&amp;gt;, but takes a constraint as an argument and only shows ve’s with loads &amp;gt;= the constraint provided. If no constraint is provided, 1 is used by default&lt;br /&gt;
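The filtering vwe does can be sketched like this, with sample w output inlined (the real script reads live vw output):

```shell
# Keep only w-style lines whose 1-minute load average is >= the threshold.
threshold=1
busy=$(awk -v t="$threshold" -F'load average: ' '
    NF > 1 {
        split($2, la, ", ")
        if (la[1] + 0 >= t) print
    }' <<'EOF'
 10:52pm  up 79 days,  6:14,  0 users,  load average: 0.02, 0.02, 0.00
  2:52pm  up 7 days, 19:54,  0 users,  load average: 2.31, 1.80, 1.10
EOF
)
echo "$busy"
```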
&lt;br /&gt;
== vzs ==&lt;br /&gt;
 vzs [veid]&lt;br /&gt;
displays the beancounter status for all ve’s, or a particular ve if an argument is passed&lt;br /&gt;
&lt;br /&gt;
== ve ==&lt;br /&gt;
 ve &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
enter a &amp;lt;veid&amp;gt; as root.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl enter &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vx ==&lt;br /&gt;
 vx &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&lt;br /&gt;
&lt;br /&gt;
execute a single command in the &amp;lt;veid&amp;gt; environment.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;/usr/sbin/vzctl exec &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs [count]&lt;br /&gt;
a script which outputs a continuous report (or a certain number of reports if an option is passed) of processes stuck in the D state and which VPS’s those procs belong to.&lt;br /&gt;
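The core of what dprocs reports can be sketched like this (sample ps output inlined; the real script uses vzps so it can attribute each proc to a VPS):

```shell
# Find processes whose state field starts with D (uninterruptible sleep).
ps_sample='STAT PID COMM
Ss   1   init
D    812 nfsd
R    923 httpd
D+   944 tar'
dstate=$(echo "$ps_sample" | awk 'NR > 1 && $1 ~ /^D/ {print $2, $3}')
echo "$dstate"
```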
&lt;br /&gt;
== setmem ==&lt;br /&gt;
 setmem VEID &amp;lt;256|512|768|1024|1280|1536|2048&amp;gt; [burst GB]&lt;br /&gt;
adjusts the memory resources for the VE. If this is on VZ ver &amp;gt;4, you will also be prompted to provide a burst ram amount. Sample usage:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# setmem 21671 1024 4&lt;br /&gt;
/usr/sbin/vzctl set 21671 --ram 1024m --swap 3072m --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== afacheck.sh ==&lt;br /&gt;
 afacheck.sh&lt;br /&gt;
displays the health/status of containers and mirrors on an adaptec card (currently quar1, tempvirt1-2, virt9, virt10); all others are LSI&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update virt scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be setup as a cronjob to watch processes on a virt when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== findbackuppigs.pl ==&lt;br /&gt;
 findbackuppigs.pl&lt;br /&gt;
looks for files larger than 50MB which customers have asked us to backup. Emails matches&lt;br /&gt;
to linux@johncompanies.com&lt;br /&gt;
&lt;br /&gt;
== gatherlinux.pl ==&lt;br /&gt;
 gatherlinux.pl&lt;br /&gt;
gathers up data about ve’s configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficgather.pl ==&lt;br /&gt;
 linuxtrafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by ve to support@johncompanies.com and payments@johncompanies.com.&lt;br /&gt;
Optional arguments are year and month (must be in the past). If not passed, assumes last month. Relies on &lt;br /&gt;
traffic logs created by netstatreset and netstatbackup&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch.pl ==&lt;br /&gt;
 linuxtrafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo versions &amp;lt;= 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch2.pl ==&lt;br /&gt;
 linuxtrafficwatch2.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo version 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to the load mrtg graphs; executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== mb (linux) ==&lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== migrate ==&lt;br /&gt;
 migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt;, a util to seamlessly move a ve from one host to another. This wrapper was written because Virtuozzo version 2.6 had a bug where the ve’s ip(s) on the src system were not properly removed from the arp/route tables; this script mitigates that. Since it makes multiple ssh connections to the target host, it’s a good idea to put the pub key for the src system in the authorized_keys file on the target host. In addition, it emails ve owners when their migration starts and stops (if they place email addresses in a file on their system: /migrate_notify). To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
== migrateonline ==&lt;br /&gt;
 migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is the same as migrate but will migrate a ve in &amp;lt;tt&amp;gt;--online&amp;lt;/tt&amp;gt; mode, which means it won’t be shut down at the end of the migration. This only works when migrating ve’s between 2 machines running a 2.6 kernel (currently tempvirt1-2, virt16-19, virt12). If you get an error that the machine you’re trying to migrate to has a different CPU or features, etc., then you have to edit the file and add the -f switch to the vzmigrate line; you can basically ignore this kind of warning (but never ignore a warning about missing templates on the destination node). NOTE: This edit (if made to migrateonline) will be overwritten by the base script during each night’s backup.&lt;br /&gt;
&lt;br /&gt;
== netstatbackup ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup &lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== netstatbackup2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== netstatreset ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset&lt;br /&gt;
writes traffic count data to a logfile and resets counters to 0. Works on virtuozzo versions 2.5.x &lt;br /&gt;
&lt;br /&gt;
== netstatreset2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatchlinux ==&lt;br /&gt;
 orphanedbackupwatchlinux &lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to &lt;br /&gt;
delete them&lt;br /&gt;
&lt;br /&gt;
== rsync.backup (linux) ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== startvirt.pl ==&lt;br /&gt;
 startvirt.pl&lt;br /&gt;
forks off start ve commands, keeping 6 running at a time. This is not to be used on systems where fastboot is enabled, as it circumvents the benefit of the fastboot. The script will occasionally not exit gracefully and will continue to use up CPU, so it should be watched. Also, don’t exit from the script till you’re sure all ve’s are started; if you do, you need to start them manually and may have to free up locks. On some systems, the startvirt script doesn’t exit cleanly and you have to ^C out of it. Be careful though: doing so can leave some VE’s in an odd bootup state and you may need to ‘vr’ them manually. You should check to see which ve’s aren’t running and/or confirm all have started when ^C’ing out of startvirt.&lt;br /&gt;
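The keep-several-running idea can be sketched as a batching loop (sleep stands in for the real vzctl start; the actual script keeps a rolling 6 in flight rather than strict batches):

```shell
# Start ve's in batches of 6, waiting for each batch to finish before
# launching more. start_ve is a stand-in for: vzctl start "$veid"
start_ve() { sleep 0.1; }
batch=0
for veid in 101 102 103 104 105 106 107 108; do
    start_ve "$veid" &
    batch=$((batch + 1))
    if [ "$batch" -eq 6 ]; then
        wait        # let this batch drain before starting more
        batch=0
    fi
done
wait
```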
&lt;br /&gt;
== taskdone (linux) ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was &lt;br /&gt;
executed as the subject&lt;br /&gt;
&lt;br /&gt;
== vb (linux) ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/sbin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vemakeXX ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemakerh9 &lt;br /&gt;
ve create script for RH9 (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian30 &lt;br /&gt;
ve create script for debian 3.0 (Woody) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian31 &lt;br /&gt;
ve create script for debian 3.1 (Sarge) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian40 &lt;br /&gt;
ve create script for debian 4.0 (Etch) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakefedora, vemakefedora2, vemakefedora4, vemakefedora5, vemakefedora6, vemakefedora7&lt;br /&gt;
ve create script for fedora core 1, 2, 4, 5, 6, 7 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakecentos3, vemakecentos4&lt;br /&gt;
ve create script for centos 3, 4 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakesuse, vemakesuse93, vemakesuse100&lt;br /&gt;
ve create script for suse 9.2, 9.3, 10.0 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakeubuntu5, vemakeubuntu606, vemakeubuntu610, vemakeubuntu704&lt;br /&gt;
ve create script for ubuntu 5.10, 6.06, 6.10, 7.04 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
== vemove ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemove &amp;lt;veid&amp;gt; &amp;lt;target_ip&amp;gt; &amp;lt;/vz/private/123&amp;gt;&lt;br /&gt;
this script simplifies the old way of moving ve’s from one system to another - in short moving a ve to or from a virt running virtuozzo &amp;lt; 2.6.x&lt;br /&gt;
It’s the equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;tar cfpP - &amp;lt;veid&amp;gt; --ignore-failed-read | (ssh -2 -c arcfour &amp;lt;target_ip&amp;gt; &amp;quot;split - -b 1024m &amp;lt;/vz/private/123&amp;gt;.tar&amp;quot; )&amp;lt;/tt&amp;gt;&lt;br /&gt;
This should only be used if migrate/vzmigrate can’t be used. &lt;br /&gt;
&lt;br /&gt;
== vim.watchdog ==&lt;br /&gt;
 vim.watchdog &lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu. Works on virtuozzo versions 2.5.x&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog2 ==&lt;br /&gt;
 vim.watchdog2&lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu.&lt;br /&gt;
Works on virtuozzo versions 2.6.x.&lt;br /&gt;
&lt;br /&gt;
== vzmigrate ==&lt;br /&gt;
 vzmigrate &amp;lt;target_ip&amp;gt; -r no &amp;lt;veid&amp;gt;:[dst veid]:[dst /vzX/private/veid]:[dst /vzX/root/veid]&lt;br /&gt;
(this is the raw command &amp;quot;wrapped&amp;quot; by migrate/migrateonline) this will seamlessly move a ve from one host to another. The ve will run for the duration of the migration until the very end, when it’s shut down, its ip moved, and it is started up on the target system. The filesystem on the src will remain. This should be watched: occasionally the move will time out and leave the system shut down. If target private and root aren’t specified it just puts it in /vz. Only works when both systems are running virtuozzo 2.6.x&lt;br /&gt;
&lt;br /&gt;
== vztrafdump.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions &amp;lt;= 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== vztrafdump2.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump2.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== addtun ==&lt;br /&gt;
 addtun &amp;lt;veid&amp;gt;&lt;br /&gt;
Adds a tun device to a ve.&lt;br /&gt;
&lt;br /&gt;
== bwcap ==&lt;br /&gt;
 bwcap &amp;lt;veid&amp;gt; &amp;lt;kbps&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;bwcap 1234 512&amp;lt;/tt&amp;gt;&lt;br /&gt;
Caps a VE’s bandwidth to the amount given&lt;br /&gt;
&lt;br /&gt;
== setdisk ==&lt;br /&gt;
 setdisk &amp;lt;veid&amp;gt; &amp;lt;diskspace in GB&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;setdisk 1234 5&amp;lt;/tt&amp;gt;&lt;br /&gt;
Gives a VE a given amount of disk space&lt;br /&gt;
&lt;br /&gt;
== vdf ==&lt;br /&gt;
 vdf &amp;lt;veid&amp;gt; &lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; df -h&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vdff ==&lt;br /&gt;
 vdff&lt;br /&gt;
runs a (condensed) vdf for all ve’s in your pwd (must be run from /vz/privateN)&lt;br /&gt;
&lt;br /&gt;
== mvbackups ==&lt;br /&gt;
 mvbackups &amp;lt;veid&amp;gt; &amp;lt;target_machine&amp;gt; (virt1) &amp;lt;target_dir&amp;gt; (vz1)&lt;br /&gt;
moves backups from one location to another on the backup server, and provides you with option to remove entries from current backup.config, and simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== checkquota ==&lt;br /&gt;
 checkquota&lt;br /&gt;
for all the ve’s in the cwd (run from /vz/private, /vz1/private, etc) reports what vz quota says they’re using and what the actual usage is (as reported by du)&lt;br /&gt;
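The comparison it makes per ve amounts to this (sample numbers; the real script pulls the first figure from vz quota and the second from du):

```shell
# Report whether quota-reported usage matches measured usage (in MB).
compare_usage() {   # args: veid quota_mb du_mb
    if [ "$2" -ne "$3" ]; then
        echo "$1: quota says ${2}M, du says ${3}M (mismatch)"
    else
        echo "$1: ok (${2}M)"
    fi
}
line1=$(compare_usage 16066 2300 483)
line2=$(compare_usage 9990 1024 1024)
echo "$line1"
echo "$line2"
```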
&lt;br /&gt;
== clearquota ==&lt;br /&gt;
 clearquota &amp;lt;veid&amp;gt;&lt;br /&gt;
Recalculates a ve’s quota, prints out the usage before and after. The equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;vdf &amp;lt;veid&amp;gt;; v stop &amp;lt;veid&amp;gt;; vzquota drop &amp;lt;veid&amp;gt;; v start &amp;lt;veid&amp;gt;; vdf &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs&lt;br /&gt;
Sometimes the servers have a large number of processes stuck in the D state; this script shows (every 3 secs) which VE’s have D procs, which procs&lt;br /&gt;
are stuck, and a running average of the top “offenders”&lt;br /&gt;
&lt;br /&gt;
== vzstat ==&lt;br /&gt;
 vzstat&lt;br /&gt;
sort of like top for VZ. Sort VEs by CPU usage by pressing the &#039;o&#039; and then &#039;c&#039; keys&lt;br /&gt;
&lt;br /&gt;
== stopvirt ==&lt;br /&gt;
 stopvirt&lt;br /&gt;
will stop VEs as fast as it can, 6 at a time. May not exit when complete so you should watch [[#vzstat|vzstat]] in another window.&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2223</id>
		<title>VPS Management</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2223"/>
		<updated>2020-04-03T17:25:02Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Ganeti Insall a VPS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Common Problems =&lt;br /&gt;
== Login to any machine without a password ==&lt;br /&gt;
&lt;br /&gt;
This is possible via the use of ssh keys. The process is thus:&lt;br /&gt;
&lt;br /&gt;
1. place the public key for your user (root@mail) in the /root/.ssh/authorized_keys file on the server you wish to login to&lt;br /&gt;
 cat /root/.ssh/id_dsa.pub&lt;br /&gt;
(paste that into authorized_keys on the target server). If the file doesn&#039;t exist, create it.&lt;br /&gt;
&lt;br /&gt;
2. enable root login (usually only applies to FreeBSD). Edit the /etc/ssh/sshd_config on the target server and change:&lt;br /&gt;
&amp;lt;tt&amp;gt;#PermitRootLogin no&amp;lt;/tt&amp;gt;&lt;br /&gt;
to&lt;br /&gt;
&amp;lt;tt&amp;gt;PermitRootLogin yes&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Restart the sshd on the target machine. First, find the sshd process: &lt;br /&gt;
 jailps &amp;lt;hostname&amp;gt; | grep sshd &lt;br /&gt;
or &lt;br /&gt;
 vp &amp;lt;VEID&amp;gt; | grep sshd&lt;br /&gt;
&lt;br /&gt;
Look for the process resembling:&lt;br /&gt;
 root     17296  0.0  0.0  5280 1036 ?        Ss    2011   4:27 /usr/sbin/sshd &lt;br /&gt;
(this is the sshd)&lt;br /&gt;
&lt;br /&gt;
Not:&lt;br /&gt;
 root      6270  0.5  0.0  6808 2536 ?        Ss   14:33   0:00 sshd: root [priv]&lt;br /&gt;
(this is an sshd child- someone already ssh&#039;d in as root)&lt;br /&gt;
&lt;br /&gt;
Restart the sshd: &lt;br /&gt;
 kill -1 &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex:&lt;br /&gt;
 kill -1 17296&lt;br /&gt;
&lt;br /&gt;
You may now ssh in.&lt;br /&gt;
&lt;br /&gt;
Once you&#039;re done, IF you enabled root login, you should repeat steps 2 and 3 to disable root logins.&lt;br /&gt;
&lt;br /&gt;
== Letting someone in who has locked themselves out (killed sshd, lost pwd) ==&lt;br /&gt;
&lt;br /&gt;
There are two ways people frequently lock themselves out - either they forget a password, or they kill off sshd somehow.&lt;br /&gt;
&lt;br /&gt;
These are actually both fairly easy to solve.  First, let&#039;s say someone kills off their sshd, or somehow mangles /etc/ssh/sshd_config such that it no longer lets them in.&lt;br /&gt;
&lt;br /&gt;
Their email may be very short, or it may have all sorts of details about how you should fix sshd_config to let them in ... just ignore all of this. They can fix their own mangled sshd.  Fixing this is very simple.  First, edit the /etc/inetd.conf on their system and uncomment the telnet line:&lt;br /&gt;
&lt;br /&gt;
 telnet stream  tcp     nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
 #telnet stream  tcp6    nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
&lt;br /&gt;
(just leave the tcp6 version of telnet commented)&lt;br /&gt;
&lt;br /&gt;
Then, use jailps to list the processes on their system, and find their inetd process.  Then simply:&lt;br /&gt;
&lt;br /&gt;
 kill -HUP (pid)&lt;br /&gt;
&lt;br /&gt;
where (pid) is the PID of their inetd process.  Now they have telnet running on their system and they can log in and do whatever they need to do.&lt;br /&gt;
&lt;br /&gt;
The only complications that could occur are:&lt;br /&gt;
&lt;br /&gt;
a) their firewall config on our firewall has port 23 blocked, in which case you will need to open that - will be covered in a different lesson.&lt;br /&gt;
&lt;br /&gt;
b) they are not running inetd, so you can&#039;t HUP it.  If this happens, edit their /etc/rc.conf, add the inetd_enable=&amp;quot;YES&amp;quot; line, and then kill&lt;br /&gt;
their jail with /tmp/jailkill.pl - then restart their jail with the jail line from their quad/safe file.  Easy.&lt;br /&gt;
&lt;br /&gt;
If they have forgotten a password,&lt;br /&gt;
&lt;br /&gt;
On 6.x+ you can reset their password with:&lt;br /&gt;
 jexec &amp;lt;jailID from jls&amp;gt; passwd root&lt;br /&gt;
&lt;br /&gt;
Note: the default password for 6.x jails is 8ico2987, for 4.x it is p455agfa&lt;br /&gt;
&lt;br /&gt;
On 4.x, you need to cd to their etc directory&lt;br /&gt;
... for instance:&lt;br /&gt;
&lt;br /&gt;
 cd /mnt/data2/198.78.65.136-col00261-DIR/etc&lt;br /&gt;
&lt;br /&gt;
and run:&lt;br /&gt;
&lt;br /&gt;
 vipw -d .&lt;br /&gt;
&lt;br /&gt;
Then paste in these two lines (there&#039;s a paste with these):&lt;br /&gt;
&lt;br /&gt;
 root:$1$krszPxhk$xkCepSnz3mIikT3vCtJCt0:0:0::0:0:Charlie &amp;amp;:/root:/bin/csh&lt;br /&gt;
 user:$1$Mx9p5Npk$QdMU6c8YQqp2FW2M3irEh/:1001:1001::0:0:User &amp;amp;:/home/user:/bin/sh&lt;br /&gt;
&lt;br /&gt;
overwriting the lines they already have for &amp;quot;user&amp;quot; and &amp;quot;root&amp;quot; - then just tell them that both user and root have been reset to the default password of p455agfa.&lt;br /&gt;
&lt;br /&gt;
For linux, just passwd inside shell or &lt;br /&gt;
 vzctl set &amp;lt;veid&amp;gt; --userpasswd root:p455agfa --save&lt;br /&gt;
&lt;br /&gt;
Starting in 2009 we began giving out randomized passwords for FreeBSD and Linux as the default password. That is stored with each system in Mgmt. You should look for and reset the password to that password in the event of a reset and refer the customer to use their original password from their welcome email- this way we don’t have to send the password again via email (in clear text).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== sendmail can’t be contacted from ext ip (only locally) ==&lt;br /&gt;
&lt;br /&gt;
By default redhat puts this line in sendmail.mc:&lt;br /&gt;
&lt;br /&gt;
 DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
which makes it only answer on localhost.  Comment it out like:&lt;br /&gt;
&lt;br /&gt;
 dnl DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
and then rebuild sendmail.cf with:&lt;br /&gt;
&lt;br /&gt;
 m4 /etc/mail/sendmail.mc &amp;gt; /etc/sendmail.cf&lt;br /&gt;
&lt;br /&gt;
== virt doesn’t properly let go of ve’s ip(s) when moved to another system ==&lt;br /&gt;
&lt;br /&gt;
On virtuozzo 2.6 systems, it&#039;s been observed that when moving ips from one virt to another, sometimes the routing table will not get updated to reflect the removal of the ip addresses.&lt;br /&gt;
&lt;br /&gt;
A recent example was a customer that was moving to a new ve on a new virt and the ip addresses were traded between the two ve&#039;s.  After the trade the two systems were not able to talk to each other.  When looking at the routing table for the old system all the ip addresses were still in the routing table as being local, like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;netstat -rn | grep 69.55.225.149&lt;br /&gt;
69.55.225.149   0.0.0.0         255.255.255.255 UH       40 0          0 venet0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was preventing traffic to the other system from being routed properly.&lt;br /&gt;
The solution is to manually delete the route:&lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.225.149 gw 0.0.0.0&lt;br /&gt;
&lt;br /&gt;
Supposedly, this was fixed in 2.6.1&lt;br /&gt;
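To spot such leftovers, you can filter netstat -rn for host routes still pointing at venet0; a sketch with sample output inlined:

```shell
# Each printed address is a candidate for: route delete ADDR gw 0.0.0.0
netstat_sample='69.55.225.149   0.0.0.0         255.255.255.255 UH       40 0          0 venet0
69.55.0.0       69.55.225.1     255.255.0.0     UG        0 0          0 eth0'
stale=$(echo "$netstat_sample" | awk '$3 == "255.255.255.255" && $NF == "venet0" {print $1}')
echo "$stale"
```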
&lt;br /&gt;
== sshd on FreeBSD 6.2 segfaults ==&lt;br /&gt;
&lt;br /&gt;
First try to reinstall ssh&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/secure&lt;br /&gt;
cd lib/libssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.sbin/sshd&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.bin/ssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Failing that, find the library that’s messed up:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;ldd /usr/sbin/sshd&lt;br /&gt;
         libssh.so.3 =&amp;gt; /usr/lib/libssh.so.3 (0x280a3000) &lt;br /&gt;
         libutil.so.5 =&amp;gt; /lib/libutil.so.5 (0x280d8000) &lt;br /&gt;
         libz.so.3 =&amp;gt; /lib/libz.so.3 (0x280e4000) &lt;br /&gt;
         libwrap.so.4 =&amp;gt; /usr/lib/libwrap.so.4 (0x280f5000) &lt;br /&gt;
         libpam.so.3 =&amp;gt; /usr/lib/libpam.so.3 (0x280fc000) &lt;br /&gt;
         libbsm.so.1 =&amp;gt; /usr/lib/libbsm.so.1 (0x28103000) &lt;br /&gt;
         libgssapi.so.8 =&amp;gt; /usr/lib/libgssapi.so.8 (0x28112000) &lt;br /&gt;
         libkrb5.so.8 =&amp;gt; /usr/lib/libkrb5.so.8 (0x28120000) &lt;br /&gt;
         libasn1.so.8 =&amp;gt; /usr/lib/libasn1.so.8 (0x28154000) &lt;br /&gt;
         libcom_err.so.3 =&amp;gt; /usr/lib/libcom_err.so.3 (0x28175000) &lt;br /&gt;
         libroken.so.8 =&amp;gt; /usr/lib/libroken.so.8 (0x28177000) &lt;br /&gt;
         libcrypto.so.4 =&amp;gt; /lib/libcrypto.so.4 (0x28183000) &lt;br /&gt;
         libcrypt.so.3 =&amp;gt; /lib/libcrypt.so.3 (0x28276000) &lt;br /&gt;
         libc.so.6 =&amp;gt; /lib/libc.so.6 (0x2828e000) &lt;br /&gt;
         libmd.so.3 =&amp;gt; /lib/libmd.so.3 (0x28373000)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
md5 them and compare to other jail hosts or jails running on host&lt;br /&gt;
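A sketch of pulling the library paths out of ldd output so they can be checksummed on this host and a known-good one (sample ldd output inlined; pipe the paths to md5 on FreeBSD or md5sum on Linux):

```shell
# Extract resolved library paths from ldd-style output.
ldd_sample='        libssh.so.3 => /usr/lib/libssh.so.3 (0x280a3000)
        libcrypto.so.4 => /lib/libcrypto.so.4 (0x28183000)'
libs=$(echo "$ldd_sample" | awk '$2 == "=>" {print $3}')
echo "$libs"
```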
&lt;br /&gt;
for libcrypto reinstall:&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/src/crypto&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Moving a server to a different account in the management system ==&lt;br /&gt;
&lt;br /&gt;
This is a tough one. You gotta touch a bunch of tables in the mysql database.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  systems&lt;br /&gt;
  subs&lt;br /&gt;
  ipmap&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
basically, get the sysid and go off that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
update systems set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update subs set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update ipmap set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You then need to rename the server disk space to change the old ID to the new one.&lt;br /&gt;
&lt;br /&gt;
For a jail -&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stopjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;     69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;-DIR 69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;-DIR&lt;br /&gt;
vi /usr/local/jail/bin/quad1       [Update the ID in the quad file]&lt;br /&gt;
startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a virt -&lt;br /&gt;
You will have to &lt;br /&gt;
* stop the virt &lt;br /&gt;
* rename the directories &lt;br /&gt;
* modify the config file /vzconf/&amp;lt;veid&amp;gt;.conf &lt;br /&gt;
* rename the virt conf file &lt;br /&gt;
* restart the new virt.&lt;br /&gt;
&lt;br /&gt;
That ought to do it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Re-enable a customer account in the management system ==&lt;br /&gt;
&lt;br /&gt;
For example to re-instate col03438, In mysql:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
show tables ;&lt;br /&gt;
select * from customers where cid=&#039;col03438&#039; ;&lt;br /&gt;
update customers set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from  systems where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set cancel_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from subs where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ganeti VPS =&lt;br /&gt;
&lt;br /&gt;
This is how to create a VPS on the Lamphost ganeti cluster.&lt;br /&gt;
I have created scripts for creating a FreeBSD VPS and one for&lt;br /&gt;
creating a Linux VPS.&lt;br /&gt;
&lt;br /&gt;
== Ganeti Install a VPS ==&lt;br /&gt;
&lt;br /&gt;
To create a linux VPS use&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/new &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
( for example /root/new vps.example.com 30 2 69.55.238.10 centos-8 )&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To create a FreeBSD VPS, we have to do a full install.  This will create a &lt;br /&gt;
new VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/newfreebsd12 &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will then have to do the install via VNC.&lt;br /&gt;
&lt;br /&gt;
You need to customize the install by adding a &amp;quot;jci&amp;quot; account and a customer&lt;br /&gt;
account (col0XXXX) as well.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
You will need to install sudo.&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
pkg install sudo    # install sudo package&lt;br /&gt;
visudo              # to add wheel account to sudoers&lt;br /&gt;
vi /etc/group       # add jci and user acct to wheel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You will also have to set up a console in /etc/ttys.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vi /etc/ttys&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
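The exact getty line depends on the VM&#039;s serial device, but on a recent FreeBSD guest the entry to enable is typically something like the following (verify the tty name on the VPS itself before relying on it):&lt;br /&gt;

```
ttyu0 "/usr/libexec/getty 3wire" vt100 onifconsole secure
```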
&lt;br /&gt;
== Ganeti Re-Install a VPS ==&lt;br /&gt;
&lt;br /&gt;
Re-installing the OS will destroy all data on the VPS.&lt;br /&gt;
Login to the Ganeti master node (currently gn1).&lt;br /&gt;
&lt;br /&gt;
To reinstall the OS on a VPS, the VPS must be shut down.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance shutdown &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Do the reinstall.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance reinstall &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance startup &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to do the customization for the VPS.  This is&lt;br /&gt;
basically the same as the Dedicated server customization.&lt;br /&gt;
&lt;br /&gt;
= FreeBSD 10.1 VPS (jail5 and jail6) =&lt;br /&gt;
&lt;br /&gt;
This is information on how to manage the FreeBSD 10.1 VPS servers on Jail5.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 virtmake IP[,IP] &amp;lt;CID&amp;gt; hostname ipfw# email {FSP|FB|FS|FM|FP} [size in GB]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
For now, the last 2 digits of the CID must be unique.&lt;br /&gt;
&lt;br /&gt;
Must do a &amp;quot;vm start &amp;lt;CID&amp;gt;&amp;quot; after creating a VPS.&lt;br /&gt;
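The uniqueness rule above can be checked before running virtmake with a small pre-flight sketch (hypothetical helper; comparing against e.g. the directory names under /vms is an assumption):&lt;br /&gt;

```shell
# Sketch: warn if the last 2 digits of a new CID collide with a CID
# already on the host. Returns nonzero on a clash.
cid_suffix_clash() {
    new=$1; shift
    tail2=${new#"${new%??}"}    # last two characters of the new CID
    for existing in "$@"; do
        case $existing in
            *"$tail2") echo "clash: $existing shares suffix $tail2"; return 1 ;;
        esac
    done
    return 0
}
# e.g.: cid_suffix_clash col01574 $(ls /vms)
```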
&lt;br /&gt;
== Starting VPS ==&lt;br /&gt;
To start all vms on the system&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To start a single VPS &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== List running VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm status&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Attach to a user VPS console ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm attach &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To detach from a console, type tilde control-d (~ ctl-d).&lt;br /&gt;
&lt;br /&gt;
== Stop a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Restart a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm restart &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Modify a VPS ==&lt;br /&gt;
&lt;br /&gt;
To modify a VPS config, stop the VPS&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Mount the VPS data file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm mount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
cd /data/vm/private/&amp;lt;CID&amp;gt;/mount&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Modify the user files necessary.&lt;br /&gt;
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm umount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Notes on a new FBSD 10.x VPS (jail6) ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#! /bin/sh&lt;br /&gt;
#&lt;br /&gt;
#  new  &amp;lt;colo#&amp;gt;  &amp;lt;disk&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
memory=1024&lt;br /&gt;
&lt;br /&gt;
x=&amp;quot;$1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
disk=&amp;quot;$2&amp;quot;&lt;br /&gt;
if [ &amp;quot;y&amp;quot; = &amp;quot;y$disk&amp;quot; ] ; then   # default the disk size when $2 is empty&lt;br /&gt;
    disk=60&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
disk2=`expr $disk - 4`&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vm image provision 04dcdf96-d843-11e6-b131-001e67fcc64f col0$x&lt;br /&gt;
# vm configure col0$x&lt;br /&gt;
sed -e &amp;quot;s/memory=256M/memory=${memory}M/&amp;quot; /vms/col01114/col01114.conf &amp;gt;/vms/col0$x/col0$x.conf&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
# vm start col0$x&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
exit&lt;br /&gt;
 &lt;br /&gt;
vm console col0$x&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
# to increase the customer disk&lt;br /&gt;
#&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
&lt;br /&gt;
##  on the customer VPS&lt;br /&gt;
&lt;br /&gt;
gpart recover vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapoff /dev/vtbd0p3&lt;br /&gt;
gpart delete -i 3 vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
sysctl kern.geom.debugflags=16&lt;br /&gt;
gpart resize -i 2 -a 4k -s $disk2\G vtbd0&lt;br /&gt;
&lt;br /&gt;
growfs /dev/vtbd0p2&lt;br /&gt;
&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
gpart add -t freebsd-swap -a 4k vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapon /dev/vtbd0p3&lt;br /&gt;
&lt;br /&gt;
df -h&lt;br /&gt;
reboot&lt;br /&gt;
&lt;br /&gt;
vi /etc/rc.conf&lt;br /&gt;
  hostname&lt;br /&gt;
  IP Addr&lt;br /&gt;
passwd root&lt;br /&gt;
passwd user&lt;br /&gt;
clear history&lt;br /&gt;
  cp /dev/null /var/log/utx.log&lt;br /&gt;
  cp /dev/null ~root/.history&lt;br /&gt;
  history -c&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= FreeBSD Jails =&lt;br /&gt;
&lt;br /&gt;
== Starting jails: Quad/Safe Files ==&lt;br /&gt;
&lt;br /&gt;
FreeBSD customer systems do not start up automatically at boot time.  When one of our freebsd machines boots up, it boots up, and does nothing else. To start jails, we put the commands to start each jail into a shell script(s) and run the script(s). Jail startup is something that needs to be actively monitored, which is why we don’t just run the script automatically. More on monitoring later.&lt;br /&gt;
&lt;br /&gt;
NOTE: &amp;gt;=7.x we have moved to 1 quad file: &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. Startups are not done by running each quad, but rather [[#startalljails|startalljails]] which relies on the contents of &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. The specifics of this are lower in this article. What follows here applies for pre 7.x systems.&lt;br /&gt;
&lt;br /&gt;
There are eight files in &amp;lt;tt&amp;gt;/usr/local/jail/rc.d&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail3# ls /usr/local/jail/rc.d/&lt;br /&gt;
quad1   quad2   quad3   quad4   safe1   safe2   safe3   safe4&lt;br /&gt;
jail3#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
four quad files and four safe files.&lt;br /&gt;
&lt;br /&gt;
Each file contains a roughly equal number of system startup blocks (the total number of jails divided by four).&lt;br /&gt;
 &lt;br /&gt;
The reason for this is, if we make one large script to startup all the systems at boot time, it will take too long - the first system in the script will start up right after system boot, which is great, but the last system may not start for another 20 minutes.&lt;br /&gt;
&lt;br /&gt;
Since there is no way to parallelize this during the startup procedure, we simply open four terminals (in screen window 9) and run each script, one in each terminal. This way they all run simultaneously, and the very last system in each startup script gets started in a quarter of the time it would take with one large file.&lt;br /&gt;
&lt;br /&gt;
The files are generally organized so that quad/safe 1&amp;amp;2 have only jails from disk 1, and quad/safe 3&amp;amp;4 have jails from disk 2. This helps ensure that only 2 fscks on any disk are going on at once. Further, they are balanced so that all quad/safe’s finish executing around the same time. We do this by making sure each quad/safe has a similar number of jails  and represents a similar number of inodes (see js).&lt;br /&gt;
&lt;br /&gt;
The other, very important reason we do it this way, and this is the reason there are quad files and safe files, is that in the event of a system crash, every single vn-backed filesystem that was mounted at the time of system crash needs to be fsck&#039;d.  However, fsck&#039;ing takes time, so if we shut the system down gracefully, we don&#039;t want to fsck.&lt;br /&gt;
&lt;br /&gt;
Therefore, we have two sets of scripts - the four quad scripts are identical to the four safe scripts except for the fact that the quad scripts contain fsck commands for each filesystem.&lt;br /&gt;
&lt;br /&gt;
So, if you shut a system down gracefully, start four terminals and run safe1 in window one, and safe2 in window 2, and so on.&lt;br /&gt;
 &lt;br /&gt;
If you crash, start four terminals (or go to screen window 9) and run quad1 in window one, and quad2 in window 2, and so on.&lt;br /&gt;
&lt;br /&gt;
Here is a snip of (a 4.x version) quad2 from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
fsck -y /dev/vn16&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#fsck -y /dev/vn28&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
fsck -y /dev/vn22&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#fsck -y /dev/vn15&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, two of the systems specified are commented out - presumably those customers cancelled, or were moved to new servers.&lt;br /&gt;
&lt;br /&gt;
Note that the vnconfig line is the simpler command line, not the longer one that was used when the filesystem was first configured.  All that is done is: vnconfig the filesystem, then fsck it, then mount it. The fourth command is the `jail` command used to start the system - but that will be covered later.&lt;br /&gt;
&lt;br /&gt;
Here is the safe2 file from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, it is exactly the same, but it does not have the fsck lines.&lt;br /&gt;
&lt;br /&gt;
Take a look at the last entry - note that the file is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
&lt;br /&gt;
and the mount point is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
&lt;br /&gt;
This is the general format on all the FreeBSD systems.  The file is always named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber&lt;br /&gt;
&lt;br /&gt;
and the directory is named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber-DIR&lt;br /&gt;
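The convention above can be captured in a small helper (illustrative sketch, not an existing script) that builds both paths from the data partition, IP, and customer number:&lt;br /&gt;

```shell
# Sketch: emit the vn-file path and its -DIR mount point following the
# IP-custnumber / IP-custnumber-DIR convention described above.
jail_paths() {
    base=$1; ip=$2; cust=$3
    echo "$base/$ip-$cust $base/$ip-$cust-DIR"
}
```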
&lt;br /&gt;
If you run safe when you need a fsck, the mount will fail and jail will fail:&lt;br /&gt;
&lt;br /&gt;
 # mount /dev/vn1c /mnt/data2/jails/65.248.2.131-ns1.kozubik.com-DIR&lt;br /&gt;
 mount: /dev/vn1c: Operation not permitted&lt;br /&gt;
&lt;br /&gt;
No reboot needed, just run the quad script&lt;br /&gt;
&lt;br /&gt;
Starting with 6.x jails, we added block delimiters to the quad/safe files, the block looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: nuie.solaris.mu&#039;&lt;br /&gt;
fsck -y /dev/concat/v30v31a&lt;br /&gt;
mount /dev/concat/v30v31a /mnt/data1/69.55.228.218-col01441-DIR&lt;br /&gt;
mount_devfs devfs /mnt/data1/69.55.228.218-col01441-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.228.218-col01441-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.228.218-col01441-DIR nuie.solaris.mu 69.55.228.218 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: nuie.solaris.mu&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These are more than just informative when running quad/safe’s, the echo lines MUST be present for certain tools to work properly. So it’s important that any updates to the hostname also be updated on the 2 echo lines. For example, if you try to startjail a jail with a hostname which is on the jail line but not the echo lines, the command will return with host not found.&lt;br /&gt;
&lt;br /&gt;
=== FreeBSD 7.x+ notes ===&lt;br /&gt;
&lt;br /&gt;
Starting with the release of FreeBSD 7.x, we are doing jail startups in a slightly different way. First, there is only 1 file: &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/quad1&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are no other quads or corresponding safe files. The reason for this is twofold: 1. we can pass -C to fsck, which tells it to skip the fsck if the filesystem is clean (no more need for safe files); 2. we have a new startup script which can be launched multiple times, running in parallel to start jails, where quad1 is the master jail file. &lt;br /&gt;
Quad1 could still be run as a shell script, but it would take a very long time to run completely, so that&#039;s not advisable; if you go that route, break it down into smaller chunks (like quad1, quad2, quad3, etc.)&lt;br /&gt;
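The actual startalljails script is not reproduced here, but the approach it relies on can be sketched roughly like this (illustrative only; it depends on the mandatory &#039;## begin ##&#039;/&#039;## end ##&#039; echo delimiters described above, and uses a lockfile per jail so several copies can run in parallel safely):&lt;br /&gt;

```shell
# Rough sketch of a startalljails-style starter, NOT the real script:
# walk the master quad1 block by block, claiming each jail atomically.
startalljails_sketch() {
    quad=$1; lockdir=$2
    mkdir -p "$lockdir"
    # hostnames come from the '## begin ##:' delimiter lines
    hosts=$(sed -n "s/^echo '## begin ##: \(.*\)'$/\1/p" "$quad")
    for h in $hosts; do
        # mkdir is atomic: only one running copy wins each jail
        if mkdir "$lockdir/$h" 2>/dev/null; then
            # extract this jail's start block and run it
            sed -n "/## begin ##: $h'/,/## end ##: $h'/p" "$quad" | sh
        fi
    done
}
```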
&lt;br /&gt;
Here is a snip of (a 7.x version) quad1 from jail2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: projects.tw.com&#039;&lt;br /&gt;
mdconfig -a -t vnode -f /mnt/data1/69.55.230.46-col01213 -u 50&lt;br /&gt;
fsck -Cy /dev/md50c&lt;br /&gt;
mount /dev/md50c /mnt/data1/69.55.230.46-col01213-DIR&lt;br /&gt;
mount -t devfs devfs /mnt/data1/69.55.230.46-col01213-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.230.46-col01213-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.230.46-col01213-DIR projects.tw.com 69.55.230.46 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: projects.tw.com&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cancelled jails are no longer commented out and stored in quad1, rather they’re moved to &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/deprecated&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
To start these jails, start the 4 ssh sessions as you would for a normal crash, and then instead of running quad1-4, run startalljails in each window. IMPORTANT: before running startalljails, make sure you have run preboot once, as it clears out all the lockfiles and enables startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== Problems with the quad/safe files ==&lt;br /&gt;
&lt;br /&gt;
When you run the quad/safe files, there are two problems that can occur - either a particular system will hang during initialization, OR a system will spit out output to the screen, impeding your ability to do anything.  Or both.&lt;br /&gt;
&lt;br /&gt;
First off, when you start a jail, you see output like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;Skipping disk checks ...&lt;br /&gt;
adjkerntz[25285]: sysctl(put_wallclock): Operation not permitted&lt;br /&gt;
Doing initial network setup:.&lt;br /&gt;
ifconfig: ioctl (SIOCDIFADDR): permission denied&lt;br /&gt;
lo0: flags=8049&amp;lt;UP,LOOPBACK,RUNNING,MULTICAST&amp;gt; mtu 16384&lt;br /&gt;
Additional routing options: TCP keepalive=YESsysctl:&lt;br /&gt;
net.inet.tcp.always_keepalive: Operation not permitted.&lt;br /&gt;
Routing daemons:.&lt;br /&gt;
Additional daemons: syslogd.&lt;br /&gt;
Doing additional network setup:.&lt;br /&gt;
Starting final network daemons:.&lt;br /&gt;
ELF ldconfig path: /usr/lib /usr/lib/compat /usr/X11R6/lib /usr/local/lib&lt;br /&gt;
a.out ldconfig path: /usr/lib/aout /usr/lib/compat/aout /usr/X11R6/lib/aout&lt;br /&gt;
Starting standard daemons: inetd cron sshd sendmail sendmail-clientmqueue.&lt;br /&gt;
Initial rc.i386 initialization:.&lt;br /&gt;
Configuring syscons: blanktime.&lt;br /&gt;
Additional ABI support:.&lt;br /&gt;
Local package initialization:.&lt;br /&gt;
Additional TCP options:.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, let&#039;s look at this line, near the end:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization:.&lt;br /&gt;
&lt;br /&gt;
This is where a list of daemons that are set to start at boot time will show up.  You might see something like:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache sendmail sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
Or something like this:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: postgres postfix apache&lt;br /&gt;
&lt;br /&gt;
The problem is that many systems (about 4-5 per machine) will hang on that line.  Basically it will get part of the way through the list of daemons to be started:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache&lt;br /&gt;
&lt;br /&gt;
and will just sit there.  Forever.&lt;br /&gt;
&lt;br /&gt;
Fortunately, pressing ctrl-c will break out of it.  Not only will it break out of it, but it will also continue on that same line and start the other daemons:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache ^c sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
and then continue on to finish the startup, and then move to the next system to be started.&lt;br /&gt;
&lt;br /&gt;
So what does this mean?  It means that if a machine crashes, and you start four screen-windows to run four quads or four safes, you need to periodically cycle between them and see if any systems are stuck at that point, causing their quad/safe file to hang.  A good rule of thumb is: if you see a system at that point in the startup, give it another 100 seconds - if it is still at the exact same spot, hit ctrl-c. It&#039;s also a good idea to go back into the quad file (just before the first command in the jail startup block) and note that this jail tends to need a control-c or more time, as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;### NOTE ### slow sendmail&#039;&lt;br /&gt;
echo &#039;### NOTE ###: ^C @ Starting sendmail.&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NEVER&#039;&#039;&#039; hit ctrl-c repeatedly if you don&#039;t get an immediate response - that will cause the following jail’s startup commands to be aborted.&lt;br /&gt;
&lt;br /&gt;
A second problem that can occur is that a jail - maybe the first one in that particular quad/safe, maybe the last one, or maybe one in the middle, will start spitting out status or error messages from one of its init scripts.  This is not a problem - basically, hit enter a few times and see if you get a prompt - if you do get a prompt, that means that the quad/safe script has already completed.  Therefore it is safe to log out (and log out of the user that you su&#039;d from) and then log back in (if necessary).&lt;br /&gt;
&lt;br /&gt;
The tricky thing is, if a system in the middle starts flooding with messages, and you hit enter a few times and don&#039;t get a prompt.  Are you not getting a prompt because some subsequent system is hanging at initialization, as discussed above?  Or because that quad file is currently running an fsck?  Usually you can tell by scrolling back in screen&#039;s history to see what it was doing before you started getting the messages.&lt;br /&gt;
&lt;br /&gt;
If you don’t get clues from history, you have to use your judgement - instead of giving it 100 seconds to respond, perhaps give it 2-3 mins ... if you still get no response (no prompt) when you hit enter, hit ctrl-c.  However, be aware that you might still be hitting ctrl-c in the middle of an fsck.  This means you will get an error like &amp;quot;filesystem still marked dirty&amp;quot; and then the vnconfig for it will fail and so will the jail command, and the next system in the quad file will then start starting up.&lt;br /&gt;
&lt;br /&gt;
If this happens, just wait until the end of all the quad files have finished, and start that system manually.&lt;br /&gt;
&lt;br /&gt;
If things really get weird, like a screen flooded with errors, and you can&#039;t get a prompt, and ctrl-c does nothing, then you need to just eventually (give it ten mins or so) just kill that window with ctrl-p, then k, and then log in again and manually check which systems are now running and which aren&#039;t, and manually start up any that are not.&lt;br /&gt;
&lt;br /&gt;
Don&#039;t EVER risk running a particular quad/safe file a second time.&lt;br /&gt;
If the quad/safe script gets executed twice, reboot the machine immediately.&lt;br /&gt;
&lt;br /&gt;
So, for all the above reasons, anytime a machine crashes and you run all the quads or all the safes, &#039;&#039;&#039;always&#039;&#039;&#039; check every jail afterwards to make sure it is running - even if you have no hangs or complications at all.&lt;br /&gt;
Run this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailpsall|jailpsall]]&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Note: [[#postboot|postboot]] also populates ipfw counts, so it &#039;&#039;&#039;should not be run multiple times&#039;&#039;&#039;,  use &amp;lt;tt&amp;gt;jailpsall&amp;lt;/tt&amp;gt; for subsequent extensive ps’ing&lt;br /&gt;
&lt;br /&gt;
And make sure they all show as running.  If one does not show as running, check its /etc/rc.conf file first to see if maybe it is using a different hostname first before starting it manually.&lt;br /&gt;
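That rc.conf check can be done quickly with a sketch like this (hypothetical helper; the -DIR path layout is per this article) and the result compared against the hostname on the jail/echo lines in quad1:&lt;br /&gt;

```shell
# Sketch: pull the hostname a jail's rc.conf actually sets, in case it
# differs from the hostname used in the quad file.
jail_rcconf_hostname() {
    dir=$1    # e.g. /mnt/data2/69.55.228.7-col00820-DIR
    sed -n 's/^hostname="\(.*\)"/\1/p' "$dir/etc/rc.conf"
}
```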
&lt;br /&gt;
One thing we have implemented to alleviate these startup hangs and noisy jails, is to put jail start blocks that are slow or hangy at the bottom of the safe/quad file. Further, for each bad jail we note in each quad/safe just before the start block something like:&lt;br /&gt;
&lt;br /&gt;
 echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
&lt;br /&gt;
That way we’ll be prepared to ^C when we see that message appear during the quad/safe startup process. If you observe a new, undocumented hang, &#039;&#039;&#039;after&#039;&#039;&#039; the quad/safe has finished, place a line similar to the above in the quad file, move the jail start block to the end of the file, then run [[#buildsafe|buildsafe]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 4.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which partition, IP to put it on, find an unused vn, and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vnX hostname shorthost quad/safe# ipfw# email [gb disk, default 4]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail14# /tmp/jailmake 69.55.226.152 /mnt/data2/69.55.226.152-col00182 vn23 negev.cerastes.org negev 4 182 cerastes@cerastes.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 6.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which gvinum vols are free (and which mnt (disk) each gvinum goes with/to), IP to put it on and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vN[,vN] hostname shorthost quad/safe# ipfw# email&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail19# jailmake 69.55.236.17 /mnt/data1/69.55.236.17-col01574 v10 uncle-leo.commadev.com uncle-leo 1 1574 lee@commadev.com,lee@gmail.com&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Changing an ip for freebsd VPS ==&lt;br /&gt;
&lt;br /&gt;
*stopjail (hostname)&lt;br /&gt;
*on systems using vnfile or mdfile: rename vnfile/mdfile, using new ip&lt;br /&gt;
*rename dir using new ip&lt;br /&gt;
*Edit quad (make sure to update all lines: directory, vn/mdfile)&lt;br /&gt;
*buildsafe (not necessary on systems having only quad1)&lt;br /&gt;
*if you&#039;re adding an ip not already on the host machine: &amp;lt;tt&amp;gt;ipadd x.x.x.x&amp;lt;/tt&amp;gt;&lt;br /&gt;
*startjail (hostname)&lt;br /&gt;
if backups: &amp;lt;tt&amp;gt;mvbackups&amp;lt;/tt&amp;gt; ...&lt;br /&gt;
*Edit dir/ip in mgmt&lt;br /&gt;
*if any rules, update any firewall rules to use the new ip (be careful: make sure there aren&#039;t multiple rules per ipfw# - search by colo#)&lt;br /&gt;
*(if customer asks and has domains) update their domains on ns1c&lt;br /&gt;
*optional: update ipfw counters on the host server&lt;br /&gt;
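The rename steps above can be sketched as a helper (illustrative only; the jail must already be stopped, and the quad edit, buildsafe, ipadd, and startjail steps still follow):&lt;br /&gt;

```shell
# Sketch: move the vn/md file and its -DIR mount point to the new IP,
# following the IP-custnumber naming convention.
rename_jail_ip() {
    base=$1; old=$2; new=$3; cust=$4
    mv "$base/$old-$cust" "$base/$new-$cust"
    mv "$base/$old-$cust-DIR" "$base/$new-$cust-DIR"
    # then: edit quad1, buildsafe if needed, ipadd the new IP, startjail
}
```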
&lt;br /&gt;
&lt;br /&gt;
== Rename a gconcat vol ==&lt;br /&gt;
&lt;br /&gt;
*stop jail, unmount everything&lt;br /&gt;
*gconcat stop vol&lt;br /&gt;
*gconcat clear /dev/gvinum/vN&lt;br /&gt;
*gconcat clear /dev/gvinum/vN+1&lt;br /&gt;
*gconcat label -v newname /dev/gvinum/vN /dev/gvinum/vN+1&lt;br /&gt;
*bsdlabel /dev/concat/newname&lt;br /&gt;
(make sure a partition is not fstype unused, if so change to: 4.2BSD)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Remaking a VPS (on same jail) ==&lt;br /&gt;
&lt;br /&gt;
# take him down (stopjail)&lt;br /&gt;
# make note of host, vn/gvin/md, ip&lt;br /&gt;
# remove from quad/safe&lt;br /&gt;
# move vnfile (rename)&lt;br /&gt;
# remove dir (jailmake will make a new one)&lt;br /&gt;
# remove ipfw counts&lt;br /&gt;
# jailmake&lt;br /&gt;
# update backup script&lt;br /&gt;
# update db with new dir/ip (use the “move” link, mark old as stopped today)&lt;br /&gt;
# update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (FreeBSD) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all jails back up as soon as possible. Note the time, you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there’s any kernel error messages output. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log. If there are no messages, the machine may just be really busy- wait a bit (5-10min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. Note, if you see messages about swap space exhausted, the server is obviously out of memory; however it may recover briefly enough for you to get a jtop in to see who&#039;s launched a ton of procs (most likely) and then issue a quick jailkill to get it back under control.&lt;br /&gt;
&lt;br /&gt;
If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. if you can’t ssh into the DRAC card (as root, using the standard root pass) and issue&lt;br /&gt;
 racadm serveraction hardreset&lt;br /&gt;
), then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console:&lt;br /&gt;
 tip jailX&lt;br /&gt;
immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts- look out for the RAID card output- does everything look healthy? The output may be scrambled, look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple times during the boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly then you won’t get any console output, 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
IMPORTANT NOTE: on some older FreeBSD systems, there will be no output to the video (KVM) console as the machine boots. The console output is redirected to the serial port, so if a jail host crashes and you attach a KVM, no output will be shown on screen during the boot procedure. However, when boot-up is done, you will get a login prompt on the screen and will be able to log in as normal. The serial-console redirect is configured in &amp;lt;tt&amp;gt;/boot/loader.conf&amp;lt;/tt&amp;gt;; comment it out if you want to catch output on the KVM.&lt;br /&gt;
On newer systems most output is sent to both locations. &lt;br /&gt;
&lt;br /&gt;
=== Assess the health of the server ===&lt;br /&gt;
Once the server boots up fully, you should be able to ssh in. Look around- make sure all the mounts are there and reporting the correct size/usage (i.e. /mnt/data1 /mnt/data2 /mnt/data3 - look in /etc/fstab to determine which mount points should be there), check to see if RAID mirrors are healthy. See [[RAID_Cards#Common_CLI_commands_.28megacli.29|megacli]], [[#aaccheck|aaccheck]]&lt;br /&gt;
&lt;br /&gt;
Before you start the jails, you need to run [[#preboot|preboot]]. This will do some assurance checks to make sure things are prepped to start the jails. Any issues that come out of preboot need to be addressed before starting jails.&lt;br /&gt;
&lt;br /&gt;
=== Start jails ===&lt;br /&gt;
[[#Starting_jails:_Quad.2FSafe_Files|More on starting jails]]&lt;br /&gt;
Customer jails (the VPSs) do not start up automatically at boot time. When a FreeBSD machine boots, it boots the base system and does nothing else. To start the jails, we put the commands to start each jail into one or more shell scripts and run them. Jail startup needs to be actively monitored, which is why we don&#039;t just run the scripts automatically. &lt;br /&gt;
&lt;br /&gt;
In order to start jails, we run the quad files: quad1 quad2 quad3 and quad4 (on new systems there is only quad1). If the machine was cleanly rebooted (which wouldn&#039;t be the case after a crash), you may run the safe files (safe1 safe2 safe3 safe4) in lieu of the quads. &lt;br /&gt;
&lt;br /&gt;
Open up 4 logins to the server (use the windows in [[Screen#Screen_Organization|a9]]).&lt;br /&gt;
In each of the 4 windows:&lt;br /&gt;
&lt;br /&gt;
If there is a [[#startalljails|startalljails]] script (and only quad1), run that command in each of the 4 windows. It will parse through the quad1 file and start each jail. Follow the instructions [[#Problems_with_the_quad.2Fsafe_files|here]] for monitoring startup. Note that you can be a little more lenient with jails that take a while to start- startalljails will work around the slow jails and start the rest. As long as there aren&#039;t 4 jails which are &amp;quot;hung&amp;quot; during startup, the rest will get started eventually.&lt;br /&gt;
	-or-&lt;br /&gt;
If there is no startalljails script, there will be multiple quad files. In each of the 4 windows, start one of the quads- i.e. quad1 in window 1, quad2 in window 2, and so on. DO NOT start any quad twice; doing so will crash the server. If you accidentally do, jailkill all the jails in that quad and run the quad again. Follow the instructions [[#Problems_with_the_quad.2Fsafe_files|here]] for monitoring quad startup.&lt;br /&gt;
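A rough sketch of the parsing startalljails does- a hedged illustration assuming each jail&#039;s commands in a quad file sit between &amp;lt;tt&amp;gt;## begin hostname ##&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;## end hostname ##&amp;lt;/tt&amp;gt; marker lines (that convention is mentioned later on this page); the one-liner lists the hostnames a quad file would start:&lt;br /&gt;

```shell
# Create a tiny illustrative quad file (contents are made up; real
# quad files hold each jail's start commands between the markers).
cat > quad1.example <<'EOF'
## begin host1.example ##
echo starting host1
## end host1.example ##
## begin host2.example ##
echo starting host2
## end host2.example ##
EOF

# List the hostnames this quad file would start.
awk '/^## begin / { print $3 }' quad1.example
```

Comparing that list against what is actually running is essentially what postboot automates.&lt;br /&gt;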
&lt;br /&gt;
Note the time the last jail boots- this is what you will enter in the crash log.&lt;br /&gt;
&lt;br /&gt;
Save the crash log.&lt;br /&gt;
&lt;br /&gt;
=== Check to make sure all jails have started ===&lt;br /&gt;
There&#039;s a simple script which will make sure all jails have started, and enter the ipfw counter rules: [[#postboot|postboot]] &lt;br /&gt;
Run postboot, which will do a jailps on each jail it finds (excluding commented-out jails) in the quad file(s). We&#039;re looking for 2 things:&lt;br /&gt;
# systems spawning out of control or too many procs&lt;br /&gt;
# jails which haven&#039;t started&lt;br /&gt;
On 7.x and newer systems it will print out the problems (which jails haven&#039;t started) at the conclusion of postboot. &lt;br /&gt;
On older systems you will need to watch closely to see if/when there&#039;s a problem, namely:&lt;br /&gt;
 &lt;br /&gt;
 [hostname] doesnt exist on this server&lt;br /&gt;
&lt;br /&gt;
When you get this message, it means one of 2 things:&lt;br /&gt;
1. the jail really didn&#039;t start:&lt;br /&gt;
When a jail doesn&#039;t start it usually boils down to a problem in the quad file. Perhaps the path name is wrong (data1 vs data2) or the name of the vn/mdfile is wrong. Once this is corrected, you will need to run the commands from the quad file manually, or you may use &amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. the customer has changed their hostname (and not told us) so their jail &#039;&#039;is&#039;&#039; running, just under a different hostname:&lt;br /&gt;
On systems with jls, this is easy to rectify. First, get the customer info: &amp;lt;tt&amp;gt;g &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
Then look for the customer in jls: &amp;lt;tt&amp;gt;jls | grep &amp;lt;col0XXXX&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
From there you will see their new hostname- you should update that hostname in the quad file: don&#039;t forget to edit it on the &amp;lt;tt&amp;gt;## begin ##&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;## end ##&amp;lt;/tt&amp;gt; lines, and in mgmt. &lt;br /&gt;
On older systems without jls, this is harder; you will need to dig further to find their hostname- perhaps it&#039;s in their /etc/rc.conf&lt;br /&gt;
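As a hedged sketch of the lookup above- it assumes jls prints JID, IP, hostname, and path in columns (as classic FreeBSD jls does), and the customer number and hostname are illustrative:&lt;br /&gt;

```shell
# Sample jls output line (illustrative); real input comes from `jls`.
jls_output='     3  69.55.234.66  newname.example.com  /mnt/data3/69.55.234.66-col01334-DIR'

# Print the hostname column of any line mentioning the customer
# number- it normally appears in the jail's filesystem path.
printf '%s\n' "$jls_output" | awk -v cust="col01334" '$0 ~ cust { print $3 }'
```

On a live system, pipe actual &amp;lt;tt&amp;gt;jls&amp;lt;/tt&amp;gt; output into the same awk instead of the sample line.&lt;br /&gt;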
&lt;br /&gt;
&lt;br /&gt;
Once all jails are started, do some spot checks- try to ssh or browse to some customers, just to make sure things are really ok.&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 7.x/8.x jail ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (md) ==&lt;br /&gt;
&lt;br /&gt;
NOTE: this doesn&#039;t apply to mx2, which uses gvinum- use the same procedure as 6.x&lt;br /&gt;
NOTE: if you unmount before mdconfig, re-mdconfig (attach), then unmount, then mdconfig -u again &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
(parts to change/customize are &amp;lt;tt&amp;gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space available from &amp;lt;tt&amp;gt;js&amp;lt;/tt&amp;gt;. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new md on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. Unmount it (including their devfs) but leave the md config&#039;d (note: if you use stopjail, you will have to re-mdconfig it)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mdfile and mount name, and to see the current md device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. When there&#039;s enough room to place the new system on an alternate drive, or the same drive:&lt;br /&gt;
USE CAUTION not to overwrite (touch, mdconfig) an existing md!&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s 10g -f /mnt/data3/69.55.234.66-col01334 -u 97&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md97&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new space is on a different drive, move the mount point directory AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm the restored data size matches the original usage figure&lt;br /&gt;
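To make that comparison quick, you can pull just the Used column- a minimal sketch, assuming you are still cd&#039;d into the restored mount point:&lt;br /&gt;

```shell
# Print the Used figure (in KB) for the filesystem holding the
# current directory; compare it against the usage noted in step 2.
df -k . | awk 'NR == 2 { print $3 }'
```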
&lt;br /&gt;
&lt;br /&gt;
md-unconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
archive old mdfile. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/old-col00241-mdfile-noarchive-20091211&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (vq1) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
(run &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, &amp;lt;tt&amp;gt;du&amp;lt;/tt&amp;gt; the dump file to confirm its size roughly matches the usage figure)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure and remove old mdfile&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
rm /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(there should now be enough space to recreate your bigger system. If not, run sync a couple times)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new system (ok to reuse old mdfile and md#):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;10&amp;lt;/span&amp;gt;g -f /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt; -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
umount nfs:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbu&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If md# changed (or mount point), edit quad (&amp;lt;tt&amp;gt;vq1&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. Optional: archive old mdfile&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;br&amp;gt;&lt;br /&gt;
gzip -c old-col01588-mdfile-noarchive-20120329 &amp;gt; /deprecated/old-col01588-mdfile-noarchive-20120329.gz&amp;lt;br&amp;gt;&lt;br /&gt;
mbu&amp;lt;br&amp;gt;&lt;br /&gt;
rm  old-col01588-mdfile-noarchive-20120329&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 6.x jail (gvinum/gconcat) ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (gvinum/gconcat) ==&lt;br /&gt;
&lt;br /&gt;
(parts to change are &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;highlighted&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space available from [[#js|js]]. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new volume on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;[[#stopjail|stopjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;[[#g|g]] &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mount name and existing volume/device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5a. When there&#039;s enough room to place the new system on an alternate drive, or the same drive, using only UNUSED gvinum volumes (volumes currently in use by the system being moved count as reusable):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure the new device:&amp;lt;br&amp;gt;&lt;br /&gt;
A. for a 2G system (single gvinum volume):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;bsdlabel -r -w /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
-or- &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
B. for a &amp;gt;2G system (create a gconcat volume):&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Other valid gconcat examples:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label v82-v84v109v112 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v109 /dev/gvinum/v112&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat label v82v83 /dev/gvinum/v82 /dev/gvinum/v83&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Note: long names will be truncated: v144v145v148-v115 truncates to v144v145v148-v1 (so you will refer to it as v144v145v148-v1 thereafter)&lt;br /&gt;
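The truncation above implies roughly a 15-character limit on these labels (inferred from the example, not checked against the GEOM source). You can preview what a long name will become before labeling:&lt;br /&gt;

```shell
# Preview how a long gconcat label would be truncated, assuming the
# 15-character limit implied by the example above.
printf '%.15s\n' "v144v145v148-v115"
```

Here it prints v144v145v148-v1, matching the note above.&lt;br /&gt;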
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; OR &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
A. &amp;lt;tt&amp;gt;mount /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
-or-&amp;lt;br&amp;gt;&lt;br /&gt;
B. &amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;), run &amp;lt;tt&amp;gt;buildsafe&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space (i.e. if you want/need to reuse the existing gvinum volumes and add on more):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
(run df to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;concat/v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, du the dump file to confirm its size roughly matches the usage figure)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure the old gconcat volume&amp;lt;br&amp;gt;&lt;br /&gt;
list member gvinum volumes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat list &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Output will resemble:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;Geom name: v106v107&lt;br /&gt;
State: UP&lt;br /&gt;
Status: Total=2, Online=2&lt;br /&gt;
Type: AUTOMATIC&lt;br /&gt;
ID: 3530663882&lt;br /&gt;
Providers:&lt;br /&gt;
1. Name: concat/v106v107&lt;br /&gt;
   Mediasize: 4294966272 (4.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e2&lt;br /&gt;
Consumers:&lt;br /&gt;
1. Name: gvinum/sd/v106.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 0&lt;br /&gt;
   End: 2147483136&lt;br /&gt;
2. Name: gvinum/sd/v107.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 2147483136&lt;br /&gt;
   End: 4294966272&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
stop volume and clear members&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat stop &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat clear &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;gvinum/sd/v106.p0.s0 gvinum/sd/v107.p0.s0&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new device- it&#039;s OK to reuse old/former members&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v106 /dev/gvinum/v107&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;), run buildsafe&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
TODO: clean up/clear old gvin/gconcat vol&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
DEPRECATED- steps to tack a new gvinum volume onto an existing gconcat- this leads to a corrupted fs&lt;br /&gt;
bsdlabel -e /dev/concat/v82-v84&lt;br /&gt;
&lt;br /&gt;
To figure out the new size of the c partition, multiply 4194304 by the # of 2G gvinum volumes and subtract the # of 2G volumes:&lt;br /&gt;
10G: 4194304 * 5 – 5 = 20971515&lt;br /&gt;
8G: 4194304 * 4 – 4 = 16777212&lt;br /&gt;
6G: 4194304 * 3 – 3 = 12582909&lt;br /&gt;
4G: 4194304 * 2 – 2 = 8388606&lt;br /&gt;
&lt;br /&gt;
To figure out the new size of the a partition, subtract 16 from the c partition:&lt;br /&gt;
10G: 20971515 – 16 = 20971499&lt;br /&gt;
8G: 16777212 – 16 = 16777196&lt;br /&gt;
6G: 12582909 – 16 = 12582893&lt;br /&gt;
4G: 8388606 – 16  = 8388590&lt;br /&gt;
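The arithmetic above follows from each 2G gvinum volume being 4194304 512-byte sectors (2*1024^3/512), minus one sector per member- presumably reserved for the gconcat label metadata. A quick sketch to recompute the table:&lt;br /&gt;

```shell
# Recompute the c and a partition sizes for n 2G gvinum volumes:
# c = 4194304*n - n sectors; the a partition starts at offset 16,
# so a = c - 16.
for n in 2 3 4 5; do
  c=$(( 4194304 * n - n ))
  a=$(( c - 16 ))
  echo "${n} volumes: c=${c} a=${a}"
done
```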
&lt;br /&gt;
Orig:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a:  8388590       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c:  8388606        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
New:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a: 12582893       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c: 12582909        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
sync; sync&lt;br /&gt;
&lt;br /&gt;
growfs /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
sync&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
(keep running fsck’s till NO errors)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 4.x jail ==&lt;br /&gt;
&lt;br /&gt;
NOTE: if you unmount before vnconfig, re-vnconfig (attach), then unmount, then vnconfig -u&lt;br /&gt;
 &lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them.&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space available from [[#js|js]]. Ideally, you want to dump to a different partition and create the new vn on the current partition. If there&#039;s no space to make the dump, then move old vn&#039;s to backup or, worse, dump to backup.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. make vnfile, newfs and mount. USE CAUTION not to overwrite existing vn. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;vnconfig -T -S 9g -s labels -c /dev/vn30 /mnt/data2/65.214.160.117-col00496&amp;lt;br&amp;gt;&lt;br /&gt;
disklabel -r -w vn30 auto&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/vn30c&amp;lt;br&amp;gt;&lt;br /&gt;
mkdir /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/vn30c /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;dump -0a -f - /dev/vn1 | restore -r -f -&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. &amp;lt;tt&amp;gt;rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. unmount and unvnconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;umount /dev/vnNNc &amp;lt;br&amp;gt;&lt;br /&gt;
vnconfig -u /dev/vnNN&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. move vnfile. Ex: &amp;lt;tt&amp;gt;mv /mnt/data1/69.55.237.26-col00241 /mnt/data1/old-col00241-vnfile-20110312-noarchive&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove old dir: &amp;lt;tt&amp;gt;rmdir /mnt/data1/69.55.237.26-col00241-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. edit quad/safe to point to new location, run &amp;lt;tt&amp;gt;[[#buildsafe|buildsafe]]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start jail: &amp;lt;tt&amp;gt;[[#startjail|startjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
13. mv backups if necessary&lt;br /&gt;
 &lt;br /&gt;
WARNING: if you restore to an unmounted vn, you&#039;re actually restoring to /mnt/data1&lt;br /&gt;
you can tar up the restored files and follow the steps above to recover&lt;br /&gt;
 &lt;br /&gt;
NOTE: you can’t move a customer to a system whose base system version differs from the one they came from (e.g. BSD 4.8 to 4.5 won’t work)&lt;br /&gt;
 &lt;br /&gt;
NOTE: we don’t mount procfs in anyone’s system by default.&lt;br /&gt;
&lt;br /&gt;
NOTE: mount_nulls: &lt;br /&gt;
goofy mount_null entries are also seen in df -k on jail2&amp;lt;br&amp;gt;&lt;br /&gt;
they also need to be added in both the safe and quad files&amp;lt;br&amp;gt;&lt;br /&gt;
if a customer wants another, use an existing safe/quad entry as an example and add it to the file. Also exec the mount command at the prompt to add the mount (no reboot, remount or re-jail necessary)&lt;br /&gt;
&lt;br /&gt;
== Moving customer to another jail machine ==&lt;br /&gt;
&lt;br /&gt;
Systems should only be moved between similarly-versioned jails. If you cannot get a match, you should go to freebsd.org and see what the differences were between the version you&#039;re moving from and the new version. Many of the updates will be to drivers and kernels - these don&#039;t affect the customer. Perhaps they updated traceroute or a library...you should copy the new/changed files into the VPS/jail on the new host. An alternate method for moving to a different version would be to buildworld to bring their full distribution up to date. Optional steps for this kind of upgrade are included below.&lt;br /&gt;
&lt;br /&gt;
1. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;[[#jailkill|jailkill]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. create new device on target system&amp;lt;br&amp;gt;&lt;br /&gt;
4.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig -T -S 4g -s labels -c /dev/vn1 /mnt/data1/69.55.22x.x-col00XXX&lt;br /&gt;
disklabel -r -w vn1 auto&lt;br /&gt;
newfs /dev/vn1c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;bsdlabel -r -w /dev/gvinum/v1&lt;br /&gt;
newfs /dev/gvinum/v1a&lt;br /&gt;
 or &lt;br /&gt;
gconcat label v1-v3 /dev/gvinum/v1 /dev/gvinum/v2 /dev/gvinum/v3&lt;br /&gt;
bsdlabel -r -w /dev/concat/v1-v3&lt;br /&gt;
newfs /dev/concat/v1-v3a&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7.x+:&amp;lt;br&amp;gt;&lt;br /&gt;
Run jailmakeempty, then skip steps 4 &amp;amp; 7 below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. make and mount the dir on new system. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;pre&amp;gt;mkdir /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
chmod 755 /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
mount [device] /mnt/data2/69.55.230.3-col00123-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. [[#stopjail|stopjail]] &amp;lt;hostname&amp;gt; 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. dump fs to new system. ex: &amp;lt;tt&amp;gt;[[#dumpremoterestore|dumpremoterestore]] /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&amp;lt;/tt&amp;gt; (make sure you can ssh as root on remote machine) &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. OPTIONAL BUILDWORLD:&amp;lt;br&amp;gt;&lt;br /&gt;
 cd /usr/src&lt;br /&gt;
 make world DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
 cd etc&lt;br /&gt;
 make distribution DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
(you may have to rm an openssh file and re-make dist)&lt;br /&gt;
&lt;br /&gt;
 rm -rf /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/periodic/daily/400.status-disks&lt;br /&gt;
 vi /etc/periodic/security/100.chksetuid&lt;br /&gt;
replace: &amp;lt;tt&amp;gt;MP=`mount -t ufs | grep -v &amp;quot; nosuid&amp;quot; | awk &#039;{ print $3 }&#039; | sort`&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
with: &amp;lt;tt&amp;gt;MP=&#039;/&#039; (use single quotes)&amp;lt;/tt&amp;gt;&lt;br /&gt;
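&lt;br /&gt;
The same edit can be scripted instead of done in vi; a minimal sketch of the substitution (a sample line is piped through portable sed here, rather than editing /etc/periodic/security/100.chksetuid in place):&lt;br /&gt;

```shell
# Replace the dynamic mount-point list with just '/'.
# The sample input line stands in for the real one in 100.chksetuid.
printf 'MP=`mount -t ufs | sort`\n' | sed "s|^MP=.*|MP='/'|"
```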
&lt;br /&gt;
Ask user if they want ports overwritten with current, if yes:&lt;br /&gt;
 cp -r /usr/ports /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/usr&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d .&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d . &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cat /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target:&lt;br /&gt;
 cat &amp;gt; /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. edit quad on source system and copy over the entries for the jail to the target system &lt;br /&gt;
take care that the vn/gvinum/gconcat/md devices on the target system aren’t in use and that the /mnt/dataN path doesn’t need changing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. run [[#buildsafe|buildsafe]] (if this is &amp;lt;=6.x) on target system (copies changes made to quad into safe file)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove ip from source system. Ex: &amp;lt;tt&amp;gt;ipdel 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. add ip to target system. Ex: &amp;lt;tt&amp;gt;ipadd 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start new system, with [[#startjail|startjail]] or manually, pasting entries found from running &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; on the new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. run &amp;lt;tt&amp;gt;[[#canceljail|canceljail]] col0xxxx&amp;lt;/tt&amp;gt; on source system. This should prompt you to remove backups if any existed. Do not let it do this, rather:&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
13. if backups existed move them to the new host via the &amp;lt;tt&amp;gt;[[#mvbackups_.28freebsd.29|mvbackups]] script&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
14. edit quad on source system, edit comment to reflect a move rather than a cancel, ex: &amp;lt;tt&amp;gt;# moved to jail2 col00241&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
15. edit mgmt to reflect new host and dir for new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
16. optional add ipfw rules on new system. Ex:&lt;br /&gt;
&amp;lt;tt&amp;gt;ipfw add 01231 count ip from 69.55.230.2 to any&lt;br /&gt;
ipfw add 01232 count ip from any to 69.55.230.2&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Increasing inodes for a VPS ==&lt;br /&gt;
&lt;br /&gt;
when doing the newfs, lower the bytes-per-inode ratio: &lt;br /&gt;
 newfs -i 4096 ...&lt;br /&gt;
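&lt;br /&gt;
To gauge the effect: the inode count is roughly the filesystem size divided by the -i value. A sketch of the arithmetic for an assumed 4G vnfile (ignoring filesystem overhead):&lt;br /&gt;

```shell
# newfs -i sets bytes of data space per inode; a smaller value = more inodes.
fs_bytes=$((4 * 1024 * 1024 * 1024))   # assumed 4G vnfile
bytes_per_inode=4096
echo $((fs_bytes / bytes_per_inode))   # prints 1048576
```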
&lt;br /&gt;
&lt;br /&gt;
== Jail services unable to contact services within same jail ==&lt;br /&gt;
&lt;br /&gt;
This is due to a messed up routing table (only seen it on jail2 before). &lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.228.65/32&lt;br /&gt;
 route add -net 69.55.228.65/32 -iface fxp0 -nostatic -cloning&lt;br /&gt;
 route add 69.55.228.65 -iface lo0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== /dev/null permission resets == &lt;br /&gt;
&lt;br /&gt;
Applies to 4.x jails only&lt;br /&gt;
&lt;br /&gt;
For some reason that I do not understand at all, the /dev/null node in customer jails, after the system is restarted, often reverts permissions to 0600 ... which is bad, because non-root processes cannot redirect to /dev/null anymore.  In fact, a fair number of server daemons will complain and/or fail if this is the case.&lt;br /&gt;
&lt;br /&gt;
So, after a reboot, once all systems have restarted, run postboot, which does something like:&lt;br /&gt;
 &lt;br /&gt;
 for f in `df -k | grep /dev/vn | awk &#039;{print $6}&#039;` ; do chmod 0666 $f/dev/null ; done&lt;br /&gt;
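&lt;br /&gt;
A portable sketch of that loop, using hypothetical temp paths in place of the vn mount points taken from df:&lt;br /&gt;

```shell
# Demonstrate the postboot fix on a fake jail root: /dev/null inside the
# jail reverts to 0600 after a restart; chmod it back to 0666.
base=$(mktemp -d)
mkdir -p "$base/69.55.230.3-col00123-DIR/dev"      # hypothetical jail dir
touch "$base/69.55.230.3-col00123-DIR/dev/null"
chmod 0600 "$base/69.55.230.3-col00123-DIR/dev/null"
for f in "$base"/*-DIR ; do chmod 0666 "$f/dev/null" ; done
ls -l "$base/69.55.230.3-col00123-DIR/dev/null" | cut -c1-10   # -rw-rw-rw-
rm -rf "$base"
```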
&lt;br /&gt;
Later, after the system is up, if anyone ever complains about their /dev/null permissions for any reason ... really if anyone mentions&lt;br /&gt;
/dev/null in any way, tell them to run:&lt;br /&gt;
&lt;br /&gt;
 chmod 0666 /dev/null&lt;br /&gt;
&lt;br /&gt;
We have ultimately fixed this by adding the chmods to the quad/safe files&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== mknod in a jail == &lt;br /&gt;
&lt;br /&gt;
For a customer who wants to run a chroot name server (named) in a jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /mnt/data1/&amp;lt;user directory&amp;gt;/var/named&lt;br /&gt;
mknod -c 1 1 null&lt;br /&gt;
mknod -c 2 3 random&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Postfix Problems ==&lt;br /&gt;
&lt;br /&gt;
This hasn&#039;t happened in a long time, probably no longer valid with modern OS&#039;s&lt;br /&gt;
&lt;br /&gt;
Postfix is an alternate MTA - people replace sendmail with it in much the same way that many people also replace sendmail with qmail.  The problem is that if you install postfix inside of a jail, by default it will not work properly - the user will see error messages like this:&lt;br /&gt;
&lt;br /&gt;
incoming mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 mail_queue_enter: create file incoming/298266.55902: File too large&lt;br /&gt;
&lt;br /&gt;
and further, sending mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 postdrop: warning: mail_queue_enter: create file maildrop/627930.56676: File too large&lt;br /&gt;
&lt;br /&gt;
This is very easy to solve.  In fact, the FreeBSD welcome email that is sent out by `jailmake` now contains this block of text:&lt;br /&gt;
&lt;br /&gt;
- if you plan on installing postfix, email us first and ask for a necessary patch.&lt;br /&gt;
&lt;br /&gt;
So, anytime you see a support email that says anything about postfix not working, paste that line from the welcome email into your response to them, and right away, forward the postfix patch to them in an email.&lt;br /&gt;
&lt;br /&gt;
Here are the instructions:&lt;br /&gt;
&lt;br /&gt;
Simply copy the patch into /usr/ports/mail/postfix/files, rename the file to ‘patch-file_limit.c’ and then cd to /usr/ports/mail/postfix and run `make install`.&lt;br /&gt;
&lt;br /&gt;
The postfix patch is the very first email in the support email box, and we never delete or save it because we always want it there to forward to people.  The instructions in the email are very clear, and it solves their problem 100% of the time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems with `find` ==&lt;br /&gt;
&lt;br /&gt;
There was a bug in the FreeBSD vn-filesystem code.  The bug causes the system to crash or hang when someone runs the `find` command inside their vn-backed filesystem (and even though the bug may have been fixed, due to the i/o overhead, we still don&#039;t like find).&lt;br /&gt;
&lt;br /&gt;
Now, this does not always happen - however, any jail machine with more than 20 systems on it will crash every night if every system on it runs the daily periodic script out of their crontab, and updates their locate database with the find command.&lt;br /&gt;
&lt;br /&gt;
(the locate database is a small, FreeBSD specific database that is populated nightly from a cron job, and is then used to provide fast answers to the `locate` command)&lt;br /&gt;
&lt;br /&gt;
In normal systems (32-40 jails on them), all that has to be done is make sure nobody runs the daily periodic.  This means that the file /etc/crontab, instead of looking like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# do daily/weekly/monthly maintenance&lt;br /&gt;
1      3       *       *       *       root    periodic daily&lt;br /&gt;
15     4       *       *       6       root    periodic weekly&lt;br /&gt;
30     5       1       *       *       root    periodic monthly&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Needs to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# DO NOT UNCOMMENT THESE - contact support@johncompanies.com for details&lt;br /&gt;
#1      3       *       *       *       root    periodic daily&lt;br /&gt;
#15     4       *       *       6       root    periodic weekly&lt;br /&gt;
#30     5       1       *       *       root    periodic monthly&lt;br /&gt;
## DO NOT UNCOMMENT THESE ^^^^^^^^&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the problem will generally not occur at all.  New filesystem images are always altered in this way, so all systems are set like this.&lt;br /&gt;
&lt;br /&gt;
However, if you have enough systems running, even casual uses of find, outside of the periodic scripts, can cause the system to crash.  We are seeing this on jail13, which crashes about once per week.&lt;br /&gt;
&lt;br /&gt;
There is not yet any resolution to this problem.&lt;br /&gt;
&lt;br /&gt;
So, if you have a freebsd system crash, see if anyone has uncommented their daily periodic line - you can see them for everyone by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;for f in `df -k | grep vn | awk &#039;{print $6}&#039;` ; do echo `cat $f/etc/crontab | grep &amp;quot;periodic daily&amp;quot;` $f ; done | more&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(all one line, one command)&lt;br /&gt;
&lt;br /&gt;
And then edit those /etc/crontabs and comment them out again.&lt;br /&gt;
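&lt;br /&gt;
The scan can be demonstrated portably; a sketch with fake jail roots (the real loop derives its directory list from df -k | grep vn):&lt;br /&gt;

```shell
# List jail roots whose "periodic daily" crontab line is uncommented.
base=$(mktemp -d)
mkdir -p "$base/jail1/etc" "$base/jail2/etc"
echo '#1 3 * * * root periodic daily' > "$base/jail1/etc/crontab"   # commented: ok
echo '1 3 * * * root periodic daily'  > "$base/jail2/etc/crontab"   # uncommented: bad
for f in "$base"/* ; do
  if grep 'periodic daily' "$f/etc/crontab" | grep -qv '^#' ; then
    echo "uncommented: $f"
  fi
done
rm -rf "$base"
```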
&lt;br /&gt;
&lt;br /&gt;
== Problems un-mounting - and with mount_null’s ==&lt;br /&gt;
&lt;br /&gt;
If you cannot unmount a filesystem because it says the filesystem is busy, it is usually because of one of the following:&lt;br /&gt;
&lt;br /&gt;
a) the jail is still running&lt;br /&gt;
&lt;br /&gt;
b) you are actually in that directory, even though the jail is stopped&lt;br /&gt;
&lt;br /&gt;
c) there are still dev, null_mount or linprocfs mount points mounted inside that directory.&lt;br /&gt;
&lt;br /&gt;
d) when trying to umount null_mounts that are really long and you get an error like “No such file or directory”, it’s an OS bug where the dir is truncated. No known fix&lt;br /&gt;
&lt;br /&gt;
e) there are still files open somewhere inside the dir. Use &amp;lt;tt&amp;gt;fstat | grep &amp;lt;cid&amp;gt;&amp;lt;/tt&amp;gt; to find the process that has files open&lt;br /&gt;
&lt;br /&gt;
f) Starting with 6.x, the jail mechanism does a poor job of keeping track of processes running in a jail and if it thinks there are still procs running, it will refuse to umount the disk. If this is happening you should see a low number in the #REF column when you run jls. In this case you &#039;&#039;can&#039;&#039; safely &amp;lt;tt&amp;gt;umount -f&amp;lt;/tt&amp;gt; the mount. &lt;br /&gt;
&lt;br /&gt;
Please note - if you forcibly unmount a (4.x) filesystem that has null_mounts&lt;br /&gt;
still mounted in it, the system &#039;&#039;&#039;will crash&#039;&#039;&#039; within 10-15 mins.&lt;br /&gt;
&lt;br /&gt;
== Misc jail Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space on jail2, jail8, jail9, a couple of jails on jail17, jail4, jail12 and jail18.&lt;br /&gt;
Even though the vn file shows 4G size, it doesn’t actually occupy that amount of space on the disk. So be careful not to fill up drives where we’re overselling – use oversellcheck to confirm you’re not oversold by more than 10G.&lt;br /&gt;
There are other truncated jails; they are generally noted in the file /root/truncated on the root system&lt;br /&gt;
&lt;br /&gt;
The act of moving a truncated vn to another system undoes the truncation - the truncated vn is filled with 0’s and occupies the full physical disk space for which it’s configured. So, you should use dumpremote to preserve the truncation.&lt;br /&gt;
&lt;br /&gt;
* if you are getting disk full messages for a BSD customer, it&#039;s fairly safe to clear out their /usr/ports/distfiles dir&lt;br /&gt;
* 4.x: ps and top can only be run by root in these jails.  Done on purpose:&lt;br /&gt;
As for `ps` and `top` - non-root users can run them, just not successfully because we have locked the permissions on /dev/mem and /dev/kmem to be root-readable only.  That is why non-root users cannot successfully run `ps` and `top`.&lt;br /&gt;
* user quotas do not work on freebsd jails - you cannot set up quotas at all, and that&#039;s that.&lt;br /&gt;
* You cannot inject a process into a 4.x jail, only HUP running processes. 6.x and onward you can with jexec&lt;br /&gt;
* jails see base machine’s uptime/load when running top/w&lt;br /&gt;
* if someone is unable to get in - can’t ping, etc. - see if they were blocked by castle (DoS), and see if their IP is on the system (post-reboot it may have been lost because it wasn’t in rc.conf; preboot should catch that)&lt;br /&gt;
* in FreeBSD you can’t su to root unless you belong to the wheel group – so if you remove your acct and set up a new one, we have to add it to the wheel group (add to /etc/group)&lt;br /&gt;
* dmesg from the underlying system is seen in a customer’s dmesg in the jail&lt;br /&gt;
* Popper process going crazy @ 40% for 10min = someone who leaves mail on server&lt;br /&gt;
* Don’t force umounts on 4.x jails – it crashes the machine, generally ok on newer machines (running md)&lt;br /&gt;
* Good book for admin http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?userid=t824VyRAYz&amp;amp;isbn=0596005164&amp;amp;itm=2&lt;br /&gt;
* Self-sign ssl cert http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#selfcert&lt;br /&gt;
* conversation with Glenn about semaphores and pgsql&lt;br /&gt;
&amp;lt;pre&amp;gt;SDBoody: hey, these are valid amounts/figures/increments right:&lt;br /&gt;
 kern.ipc.semmni=1280&lt;br /&gt;
 kern.ipc.semmns=1280&lt;br /&gt;
 &lt;br /&gt;
gr8feen: probably... I always forget exactly what those are, so I usually have to look them up&lt;br /&gt;
SDBoody: semaphores&lt;br /&gt;
SDBoody: i took the current 1024 and added 256 to them&lt;br /&gt;
SDBoody: need more for pgsql&lt;br /&gt;
gr8feen: I meant the mni and mns parts...&lt;br /&gt;
gr8feen: some of those are not ones you just want to add to...   hang on a sec and I&#039;ll look them up..&lt;br /&gt;
gr8feen: what&#039;s semmsl set to?&lt;br /&gt;
SDBoody: kern.ipc.semmsl: 1024&lt;br /&gt;
 &lt;br /&gt;
SDBoody: kern.ipc.msgseg: 2048 &lt;br /&gt;
 kern.ipc.msgssz: 8 &lt;br /&gt;
 kern.ipc.msgtql: 40 &lt;br /&gt;
 kern.ipc.msgmnb: 2048 &lt;br /&gt;
 kern.ipc.msgmni: 40 &lt;br /&gt;
 kern.ipc.msgmax: 16384 &lt;br /&gt;
 kern.ipc.semaem: 16384 &lt;br /&gt;
 kern.ipc.semvmx: 65534 &lt;br /&gt;
 kern.ipc.semusz: 152 &lt;br /&gt;
 kern.ipc.semume: 10 &lt;br /&gt;
 kern.ipc.semopm: 100 &lt;br /&gt;
 kern.ipc.semmsl: 1024 &lt;br /&gt;
 kern.ipc.semmnu: 512 &lt;br /&gt;
 kern.ipc.semmns: 1024 &lt;br /&gt;
 kern.ipc.semmni: 1024 &lt;br /&gt;
 kern.ipc.semmap: 768 &lt;br /&gt;
 kern.ipc.shm_allow_removed: 0 &lt;br /&gt;
 kern.ipc.shm_use_phys: 1 &lt;br /&gt;
 kern.ipc.shmall: 262144 &lt;br /&gt;
 kern.ipc.shmseg: 256 &lt;br /&gt;
 kern.ipc.shmmni: 784 &lt;br /&gt;
 kern.ipc.shmmin: 1 &lt;br /&gt;
 kern.ipc.shmmax: 536870912 &lt;br /&gt;
 kern.ipc.maxsockets: 25600 &lt;br /&gt;
 &lt;br /&gt;
gr8feen: ok...msl is max per id, mni is max ids, mns is max number of semaphores... so you probably want something like mns = mni * msl&lt;br /&gt;
gr8feen: which one did you run out of?&lt;br /&gt;
SDBoody: not sure how to tell- ipcs shows the sems in use add up to 1024&lt;br /&gt;
SDBoody: there are 59 entries&lt;br /&gt;
gr8feen: I&#039;m assuming you tried to start postgres and it failed?&lt;br /&gt;
SDBoody: yes&lt;br /&gt;
gr8feen: it should have logged why, somewhere..&lt;br /&gt;
gr8feen: if I recall, it&#039;ll tell you which one it ran out of&lt;br /&gt;
SDBoody: &amp;gt; DETAIL:  Failed system call was semget(1, 17, 03600).&lt;br /&gt;
 &lt;br /&gt;
gr8feen: so it wanted an id with 17 semaphores...I&#039;d start by making mns = 17*mni and leave mni and mnl set to what they are now and see what it does&lt;br /&gt;
SDBoody: i think mni is plenty high&lt;br /&gt;
SDBoody: ok, more reasonable 17408&lt;br /&gt;
gr8feen: yeah...just change that one and see how it goes..&lt;br /&gt;
SDBoody: and leave mni alone at 1024?&lt;br /&gt;
gr8feen: yeah...mni id the max number of ids...but if it&#039;s trying to get something like 17 per id, your going to hit mns before you hit anything else&lt;br /&gt;
SDBoody: right, but doesn&#039;t hurt to have it that high (assuming)&lt;br /&gt;
&lt;br /&gt;
gr8feen: not really...  I think those get allocated out of ram that you cant page out, but it&#039;s still such a small amount that it really doesn&#039;t matter&lt;br /&gt;
SDBoody: looks like that worked, thx!&lt;br /&gt;
gr8feen: cool&lt;br /&gt;
&lt;br /&gt;
see semaphores: ipcs -a -s&amp;lt;/pre&amp;gt;&lt;br /&gt;
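&lt;br /&gt;
The sizing rule from that conversation reduces to mns = semaphores-per-id × mni. A sketch of the arithmetic, assuming postgres requests ids of 17 semaphores each (per the failed semget(1, 17, 03600)):&lt;br /&gt;

```shell
# kern.ipc.semmni = max semaphore ids; kern.ipc.semmns = max total semaphores.
semmni=1024   # left as-is
per_id=17     # semaphores per id that postgres asked for
echo "kern.ipc.semmns=$((per_id * semmni))"   # prints kern.ipc.semmns=17408
```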
&lt;br /&gt;
= FreeBSD VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
These files are located in /usr/local/jail/rc.d and /usr/local/jail/bin&lt;br /&gt;
&lt;br /&gt;
== jailmake ==&lt;br /&gt;
&lt;br /&gt;
Applies to 7.x+ &lt;br /&gt;
On older systems syntax differs, run jailmake once to see.&lt;br /&gt;
&lt;br /&gt;
Note: this procedure differs on mx2 which is 7.x but still uses gvinum&lt;br /&gt;
&lt;br /&gt;
#	run js to figure out which md’s are in use, which disk has enough space, IP to put it on&lt;br /&gt;
#	use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
#	copy over dir, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;Usage: jailmake IP[,IP] CID disk[1|2|3] md# hostname shorthost ipfw# email [size in GB]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex: &lt;br /&gt;
&lt;br /&gt;
 Jail2# jailmake 69.55.234.66 col01334 3 97 vps.bsd.it vps 1334 fb@bsd.it&lt;br /&gt;
&lt;br /&gt;
== jailps ==&lt;br /&gt;
 jailps [hostname]&lt;br /&gt;
DEPRECATED FOR jps: displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname of the jail you wish to query. If you don’t &lt;br /&gt;
supply an argument, all processes on the machine are listed and grouped by jail. &lt;br /&gt;
&lt;br /&gt;
== jps ==&lt;br /&gt;
 jps [hostname]&lt;br /&gt;
displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname or ID of the jail you wish to query. &lt;br /&gt;
&lt;br /&gt;
== jailkill ==&lt;br /&gt;
 jailkill &amp;lt;hostname&amp;gt;&lt;br /&gt;
stops all processes running in a jail.&lt;br /&gt;
&lt;br /&gt;
You can also run:&lt;br /&gt;
 jailkill &amp;lt;JID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== problems ===&lt;br /&gt;
Occasionally you will hit an issue where jail will not kill off:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailkill www.domain.com&lt;br /&gt;
www.domain.com .. killed: none&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
Because no processes are running under that hostname.  You cannot use jailps.pl either:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailps www.domain.com&lt;br /&gt;
www.domain.com doesn’t exist on this server&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reasons for this are usually:&lt;br /&gt;
* the jail is no longer running&lt;br /&gt;
&lt;br /&gt;
* the jail&#039;s hostname has changed&lt;br /&gt;
In this case, &lt;br /&gt;
&lt;br /&gt;
&amp;gt;=6.x: run a &amp;lt;tt&amp;gt;jls|grep &amp;lt;jail&#039;s IP&amp;gt;&amp;lt;/tt&amp;gt; to find the correct hostname, then update the quad file, then kill the jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;6.x: the first step is to cat their /etc/rc.conf file to see if you can tell what they set the new hostname to.  This very often works.  For example:&lt;br /&gt;
&lt;br /&gt;
 cat /mnt/data2/198.78.65.136-col00261-DIR/etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
But maybe they set the hostname with the hostname command, and the original hostname is still in /etc/rc.conf.&lt;br /&gt;
&lt;br /&gt;
The welcome email clearly states that they should tell us if they change their hostname, so there is no problem in just emailing them and asking them what they set the new hostname to.&lt;br /&gt;
&lt;br /&gt;
Once you know the new hostname OR if a customer simply emails to inform you that they have set the hostname to something different, you need to edit the quad and safe files that their system is in to input the new hostname.&lt;br /&gt;
&lt;br /&gt;
However, if push comes to shove and you cannot find out the hostname from them or from their system, then you need to start doing some detective work.&lt;br /&gt;
&lt;br /&gt;
The easiest thing to do is run jailps looking for a hostname similar to their original hostname. Or you could get into the /bin/sh shell by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then looking at every hostname of every process:&lt;br /&gt;
&lt;br /&gt;
 for f in `ls /proc` ; do cat /proc/$f/status ; done&lt;br /&gt;
&lt;br /&gt;
and scanning for a hostname that is either similar to their original hostname, or that you don&#039;t see in any of the quad safe files.&lt;br /&gt;
&lt;br /&gt;
This is very brute force though, and it is possible that catting every file in /proc is dangerous - I don&#039;t recommend it.  A better thing would be to identify any processes that you know belong to this system – perhaps the reason you are trying to find this system is because they are running something bad - and just catting the status from only that PID.&lt;br /&gt;
&lt;br /&gt;
Somewhere there’s a jail where there may be 2 systems named www. Look at /etc/rc.conf and make sure they’re both really www. If they are, jailkill www, then jailps www to make sure it’s not running. Then immediately restart the other one, using the fqdn (as found from a reverse nslookup)&lt;br /&gt;
&lt;br /&gt;
* on &amp;gt;=6.x the hostname may not yet be hashed:&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9 /# jls&lt;br /&gt;
 JID Hostname                    Path                                  IP Address(es)&lt;br /&gt;
   1 bitnet.dgate.org            /mnt/data1/69.55.232.50-col02094-DIR  69.55.232.50&lt;br /&gt;
   2 ns3.hctc.net                /mnt/data1/69.55.234.52-col01925-DIR  69.55.234.52&lt;br /&gt;
   3 bsd1                        /mnt/data1/69.55.232.44-col00155-DIR  69.55.232.44&lt;br /&gt;
   4 let2.bbag.org               /mnt/data1/69.55.230.92-col00202-DIR  69.55.230.92&lt;br /&gt;
   5 post.org                    /mnt/data2/69.55.232.51-col02095-DIR  69.55.232.51 ...&lt;br /&gt;
   6 ns2                         /mnt/data1/69.55.232.47-col01506-DIR  69.55.232.47 ...&lt;br /&gt;
   7 arlen.server.net            /mnt/data1/69.55.232.52-col01171-DIR  69.55.232.52&lt;br /&gt;
   8 deskfood.com                /mnt/data1/69.55.232.71-col00419-DIR  69.55.232.71&lt;br /&gt;
   9 mirage.confluentforms.com   /mnt/data1/69.55.232.54-col02105-DIR  69.55.232.54 ...&lt;br /&gt;
  10 beachmember.com             /mnt/data1/69.55.232.59-col02107-DIR  69.55.232.59&lt;br /&gt;
  11 www.agottem.com             /mnt/data1/69.55.232.60-col02109-DIR  69.55.232.60&lt;br /&gt;
  12 sdhobbit.myglance.org       /mnt/data1/69.55.236.82-col01708-DIR  69.55.236.82&lt;br /&gt;
  13 ns1.jnielsen.net            /mnt/data1/69.55.234.48-col00204-DIR  69.55.234.48 ...&lt;br /&gt;
  14 ymt.rollingegg.net          /mnt/data2/69.55.236.71-col01678-DIR  69.55.236.71&lt;br /&gt;
  15 verse.unixlore.net          /mnt/data1/69.55.232.58-col02131-DIR  69.55.232.58&lt;br /&gt;
  16 smcc-mail.org               /mnt/data2/69.55.232.68-col02144-DIR  69.55.232.68&lt;br /&gt;
  17 kasoutsuki.w4jdh.net        /mnt/data2/69.55.232.46-col02147-DIR  69.55.232.46&lt;br /&gt;
  18 dili.thium.net              /mnt/data2/69.55.232.80-col01901-DIR  69.55.232.80&lt;br /&gt;
  20 www.tekmarsis.com           /mnt/data2/69.55.232.66-col02155-DIR  69.55.232.66&lt;br /&gt;
  21 vps.yoxel.net               /mnt/data2/69.55.236.67-col01673-DIR  69.55.236.67&lt;br /&gt;
  22 smitty.twitalertz.com       /mnt/data2/69.55.232.84-col02153-DIR  69.55.232.84&lt;br /&gt;
  23 deliver4.klatha.com         /mnt/data2/69.55.232.67-col02160-DIR  69.55.232.67&lt;br /&gt;
  24 nideffer.com                /mnt/data2/69.55.232.65-col00412-DIR  69.55.232.65&lt;br /&gt;
  25 usa.hanyuan.com             /mnt/data2/69.55.232.57-col02163-DIR  69.55.232.57&lt;br /&gt;
  26 daifuku.ppbh.com            /mnt/data2/69.55.236.91-col01720-DIR  69.55.236.91&lt;br /&gt;
  27 collins.greencape.net       /mnt/data2/69.55.232.83-col01294-DIR  69.55.232.83&lt;br /&gt;
  28 ragebox.com                 /mnt/data2/69.55.230.104-col01278-DIR 69.55.230.104&lt;br /&gt;
  29 outside.mt.net              /mnt/data2/69.55.232.72-col02166-DIR  69.55.232.72&lt;br /&gt;
  30 vps.payneful.ca             /mnt/data2/69.55.234.98-col01999-DIR  69.55.234.98&lt;br /&gt;
  31 higgins                     /mnt/data2/69.55.232.87-col02165-DIR  69.55.232.87 ...&lt;br /&gt;
  32 ozymandius                  /mnt/data2/69.55.228.96-col01233-DIR  69.55.228.96&lt;br /&gt;
  33 trusted.realtors.org        /mnt/data2/69.55.238.72-col02170-DIR  69.55.238.72&lt;br /&gt;
  34 jc1.flanderous.com          /mnt/data2/69.55.239.22-col01504-DIR  69.55.239.22&lt;br /&gt;
  36 guppylog.com                /mnt/data2/69.55.238.73-col00036-DIR  69.55.238.73&lt;br /&gt;
  40 haliohost.com               /mnt/data2/69.55.234.41-col01916-DIR  69.55.234.41 ...&lt;br /&gt;
  41 satyr.jorge.cc              /mnt/data1/69.55.232.70-col01963-DIR  69.55.232.70&lt;br /&gt;
jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
ERROR: jail_: jail &amp;quot;satyr,jorge,cc&amp;quot; not found&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note how it&#039;s saying &amp;lt;tt&amp;gt;satyr,jorge,cc&amp;lt;/tt&amp;gt; is not found, and not &amp;lt;tt&amp;gt;satyr.jorge.cc&amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The jail subsystem tracks things using comma-delimited hostnames. The comma-delimited name map is rebuilt every few hours from cron:&lt;br /&gt;
&lt;br /&gt;
 jail9 /# crontab -l&lt;br /&gt;
 0 0,6,12,18 * * * /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
So if we run this manually:&lt;br /&gt;
 jail9 /# /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
Then kill the jail:&lt;br /&gt;
 jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
 successfully killed: satyr,jorge,cc&lt;br /&gt;
&lt;br /&gt;
It worked.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you ever see this when trying to kill a jail:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# jailkill e-scribe.com&lt;br /&gt;
killing JID: 6 hostname: e-scribe.com&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailkill|jailkill]]&amp;lt;/tt&amp;gt; probably got lost trying to kill off the jail. Just ctrl-c the jailkill process, then run a jailps on the hostname, and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; any process which is still running. Keep running jailps and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; till all processes are gone.&lt;br /&gt;
&lt;br /&gt;
== jailpsall ==&lt;br /&gt;
 jailpsall&lt;br /&gt;
will run a jailps on all jails configured in the quad files (this is different from&lt;br /&gt;
jailps with no arguments as it won’t help you find a “hidden” system)&lt;br /&gt;
&lt;br /&gt;
== jailpsw ==&lt;br /&gt;
 jailpsw&lt;br /&gt;
will run a jailps with an extra -w to provide wider output&lt;br /&gt;
&lt;br /&gt;
== jt (&amp;gt;=7.x) ==&lt;br /&gt;
 jt&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;gt;=7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
a wrapper for top displaying processes on the server and which jail owns them. Constantly updates, like top. &lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;lt;7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== stopjail ==&lt;br /&gt;
 stopjail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will jailkill, umount and vnconfig -u a jail. If passed an optional 2nd&lt;br /&gt;
argument, it will not exit before umounting and un-vnconfig’ing in the event&lt;br /&gt;
jailkill returns no processes killed. This is useful if you just want to umount&lt;br /&gt;
and vnconfig -u a jail you’ve already killed. It is intelligent in that it won’t &lt;br /&gt;
try to umount or vnconfig -u if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== startjail ==&lt;br /&gt;
 startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
this will start vnconfig, mount (including linprocfs and null-mounts), and start a jail.&lt;br /&gt;
Essentially, it reads the jail’s relevant block from the right quad file and executes it.&lt;br /&gt;
It is intelligent in that it won’t try to mount or vnconfig if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== jpid ==&lt;br /&gt;
 jpid &amp;lt;pid&amp;gt;&lt;br /&gt;
displays information about a process – including which jail owns it.&lt;br /&gt;
It’s the equivalent of running cat /proc/&amp;lt;pid&amp;gt;/status&lt;br /&gt;
&lt;br /&gt;
== canceljail ==&lt;br /&gt;
 canceljail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will stop a jail (the equivalent of stopjail), check for backups (offer to remove them &lt;br /&gt;
from the backup server and the backup.config), rename the vnfile, remove the dir, and &lt;br /&gt;
edit quad/safe. If passed an optional 2nd argument, it will not exit upon failing to kill&lt;br /&gt;
any processes owned by the jail. This is useful if you just want to cancel a jail which &lt;br /&gt;
is already stopped.&lt;br /&gt;
&lt;br /&gt;
== jls ==&lt;br /&gt;
 jls [-v]&lt;br /&gt;
Lists all jails running:&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF IP Address      Hostname                     Path&lt;br /&gt;
 101  135 69.55.224.148   mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#REF is the number of references or procs(?) running&lt;br /&gt;
&lt;br /&gt;
Running with -v will give you all IPs assigned to each jail (7.2 up)&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF Hostname                     Path                                  IP Address(es)&lt;br /&gt;
 101  139 mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR 69.55.224.148 69.55.234.85&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== startalljails ==&lt;br /&gt;
 startalljails&lt;br /&gt;
7.2+ only. This will parse through quad1 and start all jails. It utilizes lockfiles so it won’t try to start a jail more than once- therefore multiple instances can be running in parallel without fear of starting a jail twice. If a jail startup gets stuck, you can ^C without fear of killing the script. IMPORTANT- before running startalljails you should make sure you ran preboot once as it will clear out all the lockfiles and enable startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== aaccheck.sh ==&lt;br /&gt;
 aaccheck.sh&lt;br /&gt;
displays the output of container list and task list from aaccli&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update jail scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== buildsafe ==&lt;br /&gt;
 buildsafe&lt;br /&gt;
creates safe files based on quads (automatically removing the fsck’s). This will destructively overwrite safe files&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be setup as a cronjob to watch processes on a jail when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== checkprio.pl ==&lt;br /&gt;
 checkprio.pl&lt;br /&gt;
will look for any process (other than the current shell’s csh, sh, sshd procs) with a non-normal priority and normalize it&lt;br /&gt;
&lt;br /&gt;
== diskusagemon == &lt;br /&gt;
 diskusagemon &amp;lt;mount point&amp;gt; &amp;lt;1k blocks&amp;gt;&lt;br /&gt;
watches a mount point’s disk use, when it reaches the level specified in the 2nd argument,&lt;br /&gt;
it exits. This is useful when doing a restore and you want to be paged as it’s nearing completion.&lt;br /&gt;
Best used as: &amp;lt;tt&amp;gt;diskusagemon /asd/asd 1234; pagexxx&amp;lt;/tt&amp;gt;&lt;br /&gt;
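The polling loop behind diskusagemon is simple; a minimal sketch of the same idea (the helper name is hypothetical, this is not the real script):&lt;br /&gt;

```shell
# Minimal sketch of a diskusagemon-style watcher (hypothetical helper,
# not the real script): poll df's 1K-block usage for a mount point and
# return once it reaches the threshold, so a pager can be chained after.
watch_usage() {
  mount=$1 threshold=$2
  while :; do
    used=$(df -k "$mount" | awk 'NR==2 {print $3}')
    [ "$used" -ge "$threshold" ] && return 0
    sleep 60
  done
}

watch_usage / 1 && echo "threshold reached"
```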
&lt;br /&gt;
== dumprestore ==&lt;br /&gt;
 dumprestore &amp;lt;dumpfile&amp;gt;&lt;br /&gt;
this is a perl expect script which automatically enters ‘1’ and ‘y’. On large restores, it seems to cause&lt;br /&gt;
restore to fail to set owner permissions.&lt;br /&gt;
&lt;br /&gt;
== g ==&lt;br /&gt;
 g &amp;lt;search&amp;gt;&lt;br /&gt;
greps the quad/safe files for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gather.pl ==&lt;br /&gt;
 gather.pl&lt;br /&gt;
gathers up data about jails configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== ipfwbackup ==&lt;br /&gt;
 ipfwbackup&lt;br /&gt;
writes ipfw traffic count data to a logfile&lt;br /&gt;
&lt;br /&gt;
== ipfwreset ==&lt;br /&gt;
 ipfwreset&lt;br /&gt;
writes ipfw traffic count data to a logfile and resets counters to 0&lt;br /&gt;
&lt;br /&gt;
== js ==&lt;br /&gt;
 js&lt;br /&gt;
output varies by OS version, but generally provides information about the base jail:&lt;br /&gt;
- which vn’s are in use&lt;br /&gt;
- disk usage&lt;br /&gt;
- info about the contents of quads&lt;br /&gt;
- the # of inodes represented by the jails contained in the group (133.2 in the example below), and how many jails per data mount, as well as subtotals&lt;br /&gt;
- ips bound to the base machine but not in use by a jail&lt;br /&gt;
- free gvinum volumes, or unused vn’s or used md’s&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/local/jail/rc.d/quad1:&lt;br /&gt;
        /mnt/data1 133.2 (1)&lt;br /&gt;
        /mnt/data2 1040.5 (7)&lt;br /&gt;
        total 1173.7 (8)&lt;br /&gt;
/usr/local/jail/rc.d/quad2:&lt;br /&gt;
        /mnt/data1 983.4 (6)&lt;br /&gt;
        total 983.4 (6)&lt;br /&gt;
/usr/local/jail/rc.d/quad3:&lt;br /&gt;
        /mnt/data1 693.4 (4)&lt;br /&gt;
        /mnt/data2 371.6 (3)&lt;br /&gt;
        total 1065 (7)&lt;br /&gt;
/usr/local/jail/rc.d/quad4:&lt;br /&gt;
        /mnt/data1 466.6 (3)&lt;br /&gt;
        /mnt/data2 882.2 (5)&lt;br /&gt;
        total 1348.8 (8)&lt;br /&gt;
/mnt/data1: 2276.6 (14)&lt;br /&gt;
/mnt/data2: 2294.3 (15)&lt;br /&gt;
&lt;br /&gt;
Available IPs:&lt;br /&gt;
69.55.230.11 69.55.230.13 69.55.228.200&lt;br /&gt;
&lt;br /&gt;
Available volumes:&lt;br /&gt;
v78 /mnt/data2 2G&lt;br /&gt;
v79 /mnt/data2 2G&lt;br /&gt;
v80 /mnt/data2 2G&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg  - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== makevirginjail ==&lt;br /&gt;
 makevirginjail&lt;br /&gt;
Only on some systems, makes an empty jail (doesn&#039;t do restore step)&lt;br /&gt;
&lt;br /&gt;
== mb == &lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== notify.sh ==&lt;br /&gt;
 notify.sh&lt;br /&gt;
emails reboot@johncompanies.com – intended to be called at boot time to alert us to a machine which panics and reboots and isn’t caught by bb or castle.&lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatch ==&lt;br /&gt;
 orphanedbackupwatch&lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to delete them&lt;br /&gt;
&lt;br /&gt;
== postboot ==&lt;br /&gt;
 postboot&lt;br /&gt;
to be run after a machine reboot and quad/safe’s are done executing. It will:&lt;br /&gt;
* do chmod 666 on each jail’s /dev/null&lt;br /&gt;
* add ipfw counts&lt;br /&gt;
* run jailpsall (so you can see if a configured jail isn’t running)&lt;br /&gt;
&lt;br /&gt;
== preboot ==&lt;br /&gt;
 preboot&lt;br /&gt;
to be run before running quad/safe – checks for misconfigurations: &lt;br /&gt;
* a jail configured in a quad but not a safe&lt;br /&gt;
* a jail is listed more than once in a quad&lt;br /&gt;
* the ip assigned to a jail isn’t configured on the machine&lt;br /&gt;
* alias numbering skips in the rc.conf (resulting in the above)&lt;br /&gt;
* orphaned vnfiles that aren&#039;t mentioned in a quad/safe&lt;br /&gt;
* ip mismatches between dir/vnfile name and the jail’s ip&lt;br /&gt;
* dir/vnfiles in quad/safe that don’t exist &lt;br /&gt;
&lt;br /&gt;
== quadanalyze.pl ==&lt;br /&gt;
 quadanalyze.pl&lt;br /&gt;
called by js, produces the info (seen above with js explanation) about the contents of quad (inode count, # of jails, etc.)&lt;br /&gt;
&lt;br /&gt;
== rsync.backup ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== taskdone ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was executed as the subject&lt;br /&gt;
&lt;br /&gt;
== topten ==&lt;br /&gt;
 topten&lt;br /&gt;
summarizes the top 10 traffic users (called by ipfwreset)&lt;br /&gt;
&lt;br /&gt;
== trafficgather.pl ==&lt;br /&gt;
 trafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by jail to support@johncompanies.com and payments@johncompanies.com. Optional arguments are year and month (must be in the past). If not passed, assumes last month. Relies on traffic logs created by ipfwreset and ipfwbackup&lt;br /&gt;
&lt;br /&gt;
== trafficwatch.pl ==&lt;br /&gt;
 trafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a jail reaches the warning level (35G) and the limit (40G). We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== trafstats ==&lt;br /&gt;
 trafstats&lt;br /&gt;
writes ipfw traffic usage info by jail to a file called jc_traffic_dump in each jail’s / dir&lt;br /&gt;
&lt;br /&gt;
== truncate_jailmake ==&lt;br /&gt;
 truncate_jailmake&lt;br /&gt;
a version of jailmake which creates truncated vnfiles.&lt;br /&gt;
&lt;br /&gt;
== vb ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/bin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs (freebsd) ==&lt;br /&gt;
 vs&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/safe&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vq (freebsd) ==&lt;br /&gt;
 vq&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/quad&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dumpremote ==&lt;br /&gt;
 dumpremote &amp;lt;user@machine&amp;gt; &amp;lt;/remote/location/file-dump&amp;gt; &amp;lt;vnX&amp;gt;&lt;br /&gt;
ex: dumpremote user@10.1.4.117 /mnt/data3/remote.echoditto.com-dump 7&lt;br /&gt;
this will dump a vn filesystem to a remote machine and location&lt;br /&gt;
&lt;br /&gt;
== oversellcheck ==&lt;br /&gt;
 oversellcheck&lt;br /&gt;
displays how much a disk is oversold or undersold taking into account truncated vn files. Only for use on 4.x systems&lt;br /&gt;
&lt;br /&gt;
== mvbackups (freebsd) ==&lt;br /&gt;
 mvbackups &amp;lt;dir&amp;gt; (1.1.1.1-col00001-DIR) &amp;lt;target_machine&amp;gt; (jail1) &amp;lt;target_dir&amp;gt; (data1)&lt;br /&gt;
moves backups from one location to another on the backup server, and provides you with option to remove entries from current backup.config, and simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== jailnice ==&lt;br /&gt;
 jailnice &amp;lt;hostname&amp;gt;&lt;br /&gt;
applies &amp;lt;tt&amp;gt;renice 19 [PID]&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;rtprio 31 -[PID]&amp;lt;/tt&amp;gt; to each process in the given jail&lt;br /&gt;
&lt;br /&gt;
== dumpremoterestore ==&lt;br /&gt;
 dumpremoterestore &amp;lt;device&amp;gt; &amp;lt;ip of target machine&amp;gt; &amp;lt;dir on target machine&amp;gt;&lt;br /&gt;
ex: dumpremoterestore /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&lt;br /&gt;
dumps a device and restores it to a directory on a remote machine. Requires that you enable root ssh on the &lt;br /&gt;
remote machine.&lt;br /&gt;
&lt;br /&gt;
== psj ==&lt;br /&gt;
 psj&lt;br /&gt;
shows just the procs running on the base system – a ps auxw but without jail’d procs present&lt;br /&gt;
&lt;br /&gt;
== perc5iraidchk ==&lt;br /&gt;
 perc5iraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2950 systems with Perc5/6 controllers&lt;br /&gt;
&lt;br /&gt;
== perc4eraidchk ==&lt;br /&gt;
 perc4eraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2850 systems with Perc4e/Di controllers&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS =&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vm) ==&lt;br /&gt;
&lt;br /&gt;
This applies only to new virts &amp;gt;= 4.x&lt;br /&gt;
&lt;br /&gt;
grab ip from ipmap (if opened from the pending cust screen it should take you to the right block). You can also run vzlist -a to see what block is in use, generally. Try to find an IP in the same class C block already in use on the box.&lt;br /&gt;
&lt;br /&gt;
1. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
  also, ping the IP address.&lt;br /&gt;
&lt;br /&gt;
2. put CT on whichever partition has more space&lt;br /&gt;
  df -h&lt;br /&gt;
&lt;br /&gt;
3.  vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; ) &lt;br /&gt;
 vm col00009 69.55.230.238 centos.testdave.com /vz1 dsmith@johncompanies.com centos-6-x86_64 LM&lt;br /&gt;
&lt;br /&gt;
4. copy veid, dir, ip and password to pending customer screen. activate customer&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vemakexxx) ==&lt;br /&gt;
&lt;br /&gt;
This applies to older virts with old templates. This should probably not be used at all anymore.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. look thru hist for ip&lt;br /&gt;
&lt;br /&gt;
2. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
&lt;br /&gt;
3. put ve on whichever partition has more space&lt;br /&gt;
 vemakerh9 &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; &amp;lt;hostname&amp;gt; &amp;lt;mount&amp;gt; &amp;lt;email&amp;gt; [gb disk]; &amp;lt;256|384|512&amp;gt; &amp;lt;veid&amp;gt;&lt;br /&gt;
 vemakerh9 866 69.55.226.109 ngentu.com /vz1 ayo@ngantu.com,asd@asd.com 5; 256 866&lt;br /&gt;
&lt;br /&gt;
4. copy (veid), dir, and ip to pending customer screen (pass set to p455agfa)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: We use VEID (Virtual Environment ID) and CTID (Container ID) interchangeably. Similarly, VE and CT. They mean the same thing.&lt;br /&gt;
VZPP = VirtuoZzo Power Panel (the control panel for each CT)&lt;br /&gt;
&lt;br /&gt;
All linux systems exist in /vz, /vz1 or /vz2 - since each linux machine holds roughly 60-90 customers, there will be roughly 30-45 in each partition.&lt;br /&gt;
&lt;br /&gt;
The actual filesystem of the system in question is in:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/private/(VEID)&lt;br /&gt;
&lt;br /&gt;
Where VEID is the identifier for that system - an all-numeric string larger than 100.&lt;br /&gt;
&lt;br /&gt;
The actual mounted and running systems are in the corresponding:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/root/(VEID)&lt;br /&gt;
&lt;br /&gt;
But we rarely interact with any system from this mount point.&lt;br /&gt;
&lt;br /&gt;
You should never need to touch the root portion of their system. However, you can traverse their filesystem by going to &amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/root&amp;lt;/tt&amp;gt; (&amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/fs/root&amp;lt;/tt&amp;gt; on 4.x systems): the root of their filesystem is in that directory, and their entire system is underneath it.&lt;br /&gt;
&lt;br /&gt;
Every VE has a startup script in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts&amp;lt;/tt&amp;gt;  (which is symlinked as &amp;lt;tt&amp;gt;/vzconf&amp;lt;/tt&amp;gt; on all systems) - the VE startup script is simply named &amp;lt;tt&amp;gt;(VEID).conf&amp;lt;/tt&amp;gt; - it contains all the system parameters for that VE:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Configuration file generated by vzsplit for 60 VE&lt;br /&gt;
# on HN with total amount of physical mem 2011 Mb&lt;br /&gt;
&lt;br /&gt;
VERSION=&amp;quot;2&amp;quot;&lt;br /&gt;
CLASSID=&amp;quot;2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ONBOOT=&amp;quot;yes&amp;quot;&lt;br /&gt;
&lt;br /&gt;
KMEMSIZE=&amp;quot;8100000:8200000&amp;quot;&lt;br /&gt;
LOCKEDPAGES=&amp;quot;322:322&amp;quot;&lt;br /&gt;
PRIVVMPAGES=&amp;quot;610000:615000&amp;quot;&lt;br /&gt;
SHMPAGES=&amp;quot;33000:34500&amp;quot;&lt;br /&gt;
NUMPROC=&amp;quot;410:415&amp;quot;&lt;br /&gt;
PHYSPAGES=&amp;quot;0:2147483647&amp;quot;&lt;br /&gt;
VMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
OOMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
NUMTCPSOCK=&amp;quot;1210:1215&amp;quot;&lt;br /&gt;
NUMFLOCK=&amp;quot;107:117&amp;quot;&lt;br /&gt;
NUMPTY=&amp;quot;19:19&amp;quot;&lt;br /&gt;
NUMSIGINFO=&amp;quot;274:274&amp;quot;&lt;br /&gt;
TCPSNDBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
TCPRCVBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
OTHERSOCKBUF=&amp;quot;900000:950000&amp;quot;&lt;br /&gt;
DGRAMRCVBUF=&amp;quot;200000:200000&amp;quot;&lt;br /&gt;
NUMOTHERSOCK=&amp;quot;650:660&amp;quot;&lt;br /&gt;
DCACHE=&amp;quot;786432:818029&amp;quot;&lt;br /&gt;
NUMFILE=&amp;quot;7500:7600&amp;quot;&lt;br /&gt;
AVNUMPROC=&amp;quot;51:51&amp;quot;&lt;br /&gt;
IPTENTRIES=&amp;quot;155:155&amp;quot;&lt;br /&gt;
DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
DISKINODES=&amp;quot;400000:420000&amp;quot;&lt;br /&gt;
CPUUNITS=&amp;quot;1412&amp;quot;&lt;br /&gt;
QUOTAUGIDLIMIT=&amp;quot;2000&amp;quot;&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/636&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/636&amp;quot;&lt;br /&gt;
NAMESERVER=&amp;quot;69.55.225.225 69.55.230.3&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;vzredhat-7.3/20030305&amp;quot;&lt;br /&gt;
VE_TYPE=&amp;quot;regular&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.225.229&amp;quot;&lt;br /&gt;
HOSTNAME=&amp;quot;textengine.net&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the hostname is set here, the disk space is set here, the number of inodes, the number of files that can be open, the number of tcp sockets, etc. - all are set here.&lt;br /&gt;
&lt;br /&gt;
In fact, everything that can be set on this customer system is set in this conf file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All interaction with the customer system is done with the VEID.  You start the system by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl start 999&lt;br /&gt;
&lt;br /&gt;
You stop it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl stop 999&lt;br /&gt;
&lt;br /&gt;
You execute commands in it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl exec 999 df -k&lt;br /&gt;
&lt;br /&gt;
You enter into it, via a root-shell backdoor with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and you set parameters for the system, while it is still running, with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; is the most commonly used command - we have aliased &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; since we use it so often. We’ll continue to use &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; in our examples, but feel free to use just &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say the user wants more diskspace.  You can cat their conf file and see:&lt;br /&gt;
&lt;br /&gt;
 DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So right now they have 4gigs of space.  You can then change it to 6 with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
IMPORTANT:  all issuances of the vzctl set command need to end with &amp;lt;tt&amp;gt;--save&amp;lt;/tt&amp;gt; - if they don&#039;t, the setting will be set, but it will not be saved to the conf file, and they will not have those settings next time they boot.&lt;br /&gt;
&lt;br /&gt;
All of the tunables in the conf file can be set with the vzctl set command.  Note that in the conf file, and on the vzctl set command line, we always issue two numbers separated by a colon - that is because we are setting the hard and soft limits.  Always set the hard limit slightly above the soft limit, as you see it is in the conf file for all those settings.&lt;br /&gt;
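For reference, the diskspace pair is counted in 1K blocks; a quick sketch of deriving the two numbers from a size in GB (the ~2% headroom on the hard limit here is illustrative, not a fixed policy):&lt;br /&gt;

```shell
# Derive DISKSPACE soft:hard values (in 1K blocks) from a size in GB.
# The ~2% headroom on the hard limit is illustrative, not a fixed policy.
gb=6
soft=$((gb * 1024 * 1024))    # 6 GB -> 6291456 1K blocks
hard=$((soft + soft / 50))    # soft + ~2% headroom
echo "vzctl set 999 --diskspace ${soft}:${hard} --save"
```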
&lt;br /&gt;
There are also things you can set with `&amp;lt;tt&amp;gt;vzctl set&amp;lt;/tt&amp;gt;` that are not in the conf file as settings, per se.  For instance, you can add IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --save&lt;br /&gt;
&lt;br /&gt;
or multiple IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --ipadd 10.10.20.30 --save&lt;br /&gt;
&lt;br /&gt;
or change the hostname:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --hostname www.example.com --save&lt;br /&gt;
&lt;br /&gt;
You can even set the nameservers:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --nameserver 198.78.66.4 --nameserver 198.78.70.180 --save&lt;br /&gt;
&lt;br /&gt;
Although you probably will never do that.&lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --disabled yes --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (&amp;lt;=3.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --onboot=no --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from using his control panel:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --offline_management=no --save &lt;br /&gt;
&lt;br /&gt;
You can suspend a VPS, so it can be resumed in the same state it was in when it was stopped (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl suspend 999&lt;br /&gt;
&lt;br /&gt;
and to resume it:&lt;br /&gt;
&lt;br /&gt;
 vzctl resume 999&lt;br /&gt;
&lt;br /&gt;
to see who owns process:&lt;br /&gt;
 vzpid &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to mount up an unmounted ve:&lt;br /&gt;
 vzctl mount 827&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see network stats for CT&#039;s:&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
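Summing a column out of that table is a quick awk job; a sketch against canned rows (column positions assumed from the sample above, header/footer already stripped):&lt;br /&gt;

```shell
# Sum the Output(bytes) column (in MB) from vznetstat-style rows,
# with the header/footer lines already stripped. Rows are canned here.
sample='24218 1 484M 39M
24245 1 463M 143M'
echo "$sample" | awk '{gsub(/M/,"",$3); total += $3} END {print total "M out"}'
```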
&lt;br /&gt;
&lt;br /&gt;
One thing that sometimes comes up on older systems that we created with smaller defaults is that the system would run out of inodes.  The user will email and say they cannot create any more files or grow any files larger, but they will also say that they are not out of diskspace ... they are running:&lt;br /&gt;
&lt;br /&gt;
 df -k&lt;br /&gt;
&lt;br /&gt;
and seeing how much space is free - and they are not out of space.  They are most likely out of inodes - which they would see by running:&lt;br /&gt;
&lt;br /&gt;
 df -i&lt;br /&gt;
&lt;br /&gt;
So, the first thing you should do is enter their system with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and run:  &amp;lt;tt&amp;gt;df -i&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to confirm your theory.  Then exit their system.  Then simply cat their conf file and see what their inodes are set to (probably 200000:200000, since that was the old default on the older systems) and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskinodes 400000:400000 --save&lt;br /&gt;
&lt;br /&gt;
If they are not out of inodes, then a good possibility is that they have maxed out their numfile configuration variable, which controls how many files they can have in their system.  The current default is 7500 (which nobody has ever hit), but the old default was as low as 2000, so you would run something like:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --numfile 7500:7500 --save&lt;br /&gt;
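To tell the two cases apart at a glance, the df -i output can be filtered; a small sketch against canned output (on a live CT you would pipe df -i in directly):&lt;br /&gt;

```shell
# Flag filesystems at/above 95% inode use from `df -i`-style output.
# The sample rows are canned; on a live CT, pipe `df -i` in instead.
sample='Filesystem Inodes IUsed IFree IUse% Mounted
/dev/vzfs 200000 199998 2 99% /'
echo "$sample" | awk 'NR>1 && $5+0 >= 95 {print $1, "at", $5, "inodes used"}'
```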
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You cannot start or stop a VE if your pwd is its private (/vz/private/999) or root (/vz/root/999) directories, or anywhere below them.&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (linux) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all ve’s back up as soon as possible. Note the time, you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there’s any kernel error messages output. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log – these will also likely need to be sent to virtuozzo for interpretation. If the messages are spewing too fast, hit ^O + H to start a screen log dump which you can observe after the machine is rebooted. Additionally, if the  machine is responsive, you can get a trace to send to virtuozzo by hooking up a kvm and entering these 3 sequences:&lt;br /&gt;
&amp;lt;pre&amp;gt;alt+print screen+m&lt;br /&gt;
alt+print screen+p&lt;br /&gt;
alt+print screen+t&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If there are no messages, the machine may just be really busy- wait a bit (5-10min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. if you can’t ssh into the DRAC card and issue racadm serveraction hardreset), then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console (&amp;lt;tt&amp;gt;tip virtxx&amp;lt;/tt&amp;gt;) immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts- look out for the RAID card output- does everything look healthy? The output may be scrambled, look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple times during the boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly then you won’t get any console output, 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
=== Start containers/VE&#039;s/VPSs ===&lt;br /&gt;
When the machine begins to start VE’s, it’s safe to leave the console and login via ssh. All virts should be set to auto start all the VEs after a crash. Further, most (newer) virts are set to “fastboot” their VE’s (to find out, do:&lt;br /&gt;
 grep -i fast /etc/sysconfig/vz &lt;br /&gt;
and look for &amp;lt;tt&amp;gt;VZFASTBOOT=yes&amp;lt;/tt&amp;gt;). If this was set prior to the machine’s crash (setting it after the machine boots will not have any effect until the vz service is restarted) it will start each ve as fast as possible, in serial, then go thru each VE (serially), shutting it down, running a vzquota (disk usage) check, then bringing it back up. The benefit is that all VE’s are brought up quickly (within 15min or so depending on the #), the downside is a customer watching closely will notice 2 outages – 1st the machine crash, 2nd their quota check (which will be a much shorter downtime- on the order of a few minutes). &lt;br /&gt;
&lt;br /&gt;
Where “fastboot” is not set to yes (i.e. on quar1), vz will start them consecutively, checking the quotas one at a time, and the 60th VE may not start until an hour or two later - this is not acceptable.&lt;br /&gt;
&lt;br /&gt;
The good news is, if you run vzctl start for a VE that is already started, you will simply get an error: &amp;lt;tt&amp;gt;VE is already started&amp;lt;/tt&amp;gt;.  Further, if you attempt to vzctl start a VE that is in the process of being started, you will simply get an error: unable to lock VE.  So, there is no danger in simply running scripts to start smaller sets of VEs.  If the system is not autostarting, then there is no issue, and even if it does, when it conflicts, one process (yours or the autostart) will lose, and just move on to the next one.&lt;br /&gt;
&lt;br /&gt;
A script has been written to assist with ve starts: [[#startvirt.pl|startvirt.pl]] which will start 6 ve’s at once until there are no more left.  If startvirt.pl  is used on a system where “fastboot” was on,  it will circumvent the fastboot for ve’s started by startvirt.pl – they will go through the complete quota check before starting- therefore this is not advisable when a system has crashed. When a system is booted cleanly, and there&#039;s no need for vzquota checks, then startvirt.pl is safe and advisable to run.&lt;br /&gt;
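The lock-per-VE idea behind startvirt.pl can be sketched in shell (names and paths hypothetical; echo stands in for the real vzctl start):&lt;br /&gt;

```shell
# Sketch of startvirt.pl's locking idea (hypothetical, not the real script):
# a lock directory per VEID means overlapping runs never start a VE twice.
lockroot=$(mktemp -d)
start_all() {
  for veid in "$@"; do
    mkdir "$lockroot/$veid" 2>/dev/null || continue  # already claimed, skip
    echo "starting $veid"   # stand-in for: vzctl start "$veid"
  done
}
start_all 101 102
start_all 101 103   # 101 is already locked, so only 103 starts here
```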
&lt;br /&gt;
=== Make sure all containers are running ===&lt;br /&gt;
You can quickly get a feel for how many ve’s are started by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt4 log]# vs&lt;br /&gt;
VEID 16066 exist mounted running&lt;br /&gt;
VEID 16067 exist mounted running&lt;br /&gt;
VEID 4102 exist mounted running&lt;br /&gt;
VEID 4112 exist mounted running&lt;br /&gt;
VEID 4116 exist mounted running&lt;br /&gt;
VEID 4122 exist mounted running&lt;br /&gt;
VEID 4123 exist mounted running&lt;br /&gt;
VEID 4124 exist mounted running&lt;br /&gt;
VEID 4132 exist mounted running&lt;br /&gt;
VEID 4148 exist mounted running&lt;br /&gt;
VEID 4151 exist mounted running&lt;br /&gt;
VEID 4155 exist mounted running&lt;br /&gt;
VEID 42 exist mounted running&lt;br /&gt;
VEID 432 exist mounted running&lt;br /&gt;
VEID 434 exist mounted running&lt;br /&gt;
VEID 442 exist mounted running&lt;br /&gt;
VEID 450 exist mounted running&lt;br /&gt;
VEID 452 exist mounted running&lt;br /&gt;
VEID 453 exist mounted running&lt;br /&gt;
VEID 454 exist mounted running&lt;br /&gt;
VEID 462 exist mounted running&lt;br /&gt;
VEID 463 exist mounted running&lt;br /&gt;
VEID 464 exist mounted running&lt;br /&gt;
VEID 465 exist mounted running&lt;br /&gt;
VEID 477 exist mounted running&lt;br /&gt;
VEID 484 exist mounted running&lt;br /&gt;
VEID 486 exist mounted running&lt;br /&gt;
VEID 490 exist mounted running&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So to see how many ve’s have started:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep running | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And to see how many haven’t:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep down | wc -l&lt;br /&gt;
     0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And how many we should have running:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another tool you can use to see which ve’s have started, among other things is [[#vzstat|vzstat]]. It will give you CPU, memory, and other  stats on each ve and the overall system. It’s a good thing to watch as ve’s are starting (note the VENum parameter, it will tell you how many have started):&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;4:37pm, up 3 days,  5:31,  1 user, load average: 1.57, 1.68, 1.79&lt;br /&gt;
VENum 40, procs 1705: running 2, sleeping 1694, unint 0, zombie 9, stopped 0&lt;br /&gt;
CPU [ OK ]: VEs  57%, VE0   0%, user   8%, sys   7%, idle  85%, lat(ms) 412/2&lt;br /&gt;
Mem [ OK ]: total 6057MB, free 9MB/54MB (low/high), lat(ms) 0/0&lt;br /&gt;
Swap [ OK ]: tot 6142MB, free 4953MB, in 0.000MB/s, out 0.000MB/s&lt;br /&gt;
Net [ OK ]: tot: in  0.043MB/s  402pkt/s, out  0.382MB/s 4116pkt/s&lt;br /&gt;
Disks [ OK ]: in 0.002MB/s, out 0.000MB/s&lt;br /&gt;
&lt;br /&gt;
  VEID ST    %VM     %KM         PROC    CPU     SOCK FCNT MLAT IP&lt;br /&gt;
     1 OK 1.0/17  0.0/0.4    0/32/256 0.0/0.5 39/1256    0    9 69.55.227.152&lt;br /&gt;
    21 OK 1.3/39  0.1/0.2    0/46/410 0.2/2.8 23/1860    0    6 69.55.239.60&lt;br /&gt;
   133 OK 3.1/39  0.1/0.3    1/34/410 6.3/2.8 98/1860    0    0 69.55.227.147&lt;br /&gt;
   263 OK 2.3/39  0.1/0.2    0/56/410 0.3/2.8 34/1860    0    1 69.55.237.74&lt;br /&gt;
   456 OK  17/39  0.1/0.2   0/100/410 0.1/2.8 48/1860    0   11 69.55.236.65&lt;br /&gt;
   476 OK 0.6/39  0.0/0.2    0/33/410 0.1/2.8 96/1860    0   10 69.55.227.151&lt;br /&gt;
   524 OK 1.8/39  0.1/0.2    0/33/410 0.0/2.8 28/1860    0    0 69.55.227.153&lt;br /&gt;
   594 OK 3.1/39  0.1/0.2    0/45/410 0.0/2.8 87/1860    0    1 69.55.239.40&lt;br /&gt;
   670 OK 7.7/39  0.2/0.3    0/98/410 0.0/2.8 64/1860    0  216 69.55.225.136&lt;br /&gt;
   691 OK 2.0/39  0.1/0.2    0/31/410 0.0/0.7 25/1860    0    1 69.55.234.96&lt;br /&gt;
   744 OK 0.1/17  0.0/0.5    0/10/410 0.0/0.7  7/1860    0    6 69.55.224.253&lt;br /&gt;
   755 OK 1.1/39  0.0/0.2    0/27/410 0.0/2.8 33/1860    0    0 192.168.1.4&lt;br /&gt;
   835 OK 1.1/39  0.0/0.2    0/19/410 0.0/2.8  5/1860    0    0 69.55.227.134&lt;br /&gt;
   856 OK 0.3/39  0.0/0.2    0/13/410 0.0/2.8 16/1860    0    0 69.55.227.137&lt;br /&gt;
   936 OK 3.2/52  0.2/0.4    0/75/410 0.2/0.7 69/1910    0    8 69.55.224.181&lt;br /&gt;
  1020 OK 3.9/39  0.1/0.2    0/60/410 0.1/0.7 55/1860    0    8 69.55.227.52&lt;br /&gt;
  1027 OK 0.3/39  0.0/0.2    0/14/410 0.0/2.8 17/1860    0    0 69.55.227.83&lt;br /&gt;
  1029 OK 1.9/39  0.1/0.2    0/48/410 0.2/2.8 25/1860    0    5 69.55.227.85&lt;br /&gt;
  1032 OK  12/39  0.1/0.4    0/80/410 0.0/2.8 41/1860    0    8 69.55.227.90&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you are all done, you will want to make sure that all the VEs really did get started: run vs one more time.&lt;br /&gt;
&lt;br /&gt;
Note the time all ve’s are back up, enter it into the crash log entry, and save it.&lt;br /&gt;
&lt;br /&gt;
Occasionally, a ve will not start automatically. The most common reason for a ve not to come up normally is that it was at its disk limit before the crash, and will not start since it’s over the limit. To overcome this, set the disk space to the current usage level (the system will give you this figure when the start fails), start the ve, then reset the disk space back to the prior level. Lastly, contact the customer to let them know they’re out of disk (or allocate more disk if they&#039;re entitled to more).&lt;br /&gt;
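That recovery sequence can be sketched as a helper that prints the commands to run rather than running them (a hypothetical illustration: use the current-usage figure the failed start reports and the soft:hard pair from the ve's conf file, not the sample values here):

```shell
# recover_disk_limit (sketch): print the three commands used to start a ve
# that is stuck over its disk limit. Arguments: VEID, current usage (as
# reported by the failed start), and the original soft:hard diskspace pair.
recover_disk_limit() {
    veid="$1"; usage="$2"; orig="$3"
    echo "vzctl set $veid --diskspace $usage:$usage --save"  # raise to usage
    echo "vzctl start $veid"                                 # now it starts
    echo "vzctl set $veid --diskspace $orig --save"          # restore limits
}
```

Echoing first lets you sanity-check the numbers before pasting the commands in on the virt.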
&lt;br /&gt;
== Hitting performance barriers and fixing them ==&lt;br /&gt;
&lt;br /&gt;
There are multiple modes virtuozzo offers for allocating resources to a ve. We utilize two: SLM and UBC parameters.&lt;br /&gt;
On our 4.x systems, we use all SLM – it’s simpler to manage and understand. There are a few systems on virt19/18 that may also use SLM. Everything else uses UBC. &lt;br /&gt;
You can tell a SLM ve by:&lt;br /&gt;
&lt;br /&gt;
 SLMMODE=&amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in their conf file. &lt;br /&gt;
&lt;br /&gt;
TODO: detail SLM modes and parameters.&lt;br /&gt;
&lt;br /&gt;
If someone is in SLM mode and they hit memory resource limits, they simply need to upgrade to more memory.&lt;br /&gt;
&lt;br /&gt;
The following applies to everyone else (UBC).&lt;br /&gt;
&lt;br /&gt;
Customers will often email and say that they are getting out of memory errors - a common one is &amp;quot;cannot fork&amp;quot; ... basically, anytime you see something odd like this, it means they are hitting one of their limits that is in place in their conf file.&lt;br /&gt;
&lt;br /&gt;
The conf file, however, simply shows their limits - how do we know what they are currently at?&lt;br /&gt;
&lt;br /&gt;
The answer is the file /proc/user_beancounters - it contains the current status (and peaks) of their performance settings, and also counts how many times they have hit each barrier.  The output of the file looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;764: kmemsize         384113     898185    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          0        322        322          0&lt;br /&gt;
     privvmpages        1292       7108     610000     615000          0&lt;br /&gt;
     shmpages            270        528      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc               8         23        410        415          0&lt;br /&gt;
     physpages            48       5624          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages        641       6389      13019 2147483647          0&lt;br /&gt;
     numtcpsock            3         21       1210       1215          0&lt;br /&gt;
     numflock              1          3        107        117          0&lt;br /&gt;
     numpty                0          2         19         19          0&lt;br /&gt;
     numsiginfo            0          4        274        274          0&lt;br /&gt;
     tcpsndbuf             0      80928    1800000    1900000          0 &lt;br /&gt;
     tcprcvbuf             0     108976    1800000    1900000          0&lt;br /&gt;
     othersockbuf       2224      37568     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4272     200000     200000          0&lt;br /&gt;
     numothersock          3          9        650        660          0&lt;br /&gt;
     dcachesize        53922     100320     786432     818029          0&lt;br /&gt;
     numfile             161        382       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first column is the name of the counter in question - the same names we saw in the system’s conf file.  The second column is the _current_ value of that counter, the third column is the maximum that counter has ever reached, the fourth column is the soft limit, and the fifth column is the hard limit (which is the same as the numbers in that system’s conf file).&lt;br /&gt;
&lt;br /&gt;
The sixth number is the failcount - how many times the current usage has risen to hit the barrier.  It will increase as soon as the current usage hits the soft limit.&lt;br /&gt;
&lt;br /&gt;
The problem with /proc/user_beancounters is that it actually contains that set of data for every running VE - so you can&#039;t just cat /proc/user_beancounters - it is too long and you get info for every other running system.&lt;br /&gt;
&lt;br /&gt;
You can vzctl enter the system and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 9999&lt;br /&gt;
 cat /proc/user_beancounters&lt;br /&gt;
&lt;br /&gt;
inside their system, and you will just see the stats for their particular system, but entering their system every time you want to see it is cumbersome.&lt;br /&gt;
&lt;br /&gt;
So, I wrote a simple script called &amp;quot;vzs&amp;quot; which simply greps for the VEID, and spits out the next 20 or so lines (however many lines there are in the output, I forget) after it.  For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzs 765:&lt;br /&gt;
765: kmemsize        2007936    2562780    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          8        322        322          0&lt;br /&gt;
     privvmpages       26925      71126     610000     615000          0&lt;br /&gt;
     shmpages          16654      16750      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc              41         57        410        415          0&lt;br /&gt;
     physpages          1794      49160          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages       4780      51270      13019 2147483647          0&lt;br /&gt;
     numtcpsock           23         37       1210       1215          0&lt;br /&gt;
     numflock             17         39        107        117          0&lt;br /&gt;
     numpty                1          3         19         19          0&lt;br /&gt;
     numsiginfo            0          6        274        274          0&lt;br /&gt;
     tcpsndbuf         22240     333600    1800000    1900000          0&lt;br /&gt;
     tcprcvbuf             0     222656    1800000    1900000          0&lt;br /&gt;
     othersockbuf     104528     414944     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4448     200000     200000          0&lt;br /&gt;
     numothersock         73        105        650        660          0&lt;br /&gt;
     dcachesize       247038     309111     786432     818029          0&lt;br /&gt;
     numfile             904       1231       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That showed us just the portion of /proc/user_beancounters for system 765.&lt;br /&gt;
&lt;br /&gt;
When you run the vzs command, always add a : after the VEID.&lt;br /&gt;
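For reference, the guts of vzs are probably nothing more than an awk match on the VEID header line. Below is a hypothetical sketch (the real script on the virts may differ); the optional second argument lets it read a saved copy of the file instead of /proc/user_beancounters:

```shell
# vzs (sketch): print the /proc/user_beancounters section for a single VE.
# Hypothetical reimplementation of the in-house script; invoke with the
# trailing colon, e.g. "vzs 765:", just like the real tool.
vzs() {
    veid="$1"
    ubc="${2:-/proc/user_beancounters}"   # optional file arg for testing
    awk -v id="$veid" '
        $1 == id          { show = 1; print; next }  # header line for our VE
        show && $1 ~ /:$/ { exit }                   # next VE header: stop
        show              { print }
    ' "$ubc"
}
```

Matching on "765:" rather than "765" is also why the trailing colon matters: without it, the header line's first field never matches exactly and nothing is printed.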
&lt;br /&gt;
So, if a customer complains about some out of memory errors, or no more files, or no more ptys, or just has an unspecific complaint about processes dying, etc., the very first thing you need to do is check their beancounters with vzs.  Usually you will spot an item that has a high failcount and needs to be upped.&lt;br /&gt;
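A quick way to spot the item that needs upping is to filter for nonzero failcounts. This little helper is just an illustration (not a standard tool on the virts); it assumes the failcnt is the last field on each line, as in the output above:

```shell
# failing (sketch): read beancounter lines on stdin and print
# "counter failcnt" for any counter whose failcount is nonzero.
failing() {
    awk '{
        name = ($1 ~ /:$/) ? $2 : $1   # the VE header line leads with "VEID:"
        fail = $NF
        if (fail + 0 > 0) print name, fail
    }'
}
```

You would feed it the VE's section of /proc/user_beancounters, e.g. `vzs 765: | failing`.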
&lt;br /&gt;
At that point you could simply up the counter with `vzctl set`.  Generally pick a number 10-20% higher than the old one, and make the hard limit slightly larger than the soft limit. However our systems now come in several levels and those levels have more/different memory allocations. If someone is complaining about something other than a memory limit (pty, numiptent, numflock), it’s generally safe to increase it, at least to the same level as what’s in the /vzconf/4unlimited file on the newest virt. If someone is hitting a memory limit, first make sure they are given what they deserve:&lt;br /&gt;
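The 10-20% rule of thumb can be sketched as follows; the exact percentages and the echo-instead-of-execute behavior are illustrative choices, not policy (vzctl takes UBC values as soft:hard pairs):

```shell
# bump (sketch): print the vzctl command that raises a counter's soft limit
# by ~15%, with the hard limit ~2% above the new soft limit. Echoed rather
# than executed so the numbers can be sanity-checked first.
bump() {
    veid="$1"; counter="$2"; soft="$3"
    new_soft=$(( soft * 115 / 100 ))
    new_hard=$(( new_soft * 102 / 100 ))
    echo "vzctl set $veid --$counter $new_soft:$new_hard --save"
}
```

For example, `bump 765 numfile 7500` prints `vzctl set 765 --numfile 8625:8797 --save`.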
&lt;br /&gt;
(refer to mgmt -&amp;gt; payments -&amp;gt; packages)&lt;br /&gt;
&lt;br /&gt;
To set those levels, you use the [[#setmem|setmem]] command. &lt;br /&gt;
&lt;br /&gt;
The alternate (DEPRECATED) method would be to use one of these commands:&lt;br /&gt;
256 &amp;lt;veid&amp;gt;&lt;br /&gt;
300 &amp;lt;veid&amp;gt;&lt;br /&gt;
384 &amp;lt;veid&amp;gt;&lt;br /&gt;
512 &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the levels were not right (you’d run vzs &amp;lt;veid&amp;gt; before and after to see the effect) tell the customer they’ve been adjusted and be done with it. If the levels were right, tell the customer they must upgrade to a higher package, tell them how to see their level (control panel), and that they can reboot their system to escape this lockup condition.&lt;br /&gt;
&lt;br /&gt;
Customers can also complain that their site is totally unreachable, or complain that it is down ... if the underlying machine is up, and all seems well, you may notice in the beancounters that network-specific counters are failing - such as numtcpsock, tcpsndbuf or tcprcvbuf.  This will keep them from talking on the network and make it seem like their system is down.  Again, just up the limits and things should be fine.&lt;br /&gt;
&lt;br /&gt;
On virts 1-4, you should first look at the default settings for that item on a later virt, such as virt 8 - we have increased the defaults a lot since the early machines.  So, if you are going to up a counter on virt2, instead of upping it by 10-20%, up it to the new default that you see on virt8.&lt;br /&gt;
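To see how far behind an old limit is, you can pull the soft:hard pair for a counter straight out of a conf file and compare it against the same counter on the newer virt. A minimal helper, assuming the usual NAME="soft:hard" conf format (illustrative only):

```shell
# conf_val (sketch): print the soft:hard value of one counter from a VE conf
# file, e.g. conf_val /vzconf/1234.conf NUMFILE.
conf_val() {
    # $1 = conf file, $2 = counter name as it appears in the conf
    grep "^$2=" "$1" | tr -d '"' | cut -d= -f2
}
```

Run it against the VE's conf on the old virt and against a comparable conf on virt8 to see the gap at a glance.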
&lt;br /&gt;
== Moving a VE to another virt (migrate/migrateonline) ==&lt;br /&gt;
&lt;br /&gt;
This will take a while to complete - and it is best to do this at night when the load is light on both machines.&lt;br /&gt;
&lt;br /&gt;
There are different methods for this, depending on which version of virtuozzo is installed on the src. and dst. virt. &lt;br /&gt;
To check which version is running: &lt;br /&gt;
 [root@virt12 private]# cat /etc/virtuozzo-release&lt;br /&gt;
 Virtuozzo release 2.6.0&lt;br /&gt;
&lt;br /&gt;
Ok, let&#039;s say that the VE is 1212, and vital stats are:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1212&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/1212&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/1212&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;fedora-core-2/20040903&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.229.84&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;devel-fc2/20040903 php-fc2/20040813 mysql-fc2/20040812 postgresql-fc2/20040813 mod_perl-fc2/20040812 mod_ssl-fc2/20040811 jre-fc2/20040823 jdk-fc2/20040823 mailman-fc2/20040823 analog-fc2/20040824 proftpd-fc2/20040818 tomcat-fc2/20040823 usermin-fc2/20040909 webmin-fc2/20040909 uw-imap-fc2/20040830 phpBB-fc2/20040831 spamassassin-fc2/20040910 PostNuke-fc2/20040824 sl-webalizer-fc2/20040&lt;br /&gt;
818&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# vzctl exec 1212 df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  405M  3.7G  10% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From this you can see that he’s using (and will minimally need free on the dst server) ~400MB, and he’s running on a Fedora 2 template, version 20040903. He’s also got a bunch of other templates installed. It is &#039;&#039;&#039;vital&#039;&#039;&#039; that &#039;&#039;&#039;all&#039;&#039;&#039; these templates exist on the dst system. To confirm that, on the dst system run:&lt;br /&gt;
&lt;br /&gt;
For &amp;lt; 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt14 private]# vzpkgls | grep fc2&lt;br /&gt;
devel-fc2 20040903&lt;br /&gt;
PostNuke-fc2 20040824&lt;br /&gt;
analog-fc2 20040824&lt;br /&gt;
awstats-fc2 20040824&lt;br /&gt;
bbClone-fc2 20040824&lt;br /&gt;
jdk-fc2 20040823&lt;br /&gt;
jre-fc2 20040823&lt;br /&gt;
mailman-fc2 20040823&lt;br /&gt;
mod_frontpage-fc2 20040816&lt;br /&gt;
mod_perl-fc2 20040812&lt;br /&gt;
mod_ssl-fc2 20040811&lt;br /&gt;
mysql-fc2 20040812&lt;br /&gt;
openwebmail-fc2 20040817&lt;br /&gt;
php-fc2 20040813&lt;br /&gt;
phpBB-fc2 20040831&lt;br /&gt;
postgresql-fc2 20040813&lt;br /&gt;
proftpd-fc2 20040818&lt;br /&gt;
sl-webalizer-fc2 20040818&lt;br /&gt;
spamassassin-fc2 20040910&lt;br /&gt;
tomcat-fc2 20040823&lt;br /&gt;
usermin-fc2 20040909&lt;br /&gt;
uw-imap-fc2 20040830&lt;br /&gt;
webmin-fc2 20040909&lt;br /&gt;
[root@virt14 private]# vzpkgls | grep fedora&lt;br /&gt;
fedora-core-1 20040121 20040818&lt;br /&gt;
fedora-core-devel-1 20040121 20040818&lt;br /&gt;
fedora-core-2 20040903&lt;br /&gt;
[root@virt14 private]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For these older systems, you can simply match up the date on the template. &lt;br /&gt;
&lt;br /&gt;
For &amp;gt;= 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt19 /vz2/private]# vzpkg list&lt;br /&gt;
centos-5-x86                    2008-01-07 22:05:57&lt;br /&gt;
centos-5-x86    devel&lt;br /&gt;
centos-5-x86    jre&lt;br /&gt;
centos-5-x86    jsdk&lt;br /&gt;
centos-5-x86    mod_perl&lt;br /&gt;
centos-5-x86    mod_ssl&lt;br /&gt;
centos-5-x86    mysql&lt;br /&gt;
centos-5-x86    php&lt;br /&gt;
centos-5-x86    plesk9&lt;br /&gt;
centos-5-x86    plesk9-antivirus&lt;br /&gt;
centos-5-x86    plesk9-api&lt;br /&gt;
centos-5-x86    plesk9-atmail&lt;br /&gt;
centos-5-x86    plesk9-backup&lt;br /&gt;
centos-5-x86    plesk9-horde&lt;br /&gt;
centos-5-x86    plesk9-mailman&lt;br /&gt;
centos-5-x86    plesk9-mod-bw&lt;br /&gt;
centos-5-x86    plesk9-postfix&lt;br /&gt;
centos-5-x86    plesk9-ppwse&lt;br /&gt;
centos-5-x86    plesk9-psa-firewall&lt;br /&gt;
centos-5-x86    plesk9-psa-vpn&lt;br /&gt;
centos-5-x86    plesk9-psa-fileserver&lt;br /&gt;
centos-5-x86    plesk9-qmail&lt;br /&gt;
centos-5-x86    plesk9-sb-publish&lt;br /&gt;
centos-5-x86    plesk9-vault&lt;br /&gt;
centos-5-x86    plesk9-vault-most-popular&lt;br /&gt;
centos-5-x86    plesk9-watchdog&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On these newer systems, it&#039;s difficult to tell whether the template on the dst matches the src exactly. Just because a centos-5-x86 is listed on both servers doesn&#039;t mean all the same packages are there on the dst. To truly know, you must perform a sample rsync:&lt;br /&gt;
&lt;br /&gt;
 rsync -avn /vz/template/centos/5/x86/ root@10.1.4.61:/vz/template/centos/5/x86/&lt;br /&gt;
&lt;br /&gt;
If you see a ton of output from the dry run command, then clearly there are some differences. You may opt to let the rsync complete (without running in dry run mode); the only downside is you&#039;ve now used up more space on the dst, and the centos template will be a mess of old and new data- it will be difficult if not impossible to undo (if someday we wanted to reclaim the space).&lt;br /&gt;
&lt;br /&gt;
If you choose to merge templates, you should closely inspect the dry run output. You should also take care to exclude anything in the /config directory. For example:&lt;br /&gt;
&lt;br /&gt;
 rsync -av -e ssh --stats --exclude=x86/config  /vz/template/ubuntu/10.04/ root@10.1.4.62:/vz/template/ubuntu/10.04/&lt;br /&gt;
&lt;br /&gt;
Which will avoid this directory and contents:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 /vz2/private]# ls /vz/template/ubuntu/10.04/x86/config*&lt;br /&gt;
app  os&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is important to avoid since the config may differ on the destination and we are really only interested in making sure the packages are there, not overwriting a newer config with an older one.&lt;br /&gt;
&lt;br /&gt;
If the dst system was missing a template, you have 2 choices: &lt;br /&gt;
# put the missing template on the dst system. 2 choices here: &lt;br /&gt;
## Install the template from rpm (found under backup2: /mnt/data4/vzrpms/distro/) or &lt;br /&gt;
## rsync over the template (found under /vz/template) - see above&lt;br /&gt;
# put the ve on a system which has all the proper templates&lt;br /&gt;
&lt;br /&gt;
=== pre-seeding a migration ===&lt;br /&gt;
&lt;br /&gt;
When migrating a customer (or when doing many) depending on how much data you have to transfer, it can take some time. Further, it can be difficult to gauge when a migration will complete or how long it will take. To help speed up the process and get a better idea about how long it will take you can pre-transfer a customer&#039;s data to the destination server. If done correctly, vzmigrate will see the pre-transferred data and pick up where you left off, having much less to transfer (just changed/new files). &lt;br /&gt;
&lt;br /&gt;
We believe vzmigrate uses rsync to do its transfer. Therefore not only can you use rsync to do a pre-seed, you can also run rsync to see what is causing a repeatedly-failing vzmigrate to fail. &lt;br /&gt;
&lt;br /&gt;
There&#039;s no magic to a pre-seed; you just need to make sure the target directory is named correctly.&lt;br /&gt;
&lt;br /&gt;
Given:&lt;br /&gt;
&lt;br /&gt;
source: /vz1/private/1234&lt;br /&gt;
&lt;br /&gt;
and you want to migrate to /vz2 on the target system, your rsync would look like:&lt;br /&gt;
&lt;br /&gt;
 rsync -av /vz1/private/1234/ root@x.x.x.x:/vz2/private/1234.migrated/&lt;br /&gt;
&lt;br /&gt;
After running that successful rsync, the ensuing migrateonline (or migrate) will take much less time to complete- depending on the # of files to be analyzed and the # of changed files. In any case, it&#039;ll be much, much faster than had you just started the migration from scratch.&lt;br /&gt;
&lt;br /&gt;
Further, as we discuss elsewhere in this topic, a failed migration can be moved from &amp;lt;tt&amp;gt;/vz/private/1234&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;/vz/private/1234.migrated&amp;lt;/tt&amp;gt; on the destination if you want to restart a failed migration. This should &#039;&#039;&#039;only&#039;&#039;&#039; be done if the migration failed and the CT is not running on the destination HN.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline instructions: src &amp;gt;=3.x -&amp;gt; dst&amp;gt;=3.x ===&lt;br /&gt;
&lt;br /&gt;
A script called [[#migrateonline|migrateonline]] was written to handle this kind of move. It is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; – vzmigrate is a util to seamlessly (no reboot of the ve necessary) move a ve from one host to another. This wrapper was initially written because virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from arp/route tables, causing problems when the ve was started up on the dst system. [[#migrate|migrate]] mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrateonline emails ve owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as it was on the src virt. Note: &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; is equivalent to &amp;lt;tt&amp;gt;migrateonline&amp;lt;/tt&amp;gt;, but will &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; a ve AND restart it in the process.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrateonline&lt;br /&gt;
usage: /usr/local/sbin/migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrateonline 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine.&lt;br /&gt;
&lt;br /&gt;
If they had backups, use the mvbackups command to move their backups to the new server:&lt;br /&gt;
&lt;br /&gt;
 mvbackups 1212 virt14 vz&lt;br /&gt;
&lt;br /&gt;
Rename the ve’s leftover files on the src virt:&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/migrated-1212&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/old-1212-migrated-20120404-noarchive&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems in mgmt to reflect the new path and server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If migrateonline does not work, you can try again using simply migrate- this will result in a brief reboot for the ve.&lt;br /&gt;
Before you try again, make sure of a few things:&lt;br /&gt;
&lt;br /&gt;
Depending on where the migration died, there may be partial data on the dst system in 1 of 2 places:&lt;br /&gt;
(given the example above)&lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
before you run migrate again, you&#039;ll want to rename so that all data is in &lt;br /&gt;
1212.migrated:&lt;br /&gt;
&lt;br /&gt;
 mv /vz/private/1212 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
this way, it will pick up where it left off and transfer only new files.&lt;br /&gt;
&lt;br /&gt;
Likewise, if you want to speed up a migration, you can pre-seed the dst as follows:&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# rsync -avSH /vz/private/1212/ root@10.1.4.64:/vz/private/1212.migrated/&lt;br /&gt;
&lt;br /&gt;
then when you run migrate or migrateonline, it will only need to move the changed files- the migration will complete quickly.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline/migrate failures (migrate manually) ===&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say for whatever reason the migration fails. If it fails with [[#migrateonline|migrateonline]], you should try [[#migrate|migrate]] (which will reboot the customer, so notify them ahead of time).&lt;br /&gt;
&lt;br /&gt;
You may want to run a [[#pre-seeding_a_migration|pre-seed]] rsync to see if you can find the problem. On older virts, we&#039;ve seen this problem due to a large logfile (which you can find and encourage the customer to remove/compress):&lt;br /&gt;
 find / -size +1048576k -exec ls -lh {} +&lt;br /&gt;
&lt;br /&gt;
You may also see migration failing due to quota issues.&lt;br /&gt;
&lt;br /&gt;
You can try to resolve by copying any quota file into the file you need:&lt;br /&gt;
&lt;br /&gt;
 cp /var/vzquota/quota.1 /var/vzquota/quota.xxx&lt;br /&gt;
&lt;br /&gt;
If it complains about quota running, you should then be able to stop it:&lt;br /&gt;
&lt;br /&gt;
 vzquota off xxxx&lt;br /&gt;
&lt;br /&gt;
If all else fails, migrate to a new VEID&lt;br /&gt;
i.e. 1234 becomes 12341&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the rsync or [[#migrate|migrate]] fails, you can always move someone manually:&lt;br /&gt;
&lt;br /&gt;
1. stop ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v stop 1234&lt;br /&gt;
&lt;br /&gt;
2. copy over data&amp;lt;br&amp;gt;&lt;br /&gt;
 rsync -avSH /vz/private/1234/ root@1.1.1.1:/vzX/private/1234/&lt;br /&gt;
&lt;br /&gt;
NOTE: if you&#039;ve previously seeded the data (run rsync while the VE was up/running), and this is a subsequent rsync, make sure the last rsync you do (while the VE is not running) has the --delete option&lt;br /&gt;
&lt;br /&gt;
3. copy over conf&amp;lt;br&amp;gt;&lt;br /&gt;
 scp /vzconf/1234.conf root@1.1.1.1:/vzconf&lt;br /&gt;
&lt;br /&gt;
4. on dst, edit the conf to reflect the right vzX dir&amp;lt;br&amp;gt;&lt;br /&gt;
 vi /vzconf/1234.conf&lt;br /&gt;
&lt;br /&gt;
5. on src remove the IPs&amp;lt;br&amp;gt;&lt;br /&gt;
 ipdel 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
6. on dst add IPs &amp;lt;br&amp;gt;&lt;br /&gt;
 ipadd 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
7. on dst, start ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v start 1234&lt;br /&gt;
&lt;br /&gt;
8. cancel, then archive ve on src per above instructions.&lt;br /&gt;
&lt;br /&gt;
=== migrate src=2.6.0 -&amp;gt; dst&amp;gt;=2.6.0, or mass-migration with customer notify ===&lt;br /&gt;
&lt;br /&gt;
A script called &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; was written to handle this kind of move. It is basically a wrapper for vzmigrate – vzmigrate is a util to seamlessly move a ve from one host to another. This wrapper was initially written because virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from arp/route tables, causing problems when the ve was started up on the dst system. migrate mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrate emails ve owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as it was on the src virt. Note: migrateonline is equivalent to migrate, but will migrate a ve from one 2.6 &#039;&#039;&#039;kernel&#039;&#039;&#039; machine to another 2.6 kernel machine without restarting the ve.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrate&lt;br /&gt;
usage: /usr/local/sbin/migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrate 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which migrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
&lt;br /&gt;
On 2.6.1 you’ll also have to move the private area:&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, [[#cancelve|cancelve]] would offer to remove them. You want to say &#039;&#039;&#039;no&#039;&#039;&#039; to this option – doing so would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems as in the example above will create actual files and consume much more space on the target drive.&lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
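&lt;br /&gt;
A quick sanity check of the hardlink behavior, using throwaway temp paths rather than the real backup dirs:&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/sh
# Demo: mv within ONE filesystem preserves hardlinks (the link count stays 2),
# which is why same-drive backup moves don't balloon disk usage.
# All paths are throwaway mktemp paths, not the real /mnt/data* backup dirs.
set -e
work=$(mktemp -d)
mkdir "$work/src" "$work/dst"
echo "data" > "$work/src/file"
ln "$work/src/file" "$work/hardlink"            # second link to the same inode
mv "$work/src/file" "$work/dst/file"            # same fs: inode is unchanged
links=$(stat -c %h "$work/dst/file" 2>/dev/null || stat -f %l "$work/dst/file")
echo "link count: $links"
rm -rf "$work"
```
Moving the files across filesystems instead would copy the data, breaking the links and doubling the space used.&lt;br /&gt;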
&lt;br /&gt;
To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, update the system, template (should be pre-selected as the same), and the shut down date.&lt;br /&gt;
&lt;br /&gt;
=== vzmigrate: src=2.6.1 -&amp;gt; dst&amp;gt;=2.6.0 ===&lt;br /&gt;
&lt;br /&gt;
This version of vzmigrate works properly with regard to handling ips. It will not notify ve owners of moves as in the above example. Other than that it’s essentially the same.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]#  vzmigrate 10.1.4.64 -r no 1212:1212:/vz/private/1212:/vz/root/1212&lt;br /&gt;
migrating on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which vzmigrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
[root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, &amp;lt;tt&amp;gt;cancelve&amp;lt;/tt&amp;gt; would offer to remove them. You want to say no to this option – doing so would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems as in the example above will create actual files and consume much more space on the target drive.&lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, update the system, template (should be pre-selected as the same), and the shut down date.&lt;br /&gt;
&lt;br /&gt;
=== src=2.5.x ===&lt;br /&gt;
&lt;br /&gt;
First, go to the private dir:&lt;br /&gt;
&lt;br /&gt;
 cd /vz1/private/&lt;br /&gt;
&lt;br /&gt;
Stop the VE - make sure it stops totally cleanly.&lt;br /&gt;
 &lt;br /&gt;
 vzctl stop 1212&lt;br /&gt;
&lt;br /&gt;
Then you’d use vemove - a script written to copy over the config, create tarballs of the ve’s data on the destination virt, and cancel the ve on the source system (in this example we’re going to put a ve that was in /vz1/private on the src virt, in /vz/private on the dst virt):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vemove&lt;br /&gt;
ERROR: Usage: vemove veid target_ip target_path_dir&lt;br /&gt;
[root@virt12 sbin]# vemove 1212 10.1.4.64 /vz/private/1212&lt;br /&gt;
tar cfpP - 1212 --ignore-failed-read | (ssh -2 -c arcfour 10.1.4.64 &amp;quot;split - -b 1024m /vz/private/1212.tar&amp;quot; )&lt;br /&gt;
scp /vzconf/1212.conf 10.1.4.64:/vzconf&lt;br /&gt;
cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, cancelve would offer to remove them. You want to say no to this option – doing so would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (make note of path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems as in the example above will create actual files and consume much more space on the target drive.&lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
When you are done, /vz/private on the dst virt will contain files like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;1212.taraa&lt;br /&gt;
1212.tarab&lt;br /&gt;
1212.tarac&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each one 1024m (or less, for the last one) in size.&lt;br /&gt;
&lt;br /&gt;
Go to /vz/private on the dst server and run:&lt;br /&gt;
&lt;br /&gt;
 cat 1212.tar?? | tar xpPBf -&lt;br /&gt;
&lt;br /&gt;
and after 20 mins or so it will be totally untarred.  Now since the conf&lt;br /&gt;
file is already there, you can go ahead and start the system.&lt;br /&gt;
&lt;br /&gt;
 vzctl start 1212&lt;br /&gt;
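&lt;br /&gt;
The split/cat/untar round trip above can be exercised locally with plain tar and tiny chunk sizes (all paths and sizes here are throwaway examples, and plain tar stands in for the virtuozzo tar):&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/sh
# Demo of the vemove transfer shape: archive a dir, split into fixed-size
# chunks (like 1212.taraa, 1212.tarab, ...), then reassemble with cat and
# untar. Uses tiny 32k chunks and plain tar, NOT the vz tar or real VE data.
set -e
work=$(mktemp -d)
cd "$work"
mkdir 1212
head -c 100000 /dev/urandom > 1212/blob        # stand-in for VE data
tar cf - 1212 | split -b 32k - 1212.tar        # makes 1212.taraa, ab, ac, ...
mkdir restore
cd restore
cat ../1212.tar?? | tar xf -                   # the ?? glob keeps chunk order
ok=$(cmp -s ../1212/blob 1212/blob && echo yes)
echo "round trip: $ok"
```
The `??` glob sorts the chunks alphabetically, which matches the order split wrote them in, so the stream reassembles byte-for-byte.&lt;br /&gt;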
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, update the system, template (should be pre-selected as the same), and the shut down date.&lt;br /&gt;
&lt;br /&gt;
NOTE: you MUST tar the system up using the virtuozzo version of tar that&lt;br /&gt;
is on all the virt systems, and further you MUST untar the tarball with&lt;br /&gt;
the virtuozzo tar, using these options:  `&amp;lt;tt&amp;gt;tar xpPBf -&amp;lt;/tt&amp;gt;`&lt;br /&gt;
&lt;br /&gt;
If you tar up an entire VE and move it to a non-virtuozzo machine, that is&lt;br /&gt;
ok, and you can untar it there with normal tar commands, but do not untar&lt;br /&gt;
it and then repack it with a normal tar and expect it to work - you need&lt;br /&gt;
to use virtuozzo tar commands on virtuozzo tarballs to make it work.&lt;br /&gt;
&lt;br /&gt;
The backups are sort of an exception, since we are just (usually)&lt;br /&gt;
restoring user data that was created after we gave them the system, and&lt;br /&gt;
therefore has nothing to do with magic symlinks or vz-rpms, etc.&lt;br /&gt;
&lt;br /&gt;
== Moving a VE on the same virt ==&lt;br /&gt;
&lt;br /&gt;
Easy way:&amp;lt;br&amp;gt;&lt;br /&gt;
Scenario 1: ve 123 is to be renamed 1231 and moved from vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:1231:/vz/private/1231:/vz/root/1231&lt;br /&gt;
&lt;br /&gt;
Scenario 2: ve 123 is to be moved vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:123:/vz/private/123:/vz/root/123&lt;br /&gt;
&lt;br /&gt;
vzmlocal will reboot the ve at the end of the move&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; don&#039;t forget to fix /usr/local/sbin/backup.config for new disk location&lt;br /&gt;
&lt;br /&gt;
Manual/old way:&lt;br /&gt;
&lt;br /&gt;
1) &amp;lt;tt&amp;gt;vzctl stop 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2) &amp;lt;tt&amp;gt;mv /vz1/private/123 /vz/private/.&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(or cp -a if you want to copy)&lt;br /&gt;
3) in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts/123.conf&amp;lt;/tt&amp;gt; change value&amp;lt;br&amp;gt;&lt;br /&gt;
of &#039;&amp;lt;tt&amp;gt;VE_PRIVATE&amp;lt;/tt&amp;gt;&#039; variable to point to a new private area location&lt;br /&gt;
4) &amp;lt;tt&amp;gt;vzctl start 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
5) update backups if needed: &amp;lt;tt&amp;gt;mvbackups 123 virtX virt1 vz&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
6) update management screens&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notes: a) absolute path to private area is stored in quota file &amp;lt;tt&amp;gt;/var/vzquota/quota.123&amp;lt;/tt&amp;gt; - so during first startup quota will be recalculated.&amp;lt;br&amp;gt;&lt;br /&gt;
b) if you&#039;re going to write a script to do the job, you MUST be sure that $VEID won&#039;t be expanded to &#039;&#039; (the empty string) in the ve config file - i.e. you need to escape the &#039;$&#039;. Otherwise you might end up with:&lt;br /&gt;
&lt;br /&gt;
 VE_PRIVATE=&amp;quot;/vz/private/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in config, and &#039;vzctl destroy&#039; for this VE ID &#039;&#039;&#039;will remove everything under /vz/private/ directory&#039;&#039;&#039;.&lt;br /&gt;
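&lt;br /&gt;
The expansion pitfall in note (b) can be reproduced with plain shell quoting (no real config files involved):&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/sh
# Demo of note (b): inside double quotes, an unescaped $VEID is expanded by
# the shell writing the config; if the variable is empty you get the
# dangerous VE_PRIVATE="/vz/private/". Escaping the $ keeps it literal.
VEID=""                                          # simulate the unset case
unescaped="VE_PRIVATE=\"/vz/private/$VEID\""     # expands NOW -> empty
escaped="VE_PRIVATE=\"/vz/private/\$VEID\""      # \$ survives into the config
echo "$unescaped"
echo "$escaped"
```
Only the second form is safe to write into a ve config file, since vzctl expands $VEID itself at run time.&lt;br /&gt;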
&lt;br /&gt;
== Adding a veth device to a VE ==&lt;br /&gt;
&lt;br /&gt;
Not totally sure what this is, but a customer asked for it and here&#039;s what we did (as instructed by vz support):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;v set 99 --netif_add eth99  --save&lt;br /&gt;
ipdel 99 69.55.230.58&lt;br /&gt;
v set 99 --ifname eth99 --ipadd 69.55.230.58 --save&lt;br /&gt;
v set 99 --ifname eth99 --gateway 69.55.230.1 --save&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active      eth0              veth77.77,veth99.99&lt;br /&gt;
veth_net          active&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
br99             bridge     net99&lt;br /&gt;
veth99.99        veth       net99&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        net99            69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br0 eth0&lt;br /&gt;
&lt;br /&gt;
(will remove 99 from orig net and move to veth_net)&lt;br /&gt;
vznetcfg net addif veth_net veth99.99&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
(delete the old crap)&lt;br /&gt;
vznetcfg net del net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then, to add another device in&lt;br /&gt;
&lt;br /&gt;
v set 77 --netif_add eth77  --save&lt;br /&gt;
ipdel 77 69.55.230.78&lt;br /&gt;
v set 77 --ifname eth77 --ipadd 69.55.230.78 --save&lt;br /&gt;
v set 77 --ifname eth77 --gateway 69.55.230.1 --save&lt;br /&gt;
v set 77 --save --ifname eth77 --network veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif veth_net veth77.77&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth       veth_net&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
another example&lt;br /&gt;
&lt;br /&gt;
v set 1182 --netif_add eth1182  --save&lt;br /&gt;
ipdel 1182 69.55.236.217&lt;br /&gt;
v set 1182 --ifname eth1182 --ipadd 69.55.236.217 --save&lt;br /&gt;
v set 1182 --ifname eth1182 --gateway 69.55.236.1 --save&lt;br /&gt;
vznetcfg net addif veth_net veth1182.1182&lt;br /&gt;
v set 1182 --save --ifname eth1182 --network veth_net&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unused/not working commands:&lt;br /&gt;
ifconfig veth99.0 0&lt;br /&gt;
vznetcfg net list&lt;br /&gt;
vznetcfg br new br99 net99&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg br show&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
&lt;br /&gt;
vznetcfg br new br1182 net1182&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif eth0 net1182&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&lt;br /&gt;
--- 8&amp;lt; ---&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new net&lt;br /&gt;
# vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vzctl set 99 --save --netif_add eth0 (at this stage veth99.0 interface have to appear&lt;br /&gt;
on node)&lt;br /&gt;
# vzctl set 99 --save --ifname eth0 --ipadd 69.55.230.58 (and probably few more arguments&lt;br /&gt;
here - see &#039;man vzctl&#039;)&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Assigning/removing an IP for a VE ==&lt;br /&gt;
&lt;br /&gt;
1. Add or remove ips:&lt;br /&gt;
 ipdel 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
 ipadd 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
&lt;br /&gt;
2. update Mgmt screens&lt;br /&gt;
&lt;br /&gt;
3. offer to update any DNS we do for them&lt;br /&gt;
&lt;br /&gt;
4. check to see if we had rules for the old IP in the firewall&lt;br /&gt;
&lt;br /&gt;
== Enabling tun device for a ve ==&lt;br /&gt;
Note, there’s a command for this: [[#addtun|addtun]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
Make sure the tun.o module is already loaded before Virtuozzo is started: &lt;br /&gt;
 lsmod &lt;br /&gt;
Allow the VPS to use the TUN/TAP device: &lt;br /&gt;
 vzctl set 101 --devices c:10:200:rw --save &lt;br /&gt;
Create the corresponding device inside the VPS and set the proper permissions: &lt;br /&gt;
 vzctl exec 101 mkdir -p /dev/net &lt;br /&gt;
 vzctl exec 101 mknod /dev/net/tun c 10 200 &lt;br /&gt;
 vzctl exec 101 chmod 600 /dev/net/tun&lt;br /&gt;
&lt;br /&gt;
== Remaking a system (on same virt) ==&lt;br /&gt;
&lt;br /&gt;
1. [[#cancelve|cancelve]] (or v destroy x - ONLY if you&#039;re POSITIVE no data needs to be saved)&lt;br /&gt;
&lt;br /&gt;
2. [[#vemake|vemake]] using same veid&lt;br /&gt;
&lt;br /&gt;
3. [[#mvbackups|mvbackups]] or [[#vb|vb]] (if new mount point)&lt;br /&gt;
&lt;br /&gt;
4. update mgmt with new dir/ip &lt;br /&gt;
&lt;br /&gt;
5. update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Re-initialize quota for a VE ==&lt;br /&gt;
&lt;br /&gt;
There’s a command for this now: [[#clearquota|clearquota]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
&lt;br /&gt;
vzctl stop 1&lt;br /&gt;
vzquota drop 1&lt;br /&gt;
vzctl start 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Traffic accounting on linux ==&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - all tracking is done via bwdb now. This is how we used to track traffic.&lt;br /&gt;
&lt;br /&gt;
TODO: update for diff versions of vz&lt;br /&gt;
&lt;br /&gt;
Unlike FreeBSD, where we have to add firewall count rules to the system to count the traffic, Virtuozzo counts the traffic for us.  You can see the current traffic stats by running `vznetstat`:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see the VEID is on a line with the in and out bytes.  So, we simply run a cron job:&lt;br /&gt;
&lt;br /&gt;
 4,9,14,19,24,29,34,39,44,49,55,59 * * * * /root/vztrafdump.sh&lt;br /&gt;
&lt;br /&gt;
Just like we do on FreeBSD - this one goes through all the VEs in /vz/private and greps the line from vznetstat that matches them and dumps it in /jc_traffic_dump on their system.  Then it does it again for all the VEs in /vz1/private.  It is important to note that vznetstat runs only once, and the grepping is done from a temporary file that contains that output - we do this because running vznetstat once for each VE that we read out of /vz/private and /vz1/private would take way too long and be too intensive.&lt;br /&gt;
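&lt;br /&gt;
The run-vznetstat-once-then-grep approach looks roughly like this (the vznetstat lines below are canned sample data, and the VEID list stands in for the dirs found under /vz/private):&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/sh
# Sketch of the vztrafdump idea: capture vznetstat output ONCE into a temp
# file, then grep each VEID's line from that cache instead of re-running
# vznetstat per VE. The stats below are canned sample data, not live output.
set -e
cache=$(mktemp)
cat > "$cache" <<'EOF'
24218     1            484M             39M
2451      1           2224M            265M
EOF
for veid in 24218 2451; do                # stand-ins for /vz*/private entries
    line=$(grep "^$veid " "$cache")       # trailing space avoids 2451/24518 mixups
    echo "$veid: $line"
done
out_2451=$(grep "^2451 " "$cache" | awk '{print $3}')
rm -f "$cache"
```
Anchoring the pattern with `^` and a trailing space keeps VEID 2451 from matching 24518-style prefixes in the cached output.&lt;br /&gt;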
&lt;br /&gt;
You do not need to do anything to facilitate this other than make sure that cron job is running - the vznetstat counters are always running, and any new VEs that are added to the system will be accounted for automatically.&lt;br /&gt;
&lt;br /&gt;
Traffic resetting no longer works with vz 2.6, so we disable the vztrafdump.sh on those virts.&lt;br /&gt;
&lt;br /&gt;
== Watchdog script ==&lt;br /&gt;
&lt;br /&gt;
On some of the older virts, we have a watchdog running that kills procs that are deemed bad per the following:&lt;br /&gt;
&lt;br /&gt;
/root/watchdog from quar1&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [5-9]...&lt;br /&gt;
  then&lt;br /&gt;
# 50-90%&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;...:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;....m&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 1000min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
  if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [1-9]...&lt;br /&gt;
  then&lt;br /&gt;
# running for 10-90 percent&lt;br /&gt;
    if echo $line | awk &#039;{print $NF}&#039; | egrep &#039;cfusion|counter|vchkpw&#039;&lt;br /&gt;
    then&lt;br /&gt;
&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[2-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# between 2-9min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      elif echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[0-9][0-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# up to 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
    fi&lt;br /&gt;
  fi&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Misc Linux Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space ... when you configure a linux system with a certain amount of disk space (the default is 4gigs) you do not actually use up 4gigs of space on the system.  The diskspace setting for a user is simply a cap, and they only use up as much space on the actual disk drive as they are actually using.&lt;br /&gt;
&lt;br /&gt;
When you create a new linux system, even though there are some 300 RPMs or so installed, if you run `df -k` you will see that the entire 4gig partition is empty - no space is being used.  This is because the files in their system are &amp;quot;magic symlinks&amp;quot; to the template for their OS that is in /vz/template - however, any changes to any of those files will &amp;quot;disconnect&amp;quot; them and they will immediately begin using space in their system.  Further, any new files uploaded (even if those new files overwrite existing files) will take up space on the partition.&lt;br /&gt;
&lt;br /&gt;
=== Cant change file /etc/sysconfig/network ===&lt;br /&gt;
if you see this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt8 root]# vzctl stop 160 ; vzctl start 160&lt;br /&gt;
VE is not running&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is unmounted&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
bash ERROR: Can&#039;t change file /etc/sysconfig/network&lt;br /&gt;
Deleting IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
VE is unmounted&lt;br /&gt;
[root@virt8 root]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
it probably means they no longer have /bin/bash - copy one in for them&lt;br /&gt;
 &lt;br /&gt;
ALSO: another possibility is that they have removed the `ed` RPM from their system - it needs to be reinstalled into their system.  But since their system is down, this is tricky ...&lt;br /&gt;
&lt;br /&gt;
VE startup scripts used by &#039;vzctl&#039; expect the &#039;ed&#039; package to be available inside the VE. So if the &#039;ed&#039; package is enabled in the OS template config and in the OS template that VE #827 is based on, this error should be fixed.&lt;br /&gt;
&lt;br /&gt;
It is possible to add an RPM to a VE while it is not running.&lt;br /&gt;
Try the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# cd /vz/template/&amp;lt;OS_template_with_ed_package&amp;gt;/&lt;br /&gt;
# vzctl mount 827&lt;br /&gt;
# rpm -Uvh --root /vz/root/827 --veid 827 ed-0.2-25.i386.vz.rpm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usually there&#039;s an error, but it&#039;s OK.&lt;br /&gt;
&lt;br /&gt;
Note: replace &#039;ed-0.2-25.i386.vz.rpm&#039; in the last command with the actual&lt;br /&gt;
version of the &#039;ed&#039; package you have.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== What template is user using ===&lt;br /&gt;
So how do I know what template the user has?  cat their conf file and it is listed in there.  For example, if the conf file has:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1103&lt;br /&gt;
…snip…&lt;br /&gt;
OSTEMPLATE=&amp;quot;debian-3.0/20030822&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;mod_perl-deb30/20030707 mod_ssl-deb30/20030703 mysql-deb30/20030707 proftpd-deb30/20030703 webmin-deb30/20030823 &amp;quot;&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then they are on debian 3.0, all of their system RPMs are in /vz/template/debian-3.0, and they are using version 20030822 of that debian 3.0 template. They’ve also got additional packages installed (mod_perl, mod_ssl, etc).  Those are also found under /vz/template&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Edits needed to run java ===&lt;br /&gt;
&lt;br /&gt;
When we first created the VEs, the default setting for privvmpages was 93000:94000 ... which was high enough that most people never had problems ... however, you can&#039;t run java or jdk or tomcat or anything java related with that setting.  We have found that with privvmpages set to 610000:615000 java runs just fine.  That is now the default setting. It is exceedingly rare that anyone needs it higher than that, although we have seen it once or twice.&lt;br /&gt;
&lt;br /&gt;
Any problems with java at all - the first thing you need to do is check whether the failcnt has risen for privvmpages.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzctl start 160&lt;br /&gt;
Starting VE ...&lt;br /&gt;
vzquota : (error) Quota on syscall for 160: Device or resource busy&lt;br /&gt;
Running vzquota on failed for VE 160 [3]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is because the pwd is _in_ their private directory - you can&#039;t start the VE until you move out.&lt;br /&gt;
&lt;br /&gt;
People seem to have trouble with php if they are clueless newbies.  Here are two common problems/solutions:&lt;br /&gt;
&lt;br /&gt;
no... but i figured it out myself. problem was the php.ini file that came&lt;br /&gt;
vanilla with the account was not configured to work with apache (the&lt;br /&gt;
ENGINE directive was set to off).&lt;br /&gt;
&lt;br /&gt;
everything else seems fine now.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
the problem was in the php.ini file.  I noticed that it wasn&#039;t showing&lt;br /&gt;
the code when it was in an html file so I looked at the php.ini file&lt;br /&gt;
and had to change it so it recognized &amp;lt;? tags as well as &amp;lt;?php tags.&lt;br /&gt;
&lt;br /&gt;
Also, make sure this is added to httpd.conf:&lt;br /&gt;
    AddType application/x-httpd-php .php&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Set time zone ===&lt;br /&gt;
&lt;br /&gt;
You can change the time zone by doing this:&lt;br /&gt;
&lt;br /&gt;
 ln -sf /usr/share/zoneinfo/&amp;lt;zone&amp;gt; /etc/localtime&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;zone&amp;gt; is the zone you want in the /usr/share/zoneinfo/ directory.&lt;br /&gt;
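&lt;br /&gt;
The same ln -sf pattern, exercised against throwaway temp paths instead of the real /etc/localtime:&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/sh
# Demo of the ln -sf repointing pattern at safe temp paths (NOT the real
# /etc/localtime or /usr/share/zoneinfo), so it can run without root.
set -e
work=$(mktemp -d)
mkdir -p "$work/zoneinfo/America"
echo "PST8PDT" > "$work/zoneinfo/America/Los_Angeles"   # stand-in zone file
ln -sf "$work/zoneinfo/America/Los_Angeles" "$work/localtime"
target=$(readlink "$work/localtime")
echo "localtime -> $target"
```
`-f` replaces any existing symlink in place, which is what makes the one-liner safe to re-run when changing zones.&lt;br /&gt;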
&lt;br /&gt;
----&lt;br /&gt;
=== Failing shm_open calls ===&lt;br /&gt;
&lt;br /&gt;
First, check whether /dev/shm is mounted inside the VE.&lt;br /&gt;
&#039;cat /proc/mounts&#039; should show something like this:&lt;br /&gt;
 tmpfs /dev/shm tmpfs rw 0 0&lt;br /&gt;
&lt;br /&gt;
If /dev/shm is not mounted, there are two ways to solve the issue:&lt;br /&gt;
1. execute the following command inside the VE (doesn&#039;t require a VE reboot):&lt;br /&gt;
 mount -t tmpfs none /dev/shm&lt;br /&gt;
2. add the following line to /etc/fstab inside the VE and reboot it:&lt;br /&gt;
 tmpfs         /dev/shm        tmpfs           defaults        0 0&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You can have a mounted but not running ve&lt;br /&gt;
Just:&lt;br /&gt;
 vzctl mount &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Debian can&#039;t get on the network ===&lt;br /&gt;
When a debian sys can’t get on the network, and you try:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set 1046 --ipadd 69.55.227.117&lt;br /&gt;
Adding IP address(es): 69.55.227.117&lt;br /&gt;
Failed to bring up lo.&lt;br /&gt;
Failed to bring up venet0.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They probably removed the iproute package, which must be the one from swsoft. To restore it:&lt;br /&gt;
&amp;lt;pre&amp;gt;# dpkg -i --veid=1046 --admindir=/vz1/private/1046/root/var/lib/dpkg --instdir=/vz1/private/1046/root/ /vz/template/debian-3.0/iproute_20010824-8_i386.vz.deb&lt;br /&gt;
(Reading database ... 16007 files and directories currently installed.)&lt;br /&gt;
Preparing to replace iproute 20010824-8 (using .../iproute_20010824-8_i386.vz.deb) ...&lt;br /&gt;
Unpacking replacement iproute ...&lt;br /&gt;
Setting up iproute (20010824-8) ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then restart their ve&lt;br /&gt;
&lt;br /&gt;
=== CentOS can&#039;t get on the network ===&lt;br /&gt;
&lt;br /&gt;
When a CentOS system was running but can&#039;t get on the network, they probably updated iproute via yum.&lt;br /&gt;
&lt;br /&gt;
Copy an older version of the iproute rpm to their VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cp -p /root/iproute/iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm /vz2/private/&amp;lt;1527&amp;gt;/fs/root&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enter their ve, install the rpm, restart the network, and prevent yum from updating iproute.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ve &amp;lt;1527&amp;gt;&lt;br /&gt;
rpm -i --force iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm&lt;br /&gt;
rpm -q iproute&lt;br /&gt;
rpm -e &amp;lt;new iproute rpm&amp;gt;&lt;br /&gt;
service network restart&lt;br /&gt;
echo &amp;quot;exclude=iproute&amp;quot; &amp;gt;&amp;gt;/etc/yum/ip-exclude&lt;br /&gt;
echo &amp;quot;include=/etc/yum/ip-exclude&amp;quot; &amp;gt;&amp;gt;/etc/yum.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
in a ve i do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /&lt;br /&gt;
du -h .&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and get: 483M    .&lt;br /&gt;
&lt;br /&gt;
i do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;bash-2.05a# df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  2.3G  1.7G  56% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
How can this be?&lt;br /&gt;
&lt;br /&gt;
Is it possible that the quota file was corrupted somehow? Please try:&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl stop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota drop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota init &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzctl start &amp;lt;VEID&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
How to stop vz from starting after reboot:&lt;br /&gt;
&lt;br /&gt;
 VIRTUOZZO=no &lt;br /&gt;
in &lt;br /&gt;
 /etc/sysconfig/vz&lt;br /&gt;
&lt;br /&gt;
To start: &lt;br /&gt;
 service vz start&lt;br /&gt;
(after setting VIRTUOZZO=yes in /etc/sysconfig/vz)&lt;br /&gt;
&lt;br /&gt;
service vz restart will do some kind of &#039;soft reboot&#039; -- restart all&lt;br /&gt;
VPSes and reload modules without rebooting the node&lt;br /&gt;
&lt;br /&gt;
if you need to shut down all VPSes really really fast, run killall -9 init&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Postfix tip:&lt;br /&gt;
&lt;br /&gt;
You may want to tweak settings: default_process_limit=10&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* uptime/load that vz users see isn’t what we see- it&#039;s their own load&lt;br /&gt;
* Vz overwrites resolv.conf on every boot with what’s in the ve config file. It also overwrites the hostname&lt;br /&gt;
* Quotas in vz: set Gid_limit in the ve config file and restart the ve. The errors leading up to this will be &#039;out of files&#039;, with no inode or disk-space issues. Set it to 2000 in the conf and restart the ve&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
== vm ==&lt;br /&gt;
&lt;br /&gt;
To create a new Linux VPS&lt;br /&gt;
&lt;br /&gt;
* get an IP address from mgmt screen&lt;br /&gt;
* run df -h to figure out which disk has enough space&lt;br /&gt;
* run vm with no arguments to get syntax and exact template name&lt;br /&gt;
* use col0xxxx for hostname if they don’t give you a hostname&lt;br /&gt;
* copy over dir, veid, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
 vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; )&lt;br /&gt;
&lt;br /&gt;
== disable ==&lt;br /&gt;
To disable a customer&#039;s Virtuozzo VPS server.&lt;br /&gt;
&lt;br /&gt;
* stop the server&lt;br /&gt;
  vzctl stop &amp;lt;veid&amp;gt;&lt;br /&gt;
* prevent restart on server reboot and user restart via Control Panel&lt;br /&gt;
  vzctl set &amp;lt;veid&amp;gt; --disabled=yes --offline_management=no --save&lt;br /&gt;
&lt;br /&gt;
== cancelve ==&lt;br /&gt;
When a customer cancels all service.&lt;br /&gt;
&lt;br /&gt;
 cancelve &amp;lt;veid&amp;gt;&lt;br /&gt;
this will:&lt;br /&gt;
* stop the ve&lt;br /&gt;
* check for backups (offering to remove them from the backup server and from backup.config)&lt;br /&gt;
* rename the private dir&lt;br /&gt;
* check for a PTR record, providing the commands to reset it to default&lt;br /&gt;
* rename the ve’s config&lt;br /&gt;
* remind you to remove firewall rules&lt;br /&gt;
* remind you to remove DNS entries&lt;br /&gt;
&lt;br /&gt;
== ipadd ==&lt;br /&gt;
 ipadd  &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
adds ip(s) to a ve&lt;br /&gt;
&lt;br /&gt;
== ipdel ==&lt;br /&gt;
 ipdel &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
removes ip(s) from a ve&lt;br /&gt;
&lt;br /&gt;
== vc ==&lt;br /&gt;
 vc &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
display the contents of the vz config file for the &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;cat /vzconf/&amp;lt;veid&amp;gt;.conf&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vl ==&lt;br /&gt;
 vl&lt;br /&gt;
&lt;br /&gt;
List all virtual environments running on this machine.&lt;br /&gt;
&lt;br /&gt;
displays a list of ve #’s, one per line (handy for use in a for loop)&lt;br /&gt;
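For example, a minimal sketch of that for-loop pattern, with vl stubbed out so it can run off a virt (the VEIDs below are made up):&lt;br /&gt;

```shell
# Stub standing in for the real vl, which prints one VEID per line.
vl() { printf '134\n16027\n16055\n'; }

# Typical pattern: run something against every ve on the box.
# Echoed here rather than executed, since vzctl only exists on a virt.
for veid in $(vl); do
    echo "would run: vzctl status $veid"
done
```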
&lt;br /&gt;
== vp ==&lt;br /&gt;
 vp &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
show processes running under this &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzps auxww -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vpe ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vpe &amp;lt;veid&amp;gt; &lt;br /&gt;
this allowed you to do a vp when a ve was running out of control; it’s deprecated since vp now operates outside the VPS. The equivalent of:&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set &amp;lt;veid&amp;gt; --kmemsize 2100000:2200000&lt;br /&gt;
vzctl exec &amp;lt;veid&amp;gt; ps auxw&lt;br /&gt;
vzctl set &amp;lt;veid&amp;gt; --kmemsize (ve’s orig lvalue):(ve’s orig hvalue)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vt ==&lt;br /&gt;
 vt &amp;lt;veid&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vztop -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vr ==&lt;br /&gt;
 vr &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart a &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl stop &amp;lt;veid&amp;gt;; vzctl start &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
You can run this even if the ve is down - the stop command will just fail&lt;br /&gt;
&lt;br /&gt;
== vs ==&lt;br /&gt;
 vs [veid]&lt;br /&gt;
displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve configured on the system (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;)&lt;br /&gt;
If passed an argument, gives the status for just that ve. &lt;br /&gt;
A running system looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 16066 exist mounted running&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running (but does exist) looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 9990 exist unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running and doesn’t exist looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 421 deleted unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs2 ==&lt;br /&gt;
 vs2 [veid]&lt;br /&gt;
this is similar to vs in that it displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve,&lt;br /&gt;
but the difference is its list comes from doing an ls on the data dirs. This was meant to catch&lt;br /&gt;
the rare case where a ve exists but isn’t configured.&lt;br /&gt;
&lt;br /&gt;
== vw ==&lt;br /&gt;
 vw [veid]&lt;br /&gt;
displays the output of ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;’ (the equivalent of &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; w&amp;lt;/tt&amp;gt;) for each configured ve (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;). Useful for determining which ve is contributing to a heavily-loaded system.&lt;br /&gt;
If passed an argument, gives ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;‘ output for just that ve. &lt;br /&gt;
Ex:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt2 etc]# vw&lt;br /&gt;
134&lt;br /&gt;
 10:52pm  up 79 days,  6:14,  0 users,  load average: 0.02, 0.02, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16027&lt;br /&gt;
  2:52pm  up 7 days, 19:54,  0 users,  load average: 0.00, 0.00, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16055&lt;br /&gt;
  2:52pm  up 79 days,  6:38,  0 users,  load average: 0.00, 0.04, 0.07&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vwe ==&lt;br /&gt;
 vwe [constraint]&lt;br /&gt;
just like &amp;lt;tt&amp;gt;vw&amp;lt;/tt&amp;gt;, but takes a constraint as an argument and only shows ve’s with loads &amp;gt;= the constraint provided. If no constraint is provided, 1 is used by default&lt;br /&gt;
&lt;br /&gt;
== vzs ==&lt;br /&gt;
 vzs [veid]&lt;br /&gt;
displays the beancounter status for all ve’s, or a particular ve if an argument is passed&lt;br /&gt;
&lt;br /&gt;
== ve ==&lt;br /&gt;
 ve &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
enter a &amp;lt;veid&amp;gt; as root.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl enter &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vx ==&lt;br /&gt;
 vx &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&lt;br /&gt;
&lt;br /&gt;
execute a single command in the &amp;lt;veid&amp;gt; environment.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;/usr/sbin/vzctl exec &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs [count]&lt;br /&gt;
a script which outputs a continuous report (or a certain number of reports if an option is passed) of processes stuck in the D state and which VPS’s those procs belong to.&lt;br /&gt;
&lt;br /&gt;
== setmem ==&lt;br /&gt;
 setmem VEID &amp;lt;256|512|768|1024|1280|1536|2048&amp;gt; [burst GB]&lt;br /&gt;
adjusts the memory resources for the VE. If this is on VZ ver &amp;gt;4, you will also be prompted to provide a burst ram amount. Sample usage:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# setmem 21671 1024 4&lt;br /&gt;
/usr/sbin/vzctl set 21671 --ram 1024m --swap 3072m --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== afacheck.sh ==&lt;br /&gt;
 afacheck.sh&lt;br /&gt;
displays the health/status of containers and mirrors on an adaptec card (currently quar1, tempvirt1-2, virt9, virt10)- all others are LSI&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update virt scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be set up as a cronjob to watch processes on a virt when the load&lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== findbackuppigs.pl ==&lt;br /&gt;
 findbackuppigs.pl&lt;br /&gt;
looks for files larger than 50MB which customers have asked us to backup. Emails matches&lt;br /&gt;
to linux@johncompanies.com&lt;br /&gt;
&lt;br /&gt;
== gatherlinux.pl ==&lt;br /&gt;
 gatherlinux.pl&lt;br /&gt;
gathers up data about ve’s configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficgather.pl ==&lt;br /&gt;
 linuxtrafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by ve to support@johncompanies.com and payments@johncompanies.com.&lt;br /&gt;
Optional arguments are year and month (must be in the past). If not passed, assumes last month. Relies on &lt;br /&gt;
traffic logs created by netstatreset and netstatbackup&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch.pl ==&lt;br /&gt;
 linuxtrafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo versions &amp;lt;= 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch2.pl ==&lt;br /&gt;
 linuxtrafficwatch2.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo version 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg  - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== mb (linux) ==&lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== migrate ==&lt;br /&gt;
 migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt;, a util to seamlessly move a ve from one host to another. The wrapper was written because virtuozzo 2.6 had a bug where the ve’s ip(s) on the src system were not properly removed from the arp/route tables; this script mitigates that. Since it makes multiple ssh connections to the target host, it’s a good idea to put the pub key for the src system in the authorized_keys file on the target host. In addition, it emails ve owners when their migration starts and stops (if they place email addresses in a file on their system: /migrate_notify). To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
== migrateonline ==&lt;br /&gt;
 migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is the same as migrate but will migrate a ve in &amp;lt;tt&amp;gt;--online&amp;lt;/tt&amp;gt; mode, which means it won’t be shut down at the end of the migration. This only works when migrating ve’s between 2 machines running a 2.6 kernel (currently tempvirt1-2, virt16-19, virt12). If you get an error that the machine you’re trying to migrate to has a different CPU or features, etc, then you have to edit the file and add the -f switch to the vzmigrate line- you can basically ignore this kind of warning (but never ignore a warning about missing templates on the destination node). NOTE: This edit (if made to migrateonline) will be overwritten by the base script during each night’s backup.&lt;br /&gt;
&lt;br /&gt;
== netstatbackup ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup &lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== netstatbackup2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== netstatreset ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset&lt;br /&gt;
writes traffic count data to a logfile and resets counters to 0. Works on virtuozzo versions 2.5.x &lt;br /&gt;
&lt;br /&gt;
== netstatreset2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatchlinux ==&lt;br /&gt;
 orphanedbackupwatchlinux &lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to &lt;br /&gt;
delete them&lt;br /&gt;
&lt;br /&gt;
== rsync.backup (linux) ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== startvirt.pl ==&lt;br /&gt;
 startvirt.pl&lt;br /&gt;
forks off ve start commands, keeping 6 running at a time. This is not to be used on systems where fastboot is enabled, as it circumvents the benefit of the fastboot. The script will occasionally not exit gracefully and will continue to use up CPU, so it should be watched; on some systems you have to ^C out of it. Be careful though- doing so can leave some VE’s in an odd bootup state and you may need to ‘vr’ them manually. Don’t exit from the script till you’re sure all ve’s are started; if you do, you may need to start them manually and free up locks. You should check which ve’s aren’t running and/or confirm all have started when ^C’ing out of startvirt.&lt;br /&gt;
&lt;br /&gt;
== taskdone (linux) ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was &lt;br /&gt;
executed as the subject&lt;br /&gt;
&lt;br /&gt;
== vb (linux) ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/sbin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vemakeXX ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemakerh9 &lt;br /&gt;
ve create script for RH9 (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian30 &lt;br /&gt;
ve create script for debian 3.0 (Woody) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian31 &lt;br /&gt;
ve create script for debian 3.1 (Sarge) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian40 &lt;br /&gt;
ve create script for debian 4.0 (Etch) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakefedora, vemakefedora2, vemakefedora4, vemakefedora5, vemakefedora6, vemakefedora7&lt;br /&gt;
ve create script for fedora core 1, 2, 4, 5, 6, 7 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakecentos3, vemakecentos4&lt;br /&gt;
ve create script for centos 3, 4 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakesuse, vemakesuse93, vemakesuse100&lt;br /&gt;
ve create script for suse 9.2, 9.3, 10.0 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakeubuntu5, vemakeubuntu606, vemakeubuntu610, vemakeubuntu704&lt;br /&gt;
ve create script for ubuntu 5.10, 6.06, 6.10, 7.04 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
== vemove ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemove &amp;lt;veid&amp;gt; &amp;lt;target_ip&amp;gt; &amp;lt;/vz/private/123&amp;gt;&lt;br /&gt;
this script simplifies the old way of moving ve’s from one system to another - in short moving a ve to or from a virt running virtuozzo &amp;lt; 2.6.x&lt;br /&gt;
It’s the equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;tar cfpP - &amp;lt;veid&amp;gt; --ignore-failed-read | (ssh -2 -c arcfour &amp;lt;target_ip&amp;gt; &amp;quot;split - -b 1024m &amp;lt;/vz/private/123&amp;gt;.tar&amp;quot; )&amp;lt;/tt&amp;gt;&lt;br /&gt;
This should only be used if migrate/vzmigrate can’t be used.&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog ==&lt;br /&gt;
 vim.watchdog &lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu. Works on virtuozzo versions 2.5.x&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog2 ==&lt;br /&gt;
 vim.watchdog2&lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu.&lt;br /&gt;
Works on virtuozzo versions 2.6.x.&lt;br /&gt;
&lt;br /&gt;
== vzmigrate ==&lt;br /&gt;
 vzmigrate &amp;lt;target_ip&amp;gt; -r no &amp;lt;veid&amp;gt;:[dst veid]:[dst /vzX/private/veid]:[dst /vzX/root/veid]&lt;br /&gt;
(this is the raw command “wrapped” by migrate/migrateonline) this will seamlessly move a ve from one host to another. The ve will run for the duration of the migration till the very end when it’s shut down, ip moved and started up on the target system. The filesystem on the src will remain. This should be watched – occasionally the move will timeout and leave the system shut down. If target private and root aren’t specified it just puts it in /vz. Only works when both systems are running virtuozzo 2.6.x&lt;br /&gt;
&lt;br /&gt;
== vztrafdump.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions &amp;lt;= 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== vztrafdump2.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump2.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== addtun ==&lt;br /&gt;
 addtun &amp;lt;veid&amp;gt;&lt;br /&gt;
Adds a tun device to a ve.&lt;br /&gt;
&lt;br /&gt;
== bwcap ==&lt;br /&gt;
 bwcap &amp;lt;veid&amp;gt; &amp;lt;kbps&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;bwcap 1234 512&amp;lt;/tt&amp;gt;&lt;br /&gt;
Caps a VE’s bandwidth to the amount given&lt;br /&gt;
&lt;br /&gt;
== setdisk ==&lt;br /&gt;
 setdisk &amp;lt;veid&amp;gt; &amp;lt;diskspace in GB&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;setdisk 1234 5&amp;lt;/tt&amp;gt;&lt;br /&gt;
Gives a VE a given amount of disk space&lt;br /&gt;
&lt;br /&gt;
== vdf ==&lt;br /&gt;
 vdf &amp;lt;veid&amp;gt; &lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; df -h&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vdff ==&lt;br /&gt;
 vdff&lt;br /&gt;
runs a (condensed) vdf for all ve’s in your pwd (must be run from /vz/private, /vz1/private, etc)&lt;br /&gt;
&lt;br /&gt;
== mvbackups ==&lt;br /&gt;
 mvbackups &amp;lt;veid&amp;gt; &amp;lt;target_machine&amp;gt; (virt1) &amp;lt;target_dir&amp;gt; (vz1)&lt;br /&gt;
moves backups from one location to another on the backup server, offers to remove the entries from the current backup.config, and gives you a simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== checkquota ==&lt;br /&gt;
 checkquota&lt;br /&gt;
for all the ve’s in the cwd (run from /vz/private, /vz1/private, etc) reports what vz quota says they’re using and what the actual usage is (as reported by du)&lt;br /&gt;
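A sketch of the du side of that comparison, using a scratch directory in place of a real ve private area (the real script compares this figure against vz quota accounting):&lt;br /&gt;

```shell
# Create a scratch "ve" directory and measure its real on-disk usage
# with du, the same figure checkquota compares against the quota.
VEDIR=$(mktemp -d)
dd if=/dev/zero of="$VEDIR/file" bs=1024 count=64 2>/dev/null

# actual usage in KB, as checkquota would report it
actual_kb=$(du -sk "$VEDIR" | awk '{print $1}')
echo "actual usage: ${actual_kb}K"
```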
&lt;br /&gt;
== clearquota ==&lt;br /&gt;
 clearquota &amp;lt;veid&amp;gt;&lt;br /&gt;
Recalculates a ve’s quota, prints out the usage before and after. The equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;vdf &amp;lt;veid&amp;gt;; v stop &amp;lt;veid&amp;gt;; vzquota drop &amp;lt;veid&amp;gt;; v start &amp;lt;veid&amp;gt;; vdf &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs&lt;br /&gt;
Sometimes the servers have a large number of processes get stuck in the D state- this script shows (every 3 secs) which VE’s have D procs, which procs&lt;br /&gt;
are stuck, and a running average of the top “offenders”&lt;br /&gt;
&lt;br /&gt;
== vzstat ==&lt;br /&gt;
 vzstat&lt;br /&gt;
sort of like top for VZ. Sort VEs by CPU usage by pressing the &#039;o&#039; and then &#039;c&#039; keys&lt;br /&gt;
&lt;br /&gt;
== stopvirt ==&lt;br /&gt;
 stopvirt&lt;br /&gt;
will stop VEs as fast as it can, 6 at a time. May not exit when complete so you should watch [[#vzstat|vzstat]] in another window.&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2222</id>
		<title>VPS Management</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2222"/>
		<updated>2020-04-03T17:22:57Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Ganeti VPS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Common Problems =&lt;br /&gt;
== Login to any machine without a password ==&lt;br /&gt;
&lt;br /&gt;
This is possible via the use of ssh keys. The process is thus:&lt;br /&gt;
&lt;br /&gt;
1. place the public key for your user (root@mail) in the /root/.ssh/authorized_keys file on the server you wish to log in to&lt;br /&gt;
 cat /root/.ssh/id_dsa.pub&lt;br /&gt;
(paste that into authorized_keys on the target server). If the file doesn&#039;t exist, create it.&lt;br /&gt;
&lt;br /&gt;
2. enable root login (usually only applies to FreeBSD). Edit the /etc/ssh/sshd_config on the target server and change:&lt;br /&gt;
&amp;lt;tt&amp;gt;#PermitRootLogin no&amp;lt;/tt&amp;gt;&lt;br /&gt;
to&lt;br /&gt;
&amp;lt;tt&amp;gt;PermitRootLogin yes&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Restart the sshd on the target machine. First, find the sshd process: &lt;br /&gt;
 jailps &amp;lt;hostname&amp;gt; | grep sshd &lt;br /&gt;
or &lt;br /&gt;
 vp &amp;lt;VEID&amp;gt; | grep sshd&lt;br /&gt;
&lt;br /&gt;
Look for the process resembling:&lt;br /&gt;
 root     17296  0.0  0.0  5280 1036 ?        Ss    2011   4:27 /usr/sbin/sshd &lt;br /&gt;
(this is the sshd)&lt;br /&gt;
&lt;br /&gt;
Not:&lt;br /&gt;
 root      6270  0.5  0.0  6808 2536 ?        Ss   14:33   0:00 sshd: root [priv]&lt;br /&gt;
(this is an sshd child- someone already ssh&#039;d in as root)&lt;br /&gt;
&lt;br /&gt;
Restart the sshd: &lt;br /&gt;
 kill -1 &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex:&lt;br /&gt;
 kill -1 17296&lt;br /&gt;
&lt;br /&gt;
You may now ssh in.&lt;br /&gt;
&lt;br /&gt;
Once you&#039;re done, IF you enabled root login, you should repeat steps 2 and 3 to disable root logins.&lt;br /&gt;
&lt;br /&gt;
== Letting someone in who has locked themselves out (killed sshd, lost pwd) ==&lt;br /&gt;
&lt;br /&gt;
There are two ways people frequently lock themselves out - either they forget a password, or they kill off sshd somehow.&lt;br /&gt;
&lt;br /&gt;
These are actually both fairly easy to solve.  First, let&#039;s say someone kills off their sshd, or somehow mangles /etc/ssh/sshd_config such that it no longer lets them in.&lt;br /&gt;
&lt;br /&gt;
Their email may be very short, or it may have all sorts of details about how you should fix sshd_config to let them in ... just ignore all of this. They can fix their own mangled sshd.  Fixing this is very simple.  First, edit the /etc/inetd.conf on their system and uncomment the telnet line:&lt;br /&gt;
&lt;br /&gt;
 telnet stream  tcp     nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
 #telnet stream  tcp6    nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
&lt;br /&gt;
(just leave the tcp6 version of telnet commented)&lt;br /&gt;
&lt;br /&gt;
Then, use jailps to list the processes on their system, and find their inetd process.  Then simply:&lt;br /&gt;
&lt;br /&gt;
 kill -HUP (pid)&lt;br /&gt;
&lt;br /&gt;
where (pid) is the PID of their inetd process.  Now they have telnet running on their system and they can log in and do whatever they need to do.&lt;br /&gt;
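The uncomment step can be scripted; a sketch against a scratch copy of inetd.conf (on the customer&#039;s system the target is /etc/inetd.conf; the path here is illustrative):&lt;br /&gt;

```shell
# Scratch stand-in for the customer's inetd.conf, with both telnet
# lines commented out (the locked-out state).
INETD=/tmp/inetd.conf.test
printf '#telnet stream  tcp     nowait  root    /usr/libexec/telnetd    telnetd\n#telnet stream  tcp6    nowait  root    /usr/libexec/telnetd    telnetd\n' > "$INETD"

# Strip the leading # from the tcp line only, leaving tcp6 commented.
sed -i '/tcp6/!s/^#telnet/telnet/' "$INETD"

grep '^telnet' "$INETD"
```

After editing the real file, a &lt;tt&gt;kill -HUP&lt;/tt&gt; of their inetd picks up the change.&lt;br /&gt;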
&lt;br /&gt;
The only complications that could occur are:&lt;br /&gt;
&lt;br /&gt;
a) their firewall config on our firewall has port 23 blocked, in which case you will need to open that - will be covered in a different lesson.&lt;br /&gt;
&lt;br /&gt;
b) they are not running inetd, so you can&#039;t HUP it.  If this happens, edit their /etc/rc.conf, add the inetd_enable=&amp;quot;YES&amp;quot; line, and then kill&lt;br /&gt;
their jail with /tmp/jailkill.pl - then restart their jail with the jail line from their quad/safe file.  Easy.&lt;br /&gt;
&lt;br /&gt;
If they have forgotten a password,&lt;br /&gt;
&lt;br /&gt;
On 6.x+ you can reset their password with:&lt;br /&gt;
 jexec &amp;lt;jailID from jls&amp;gt; passwd root&lt;br /&gt;
&lt;br /&gt;
Note: the default password for 6.x jails is 8ico2987, for 4.x it is p455agfa&lt;br /&gt;
&lt;br /&gt;
On 4.x, you need to cd to their etc directory&lt;br /&gt;
... for instance:&lt;br /&gt;
&lt;br /&gt;
 cd /mnt/data2/198.78.65.136-col00261-DIR/etc&lt;br /&gt;
&lt;br /&gt;
and run:&lt;br /&gt;
&lt;br /&gt;
 vipw -d .&lt;br /&gt;
&lt;br /&gt;
Then paste in these two lines (there&#039;s a paste with these):&lt;br /&gt;
&lt;br /&gt;
 root:$1$krszPxhk$xkCepSnz3mIikT3vCtJCt0:0:0::0:0:Charlie &amp;amp;:/root:/bin/csh&lt;br /&gt;
 user:$1$Mx9p5Npk$QdMU6c8YQqp2FW2M3irEh/:1001:1001::0:0:User &amp;amp;:/home/user:/bin/sh&lt;br /&gt;
&lt;br /&gt;
overwriting the lines they already have for &amp;quot;user&amp;quot; and &amp;quot;root&amp;quot; - then just tell them that both user and root have been reset to the default password of p455agfa.&lt;br /&gt;
&lt;br /&gt;
For linux, just passwd inside shell or &lt;br /&gt;
 vzctl set &amp;lt;veid&amp;gt; --userpasswd root:p455agfa --save&lt;br /&gt;
&lt;br /&gt;
Starting in 2009 we began giving out randomized passwords for FreeBSD and Linux as the default password. That is stored with each system in Mgmt. In the event of a reset, look up and reset the password to that stored password and refer the customer to their original password from their welcome email- this way we don’t have to send the password again via email (in clear text).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== sendmail can’t be contacted from ext ip (only locally) ==&lt;br /&gt;
&lt;br /&gt;
By default redhat puts this line in sendmail.mc:&lt;br /&gt;
&lt;br /&gt;
 DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
which makes it only answer on localhost.  Comment it out like:&lt;br /&gt;
&lt;br /&gt;
 dnl DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
and then rebuild sendmail.cf with:&lt;br /&gt;
&lt;br /&gt;
 m4 /etc/mail/sendmail.mc &amp;gt; /etc/sendmail.cf&lt;br /&gt;
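The comment-out step can also be scripted; a sketch using sed on a scratch copy (the real target is /etc/mail/sendmail.mc; the path and one-line stand-in here are illustrative):&lt;br /&gt;

```shell
# Scratch stand-in containing the stock redhat loopback-only line.
MC=/tmp/sendmail.mc.test
printf "DAEMON_OPTIONS(\`Port=smtp,Addr=127.0.0.1, Name=MTA')\n" > "$MC"

# Prefix the line with dnl so sendmail answers on all ips after the
# sendmail.cf rebuild.
sed -i 's/^DAEMON_OPTIONS(/dnl DAEMON_OPTIONS(/' "$MC"

grep '^dnl DAEMON_OPTIONS' "$MC"
```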
&lt;br /&gt;
== virt doesn’t properly let go of ve’s ip(s) when moved to another system ==&lt;br /&gt;
&lt;br /&gt;
On virtuozzo 2.6 systems, it&#039;s been observed that when moving ips from one virt to another that sometimes the routing table will not get updated to reflect the removal of the ip addresses.&lt;br /&gt;
&lt;br /&gt;
A recent example was a customer that was moving to a new ve on a new virt and the ip addresses were traded between the two ve&#039;s.  After the trade the two systems were not able to talk to each other.  When looking at the routing table for the old system all the ip addresses were still in the routing table as being local, like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;netstat -rn | grep 69.55.225.149&lt;br /&gt;
69.55.225.149   0.0.0.0         255.255.255.255 UH       40 0          0 venet0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was preventing traffic to the other system from being routed properly.&lt;br /&gt;
The solution is to manually delete the route:&lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.225.149 gw 0.0.0.0&lt;br /&gt;
&lt;br /&gt;
Supposedly, this was fixed in 2.6.1&lt;br /&gt;
&lt;br /&gt;
== sshd on FreeBSD 6.2 segfaults ==&lt;br /&gt;
&lt;br /&gt;
First try to reinstall ssh&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/secure&lt;br /&gt;
cd lib/libssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.sbin/sshd&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.bin/ssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Failing that, find the library that’s messed up:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;ldd /usr/sbin/sshd&lt;br /&gt;
         libssh.so.3 =&amp;gt; /usr/lib/libssh.so.3 (0x280a3000) &lt;br /&gt;
         libutil.so.5 =&amp;gt; /lib/libutil.so.5 (0x280d8000) &lt;br /&gt;
         libz.so.3 =&amp;gt; /lib/libz.so.3 (0x280e4000) &lt;br /&gt;
         libwrap.so.4 =&amp;gt; /usr/lib/libwrap.so.4 (0x280f5000) &lt;br /&gt;
         libpam.so.3 =&amp;gt; /usr/lib/libpam.so.3 (0x280fc000) &lt;br /&gt;
         libbsm.so.1 =&amp;gt; /usr/lib/libbsm.so.1 (0x28103000) &lt;br /&gt;
         libgssapi.so.8 =&amp;gt; /usr/lib/libgssapi.so.8 (0x28112000) &lt;br /&gt;
         libkrb5.so.8 =&amp;gt; /usr/lib/libkrb5.so.8 (0x28120000) &lt;br /&gt;
         libasn1.so.8 =&amp;gt; /usr/lib/libasn1.so.8 (0x28154000) &lt;br /&gt;
         libcom_err.so.3 =&amp;gt; /usr/lib/libcom_err.so.3 (0x28175000) &lt;br /&gt;
         libroken.so.8 =&amp;gt; /usr/lib/libroken.so.8 (0x28177000) &lt;br /&gt;
         libcrypto.so.4 =&amp;gt; /lib/libcrypto.so.4 (0x28183000) &lt;br /&gt;
         libcrypt.so.3 =&amp;gt; /lib/libcrypt.so.3 (0x28276000) &lt;br /&gt;
         libc.so.6 =&amp;gt; /lib/libc.so.6 (0x2828e000) &lt;br /&gt;
         libmd.so.3 =&amp;gt; /lib/libmd.so.3 (0x28373000)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
md5 them and compare to other jail hosts or jails running on host&lt;br /&gt;
&lt;br /&gt;
for libcrypto reinstall:&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/src/crypto&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Moving a server to a different account in the management system ==&lt;br /&gt;
&lt;br /&gt;
This is a tough one. You have to touch a bunch of tables in the mysql database.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  systems&lt;br /&gt;
  subs&lt;br /&gt;
  ipmap&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Basically, get the sysid and work off that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
update systems set cid=&#039;newcid&#039; where sysid=xxxxx ;&lt;br /&gt;
update subs set cid=&#039;newcid&#039; where sysid=xxxxx ;&lt;br /&gt;
update ipmap set cid=&#039;newcid&#039; where sysid=xxxxx ;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You then need to rename the server disk space to change the old ID to the new one.&lt;br /&gt;
&lt;br /&gt;
For a jail -&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stopjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;     69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;-DIR 69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;-DIR&lt;br /&gt;
vi /usr/local/jail/bin/quad1       [Update the ID in the quad file]&lt;br /&gt;
startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a virt -&lt;br /&gt;
You will have to &lt;br /&gt;
* stop the virt &lt;br /&gt;
* rename the directories &lt;br /&gt;
* modify the config file /vzconf/&amp;lt;veid&amp;gt;.conf &lt;br /&gt;
* rename the virt conf file &lt;br /&gt;
* restart the new virt.&lt;br /&gt;
&lt;br /&gt;
That ought to do it.&lt;br /&gt;
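The virt steps above can be sketched as a dry-run plan. Note this is an illustration only: the /vz/private path and the vzctl commands are assumptions (the doc only names /vzconf/&amp;lt;veid&amp;gt;.conf), and the function just prints what would be run.

```shell
# Dry-run sketch: print the commands for moving a virt to a new ID.
# Paths and the vzctl tool are assumptions; review before running anything.
virt_rename_plan() {
    veid="$1"; oldid="$2"; newid="$3"
    echo "vzctl stop $veid"
    echo "mv /vz/private/$oldid /vz/private/$newid"
    echo "sed -i.bak \"s/$oldid/$newid/g\" /vzconf/$veid.conf"
    echo "vzctl start $veid"
}
```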
&lt;br /&gt;
&lt;br /&gt;
== Re-enable a customer account in the management system ==&lt;br /&gt;
&lt;br /&gt;
For example, to reinstate col03438, in mysql:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
show tables ;&lt;br /&gt;
select * from customers where cid=&#039;col03438&#039; ;&lt;br /&gt;
update customers set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from  systems where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set cancel_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from subs where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ganeti VPS =&lt;br /&gt;
&lt;br /&gt;
This is how to create a VPS on the Lamphost ganeti cluster.&lt;br /&gt;
I have created one script for creating a FreeBSD VPS and one for&lt;br /&gt;
creating a Linux VPS.&lt;br /&gt;
&lt;br /&gt;
== Ganeti Install a VPS ==&lt;br /&gt;
&lt;br /&gt;
To create a Linux VPS, use:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/new &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
( for example /root/new vps.example.com 30 2 69.55.238.10 centos-8 )&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To create a FreeBSD VPS, we have to do a full install.  The script below&lt;br /&gt;
creates the new (empty) VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/newfreebsd12 &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will then have to do the install via VNC.&lt;br /&gt;
&lt;br /&gt;
You need to customize the install by adding a &amp;quot;jci&amp;quot; account and a customer&lt;br /&gt;
account (col0XXXX) as well.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
You will need to install sudo.&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
pkg install sudo    # install sudo package&lt;br /&gt;
visudo              # to add wheel account to sudoers&lt;br /&gt;
vi /etc/group       # add jci and user acct to wheel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You will also have to set up a console in /etc/ttys.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vi /etc/ttys&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Ganeti Re-Install a VPS ==&lt;br /&gt;
&lt;br /&gt;
Re-installing the OS will destroy all data on the VPS.&lt;br /&gt;
Login to the Ganeti master node (currently gn1).&lt;br /&gt;
&lt;br /&gt;
To reinstall the OS on a VPS, the VPS must first be shut down.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance shutdown &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Do the reinstall.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance reinstall &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart the VPS.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gnt-instance startup &amp;lt;VPSname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you need to do the customization for the VPS.  This is&lt;br /&gt;
basically the same as the Dedicated server customization.&lt;br /&gt;
&lt;br /&gt;
= FreeBSD 10.1 VPS (jail5 and jail6) =&lt;br /&gt;
&lt;br /&gt;
This is information on how to manage the FreeBSD 10.1 VPS servers on jail5 and jail6.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 virtmake IP[,IP] &amp;lt;CID&amp;gt; hostname ipfw# email {FSP|FB|FS|FM|FP} [size in GB]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
For now, the CID must be unique in its last 2 digits.&lt;br /&gt;
&lt;br /&gt;
You must do a &amp;quot;vm start &amp;lt;CID&amp;gt;&amp;quot; after creating a VPS.&lt;br /&gt;
&lt;br /&gt;
== Starting VPS ==&lt;br /&gt;
To start all vms on the system&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To start a single VPS &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== List running VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm status&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Attach to a user VPS console ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm attach &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To detach from a console, type tilde control-d (~ ctl-d).&lt;br /&gt;
&lt;br /&gt;
== Stop a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Restart a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm restart &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Modify a VPS ==&lt;br /&gt;
&lt;br /&gt;
To modify a VPS config, stop the VPS&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Mount the VPS data file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm mount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
cd /data/vm/private/&amp;lt;CID&amp;gt;/mount&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Modify the user files as necessary.&lt;br /&gt;
&lt;br /&gt;
Unmount the data file and restart the VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm umount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Notes on a new FBSD 10.x VPS (jail6) ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#! /bin/sh&lt;br /&gt;
#&lt;br /&gt;
#  new  &amp;lt;colo#&amp;gt;  &amp;lt;disk&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
memory=1024&lt;br /&gt;
&lt;br /&gt;
x=&amp;quot;$1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
disk=&amp;quot;$2&amp;quot;&lt;br /&gt;
if [ &amp;quot;y&amp;quot; = &amp;quot;y$disk&amp;quot; ] ; then&lt;br /&gt;
    disk=60&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
disk2=`expr $disk - 4`&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vm image provision 04dcdf96-d843-11e6-b131-001e67fcc64f col0$x&lt;br /&gt;
# vm configure col0$x&lt;br /&gt;
sed -e &amp;quot;s/memory=256M/memory=${memory}M/&amp;quot; /vms/col01114/col01114.conf &amp;gt;/vms/col0$x/col0$x.conf&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
# vm start col0$x&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
exit&lt;br /&gt;
 &lt;br /&gt;
vm console col0$x&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
# to increase the customer disk&lt;br /&gt;
#&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
&lt;br /&gt;
##  on the customer VPS&lt;br /&gt;
&lt;br /&gt;
gpart recover vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapoff /dev/vtbd0p3&lt;br /&gt;
gpart delete -i 3 vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
sysctl kern.geom.debugflags=16&lt;br /&gt;
gpart resize -i 2 -a 4k -s $disk2\G vtbd0&lt;br /&gt;
&lt;br /&gt;
growfs /dev/vtbd0p2&lt;br /&gt;
&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
gpart add -t freebsd-swap -a 4k vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapon /dev/vtbd0p3&lt;br /&gt;
&lt;br /&gt;
df -h&lt;br /&gt;
reboot&lt;br /&gt;
&lt;br /&gt;
vi /etc/rc.conf&lt;br /&gt;
  hostname&lt;br /&gt;
  IP Addr&lt;br /&gt;
passwd root&lt;br /&gt;
passwd user&lt;br /&gt;
clear history&lt;br /&gt;
  cp /dev/null /var/log/utx.log&lt;br /&gt;
  cp /dev/null ~root/.history&lt;br /&gt;
  history -c&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
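One detail worth calling out from the grow sequence above: the resize reserves 4 GB of the new disk size for the swap partition, so the UFS partition is grown to (disk - 4) GB. Expressed as a tiny helper (illustration only):

```shell
# The gpart resize target used above: total disk size minus 4 GB for swap.
resize_target() {
    expr "$1" - 4
}
# e.g.  gpart resize -i 2 -a 4k -s $(resize_target 60)G vtbd0
```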
&lt;br /&gt;
= FreeBSD Jails =&lt;br /&gt;
&lt;br /&gt;
== Starting jails: Quad/Safe Files ==&lt;br /&gt;
&lt;br /&gt;
FreeBSD customer systems do not start up automatically at boot time.  When one of our FreeBSD machines boots, it comes up and does nothing else. To start jails, we put the commands to start each jail into shell scripts and run them. Jail startup is something that needs to be actively monitored, which is why we don’t just run the scripts automatically. More on monitoring later.&lt;br /&gt;
&lt;br /&gt;
NOTE: &amp;gt;=7.x we have moved to 1 quad file: &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. Startups are not done by running each quad, but rather [[#startalljails|startalljails]] which relies on the contents of &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. The specifics of this are lower in this article. What follows here applies for pre 7.x systems.&lt;br /&gt;
&lt;br /&gt;
There are eight files in &amp;lt;tt&amp;gt;/usr/local/jail/rc.d&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail3# ls /usr/local/jail/rc.d/&lt;br /&gt;
quad1   quad2   quad3   quad4   safe1   safe2   safe3   safe4&lt;br /&gt;
jail3#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Four quad files and four safe files.&lt;br /&gt;
&lt;br /&gt;
Each file contains an even share of the system startup blocks (the total number of jails divided by 4).&lt;br /&gt;
 &lt;br /&gt;
The reason for this split: if we made one large script to start all the systems at boot time, it would take too long - the first system in the script would start right after boot, which is great, but the last system might not start for another 20 minutes.&lt;br /&gt;
&lt;br /&gt;
Since there is no way to parallelize this during the startup procedure, we simply open four terminals (in screen window 9) and run each script, one in each terminal. This way they all run simultaneously, and the very last system in each startup script gets started in 1/4th the time it would if there were one large file.&lt;br /&gt;
&lt;br /&gt;
The files are generally organized so that quad/safe 1&amp;amp;2 have only jails from disk 1, and quad/safe 3&amp;amp;4 have jails from disk 2. This helps ensure that only 2 fscks on any disk are going on at once. Further, they are balanced so that all quad/safe’s finish executing around the same time. We do this by making sure each quad/safe has a similar number of jails  and represents a similar number of inodes (see js).&lt;br /&gt;
&lt;br /&gt;
The other, very important reason we do it this way, and this is the reason there are quad files and safe files, is that in the event of a system crash, every single vn-backed filesystem that was mounted at the time of system crash needs to be fsck&#039;d.  However, fsck&#039;ing takes time, so if we shut the system down gracefully, we don&#039;t want to fsck.&lt;br /&gt;
&lt;br /&gt;
Therefore, we have two sets of scripts - the four quad scripts are identical to the four safe scripts except for the fact that the quad scripts contain fsck commands for each filesystem.&lt;br /&gt;
&lt;br /&gt;
So, if you shut a system down gracefully, start four terminals and run safe1 in window one, and safe2 in window 2, and so on.&lt;br /&gt;
 &lt;br /&gt;
If you crash, start four terminals (or go to screen window 9) and run quad1 in window one, and quad2 in window 2, and so on.&lt;br /&gt;
&lt;br /&gt;
Here is a snip of (a 4.x version) quad2 from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
fsck -y /dev/vn16&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#fsck -y /dev/vn28&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo ‘### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted’&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
fsck -y /dev/vn22&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#fsck -y /dev/vn15&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, two of the systems specified are commented out - presumably those customers cancelled, or were moved to new servers.&lt;br /&gt;
&lt;br /&gt;
Note that the vnconfig line is the simpler command line, not the longer one that was used when the filesystem was first configured.  All that is done is: vnconfig the filesystem, then fsck it, then mount it. The fourth command is the `jail` command used to start the system – but that will be covered later.&lt;br /&gt;
&lt;br /&gt;
Here is the safe2 file from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo ‘### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted’&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, it is exactly the same, but it does not have the fsck lines.&lt;br /&gt;
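Since a safe file is just its quad file minus the fsck lines, the relationship can be expressed mechanically. A minimal sketch (the real buildsafe tool may do more than this):

```shell
# Derive a safe file from a quad file by dropping the fsck lines.
mk_safe() {
    grep -v '^fsck ' "$1"
}
# e.g.  mk_safe quad2 > safe2
```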
&lt;br /&gt;
Take a look at the last entry - note that the file is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
&lt;br /&gt;
and the mount point is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
&lt;br /&gt;
This is the general format on all the FreeBSD systems.  The file is always named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber&lt;br /&gt;
&lt;br /&gt;
and the directory is named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber-DIR&lt;br /&gt;
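The convention is simple enough to express as two helpers (illustration only; these are not actual tools on the hosts):

```shell
# Backing file and mount point names for a jail, per the convention above.
vnfile() { printf '%s-%s\n' "$1" "$2"; }
vndir()  { printf '%s-%s-DIR\n' "$1" "$2"; }
```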
&lt;br /&gt;
If you run safe when you need a fsck, the mount will fail and jail will fail:&lt;br /&gt;
&lt;br /&gt;
 # mount /dev/vn1c /mnt/data2/jails/65.248.2.131-ns1.kozubik.com-DIR&lt;br /&gt;
 mount: /dev/vn1c: Operation not permitted&lt;br /&gt;
&lt;br /&gt;
No reboot is needed - just run the quad script.&lt;br /&gt;
&lt;br /&gt;
Starting with 6.x jails, we added block delimiters to the quad/safe files, the block looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: nuie.solaris.mu&#039;&lt;br /&gt;
fsck -y /dev/concat/v30v31a&lt;br /&gt;
mount /dev/concat/v30v31a /mnt/data1/69.55.228.218-col01441-DIR&lt;br /&gt;
mount_devfs devfs /mnt/data1/69.55.228.218-col01441-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.228.218-col01441-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.228.218-col01441-DIR nuie.solaris.mu 69.55.228.218 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: nuie.solaris.mu&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These are more than just informative when running quad/safe’s; the echo lines MUST be present for certain tools to work properly. So it’s important that any change to the hostname on the jail line also be made on the 2 echo lines. For example, if you try to startjail a jail whose hostname is on the jail line but not the echo lines, the command will return host not found.&lt;br /&gt;
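Since tools key off the echo markers, a quick way to audit a quad/safe file is to pull out the hostnames the markers announce and compare them against the jail lines. A sketch of the extraction (a hypothetical helper, not one of the real tools):

```shell
# Print the hostnames announced by the "## begin ##" marker lines.
begin_hosts() {
    sed -n "s/^echo '## begin ##: \(.*\)'\$/\1/p" "$1"
}
```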
&lt;br /&gt;
=== FreeBSD 7.x+ notes ===&lt;br /&gt;
&lt;br /&gt;
Starting with the release of FreeBSD 7.x, we are doing jail startups in a slightly different way. First, there is only 1 file: &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/quad1&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are no other quads or corresponding safe files. The reason for this is twofold: 1. we can pass -C to fsck, which tells it to skip the fsck if the filesystem is clean (no more need for safe files); 2. we have a new startup script which can be launched multiple times, running in parallel to start jails, with quad1 as the master jail file. &lt;br /&gt;
Quad1 could still be run as a shell script, but it would take a very long time to run completely, so it’s not advisable; alternatively, you could break it down into smaller chunks (like quad1, quad2, quad3, etc.)&lt;br /&gt;
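The doc doesn’t show how parallel startalljails workers avoid starting the same jail twice; one plausible mechanism (an assumption, shown only for illustration) is a per-jail lock directory, claimed atomically, which preboot would then clear:

```shell
# Illustrative sketch: mkdir(2) is atomic, so when several workers scan
# quad1 in parallel, only the first to create a jail's lock dir starts it.
LOCKROOT=${LOCKROOT:-/tmp/jailstart}
claim() {
    mkdir -p "$LOCKROOT" 2>/dev/null
    mkdir "$LOCKROOT/$1" 2>/dev/null    # succeeds for exactly one worker
}
```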
&lt;br /&gt;
Here is a snip of (a 7.x version) quad1 from jail2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: projects.tw.com&#039;&lt;br /&gt;
mdconfig -a -t vnode -f /mnt/data1/69.55.230.46-col01213 -u 50&lt;br /&gt;
fsck -Cy /dev/md50c&lt;br /&gt;
mount /dev/md50c /mnt/data1/69.55.230.46-col01213-DIR&lt;br /&gt;
mount -t devfs devfs /mnt/data1/69.55.230.46-col01213-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.230.46-col01213-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.230.46-col01213-DIR projects.tw.com 69.55.230.46 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: projects.tw.com&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cancelled jails are no longer commented out and stored in quad1, rather they’re moved to &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/deprecated&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
To start these jails, start the 4 ssh sessions as you would for a normal crash, and then, instead of running quad1-4, run startalljails in each window. IMPORTANT: before running startalljails you should make sure you ran preboot once, as it will clear out all the lockfiles and enable startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== Problems with the quad/safe files ==&lt;br /&gt;
&lt;br /&gt;
When you run the quad/safe files, there are two problems that can occur - either a particular system will hang during initialization, OR a system will spit out output to the screen, impeding your ability to do anything.  Or both.&lt;br /&gt;
&lt;br /&gt;
First off, when you start a jail, you see output like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;Skipping disk checks ...&lt;br /&gt;
adjkerntz[25285]: sysctl(put_wallclock): Operation not permitted&lt;br /&gt;
Doing initial network setup:.&lt;br /&gt;
ifconfig: ioctl (SIOCDIFADDR): permission denied&lt;br /&gt;
lo0: flags=8049&amp;lt;UP,LOOPBACK,RUNNING,MULTICAST&amp;gt; mtu 16384&lt;br /&gt;
Additional routing options: TCP keepalive=YESsysctl:&lt;br /&gt;
net.inet.tcp.always_keepalive: Operation not permitted.&lt;br /&gt;
Routing daemons:.&lt;br /&gt;
Additional daemons: syslogd.&lt;br /&gt;
Doing additional network setup:.&lt;br /&gt;
Starting final network daemons:.&lt;br /&gt;
ELF ldconfig path: /usr/lib /usr/lib/compat /usr/X11R6/lib /usr/local/lib&lt;br /&gt;
a.out ldconfig path: /usr/lib/aout /usr/lib/compat/aout /usr/X11R6/lib/aout&lt;br /&gt;
Starting standard daemons: inetd cron sshd sendmail sendmail-clientmqueue.&lt;br /&gt;
Initial rc.i386 initialization:.&lt;br /&gt;
Configuring syscons: blanktime.&lt;br /&gt;
Additional ABI support:.&lt;br /&gt;
Local package initialization:.&lt;br /&gt;
Additional TCP options:.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, let&#039;s look at this line, near the end:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization:.&lt;br /&gt;
&lt;br /&gt;
This is where a list of daemons that are set to start at boot time will show up.  You might see something like:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache sendmail sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
Or something like this:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: postgres postfix apache&lt;br /&gt;
&lt;br /&gt;
The problem is that many systems (about 4-5 per machine) will hang on that line.  Basically it will get partway through the daemons to be started:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache&lt;br /&gt;
&lt;br /&gt;
and will just sit there.  Forever.&lt;br /&gt;
&lt;br /&gt;
Fortunately, pressing ctrl-c will break out of it.  Not only will it break out of it, but it will also continue on that same line and start the other daemons:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache ^c sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
and then continue on to finish the startup, and then move to the next system to be started.&lt;br /&gt;
&lt;br /&gt;
So what does this mean?  It means that if a machine crashes, and you start four screen-windows to run four quads or four safes, you need to periodically cycle between them and see if any systems are stuck at that point, causing their quad/safe file to hang.  A good rule of thumb is: if you see a system at that point in the startup, give it another 100 seconds - if it is still at the exact same spot, hit ctrl-c. It’s also a good idea to go back into the quad file (just before the first command in the jail’s startup block) and note that this jail tends to need a control-c or more time, as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;### NOTE ### slow sendmail&#039;&lt;br /&gt;
echo &#039;### NOTE ###: ^C @ Starting sendmail.&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NEVER&#039;&#039;&#039; hit ctrl-c repeatedly if you don&#039;t get an immediate response - that will cause the following jail’s startup commands to be aborted.&lt;br /&gt;
&lt;br /&gt;
A second problem that can occur is that a jail - maybe the first one in that particular quad/safe, maybe the last one, or maybe one in the middle, will start spitting out status or error messages from one of its init scripts.  This is not a problem - basically, hit enter a few times and see if you get a prompt - if you do get a prompt, that means that the quad/safe script has already completed.  Therefore it is safe to log out (and log out of the user that you su&#039;d from) and then log back in (if necessary).&lt;br /&gt;
&lt;br /&gt;
The tricky thing is, if a system in the middle starts flooding with messages, and you hit enter a few times and don&#039;t get a prompt.  Are you not getting a prompt because some subsequent system is hanging at initialization, as we discussed above?  Or are you not getting a prompt because that quad file is currently running an fsck?  Usually you can tell by scrolling back in screen’s history to see what it was doing before you started getting the messages.&lt;br /&gt;
&lt;br /&gt;
If you don’t get clues from history, you have to use your judgement - instead of giving it 100 seconds to respond, perhaps give it 2-3 mins ... if you still get no response (no prompt) when you hit enter, hit ctrl-c.  However, be aware that you might still be hitting ctrl-c in the middle of an fsck.  This means you will get an error like &amp;quot;filesystem still marked dirty&amp;quot;, the vnconfig for it will fail and so will the jail command, and the next system in the quad file will then start up.&lt;br /&gt;
&lt;br /&gt;
If this happens, just wait until all the quad files have finished, and then start that system manually.&lt;br /&gt;
&lt;br /&gt;
If things really get weird - a screen flooded with errors, no prompt, and ctrl-c does nothing - then eventually (give it ten mins or so) kill that window with ctrl-p, then k, then log in again and manually check which systems are now running and which aren&#039;t, and manually start up any that are not.&lt;br /&gt;
&lt;br /&gt;
Don&#039;t EVER risk running a particular quad/safe file a second time.&lt;br /&gt;
If the quad/safe script gets executed twice, reboot the machine immediately.&lt;br /&gt;
&lt;br /&gt;
So, for all the above reasons, anytime a machine crashes and you run all the quads or all the safes, &#039;&#039;&#039;always&#039;&#039;&#039; check every jail afterwards to make sure it is running - even if you have no hangs or complications at all.&lt;br /&gt;
Run this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailpsall|jailpsall]]&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Note: [[#postboot|postboot]] also populates ipfw counts, so it &#039;&#039;&#039;should not be run multiple times&#039;&#039;&#039;,  use &amp;lt;tt&amp;gt;jailpsall&amp;lt;/tt&amp;gt; for subsequent extensive ps’ing&lt;br /&gt;
&lt;br /&gt;
And make sure they all show as running.  If one does not show as running, check its /etc/rc.conf file to see if maybe it is using a different hostname before starting it manually.&lt;br /&gt;
&lt;br /&gt;
One thing we have implemented to alleviate these startup hangs and noisy jails is to put jail start blocks that are slow or prone to hanging at the bottom of the safe/quad file. Further, for each bad jail we note in each quad/safe, just before the start block, something like:&lt;br /&gt;
&lt;br /&gt;
 echo ‘### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted’&lt;br /&gt;
&lt;br /&gt;
That way we’ll be prepared to ^C when we see that message appear during the quad/safe startup process. If you observe a new, undocumented hang, &#039;&#039;&#039;after&#039;&#039;&#039; the quad/safe has finished, place a line similar to the above in the quad file, move the jail start block to the end of the file, then run [[#buildsafe|buildsafe]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 4.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which partition, IP to put it on, find an unused vn, and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vnX hostname shorthost quad/safe# ipfw# email [gb disk, default 4]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail14# /tmp/jailmake 69.55.226.152 /mnt/data2/69.55.226.152-col00182 vn23 negev.cerastes.org negev 4 182 cerastes@cerastes.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 6.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which gvinum vols are free (and which mnt (disk) each gvinum goes with/to), IP to put it on and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vN[,vN] hostname shorthost quad/safe# ipfw# email&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail19# jailmake 69.55.236.17 /mnt/data1/69.55.236.17-col01574 v10 uncle-leo.commadev.com uncle-leo 1 1574 lee@commadev.com,lee@gmail.com&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Changing an ip for freebsd VPS ==&lt;br /&gt;
&lt;br /&gt;
*stopjail (hostname)&lt;br /&gt;
*on systems using vnfile or mdfile: rename vnfile/mdfile, using new ip&lt;br /&gt;
*rename dir using new ip&lt;br /&gt;
*Edit quad (make sure to update all lines: directory, vn/mdfile)&lt;br /&gt;
*buildsafe (not necessary on systems having only quad1)&lt;br /&gt;
*if you&#039;re adding an ip not already on the host machine: &amp;lt;tt&amp;gt;ipadd x.x.x.x&amp;lt;/tt&amp;gt;&lt;br /&gt;
*startjail (hostname)&lt;br /&gt;
if backups: &amp;lt;tt&amp;gt;mvbackups&amp;lt;/tt&amp;gt; ...&lt;br /&gt;
*Edit dir/ip in mgmt&lt;br /&gt;
*if any rules, update any firewall rules to use the new ip (careful: make sure there aren&#039;t multiple rules per ipfw# - search by colo#)&lt;br /&gt;
*(if customer asks and has domains) update their domains on ns1c&lt;br /&gt;
*optional: update ipfw counters on the host server&lt;br /&gt;
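The rename steps above follow the IP-custnumber naming convention, so the new names can be computed from the old path. A hypothetical helper (not an actual tool on the hosts):

```shell
# Given an existing backing-file or -DIR path and a new IP, print the
# renamed path (everything after the first '-' is kept unchanged).
rename_ip() {
    dir=$(dirname "$1"); base=$(basename "$1")
    rest=${base#*-}
    printf '%s/%s-%s\n' "$dir" "$2" "$rest"
}
# e.g.  mv "$old" "$(rename_ip "$old" 69.55.230.9)"
```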
&lt;br /&gt;
&lt;br /&gt;
== Rename a gconcat vol ==&lt;br /&gt;
&lt;br /&gt;
*stop jail, unmount everything&lt;br /&gt;
*gconcat stop vol&lt;br /&gt;
*gconcat clear /dev/gvinum/vN&lt;br /&gt;
*gconcat clear /dev/gvinum/vN+1&lt;br /&gt;
*gconcat label -v newname /dev/gvinum/vN /dev/gvinum/vN+1&lt;br /&gt;
*bsdlabel /dev/concat/newname&lt;br /&gt;
(make sure no partition has fstype &amp;quot;unused&amp;quot;; if one does, change it to: 4.2BSD)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Remaking a VPS (on same jail) ==&lt;br /&gt;
&lt;br /&gt;
# take him down (stopjail)&lt;br /&gt;
# make note of host, vn/gvin/md, ip&lt;br /&gt;
# remove from quad/safe&lt;br /&gt;
# move vnfile (rename)&lt;br /&gt;
# remove dir (jailmake will make a new one)&lt;br /&gt;
# remove ipfw counts&lt;br /&gt;
# jailmake&lt;br /&gt;
# update backup script&lt;br /&gt;
# update db with new dir/ip (use the “move” link, mark old as stopped today)&lt;br /&gt;
# update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (FreeBSD) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all jails back up as soon as possible. Note the time; you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log. If there are no messages, the machine may just be really busy - wait a bit (5-10min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. Note: if you see messages about swap space exhausted, the server is obviously out of memory; however, it may recover briefly enough for you to get a jtop in to see who&#039;s launched a ton of procs (most likely) and then issue a quick jailkill to get it back under control.&lt;br /&gt;
&lt;br /&gt;
If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. if you can’t ssh into the DRAC card as root, using the standard root pass, and issue&lt;br /&gt;
 racadm serveraction hardreset&lt;br /&gt;
), then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console:&lt;br /&gt;
 tip jailX&lt;br /&gt;
immediately after the power cycle. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts, look out for the RAID card output: does everything look healthy? The output may be scrambled; look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple of times during the boot up. The reason you want to quickly re-attach is two-fold: 1. if you don’t reattach quickly, you won’t get any console output; 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts an (extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
IMPORTANT NOTE: on some older FreeBSD systems, there will be no output to the video (KVM) console as it boots up. The console output is redirected to the serial port ... so if a jail crashes, and you attach a kvm, the output during the bootup procedure will not be shown on the screen. However, when the bootup is done, you will get a login prompt on the screen and will be able to log in as normal.  &amp;lt;tt&amp;gt;/boot/loader.conf&amp;lt;/tt&amp;gt; is where the serial console redirect lives, so comment that line out if you want to catch output on the kvm.&lt;br /&gt;
On newer systems it sends most output to both locations. &lt;br /&gt;
&lt;br /&gt;
=== Assess the health of the server ===&lt;br /&gt;
Once the server boots up fully, you should be able to ssh in. Look around- make sure all the mounts are there and reporting the correct size/usage (i.e. /mnt/data1 /mnt/data2 /mnt/data3 - look in /etc/fstab to determine which mount points should be there), check to see if RAID mirrors are healthy. See [[RAID_Cards#Common_CLI_commands_.28megacli.29|megacli]], [[#aaccheck|aaccheck]]&lt;br /&gt;
&lt;br /&gt;
Before you start the jails, you need to run [[#preboot|preboot]]. This will do some assurance checks to make sure things are prepped to start the jails. Any issues that come out of preboot need to be addressed before starting jails.&lt;br /&gt;
&lt;br /&gt;
=== Start jails ===&lt;br /&gt;
[[#Starting_jails:_Quad.2FSafe_Files|More on starting jails]]&lt;br /&gt;
Customer jails (the VPSs) do not start up automatically at boot time. When a FreeBSD machine boots up, it boots the base system and does nothing else. To start jails, we put the commands to start each jail into a shell script(s) and run the script(s). Jail startup is something that needs to be actively monitored, which is why we don’t just run the script automatically. &lt;br /&gt;
&lt;br /&gt;
In order to start jails, we run the quad files: quad1 quad2 quad3 and quad4 (on new systems there is only quad1). If the machine was cleanly rebooted- which wouldn&#039;t be the case if this was a crash- you may run the safe files (safe1 safe2 safe3 safe4) in lieu of quads. &lt;br /&gt;
&lt;br /&gt;
Open up 4 logins to the server (use the windows in [[Screen#Screen_Organization|a9]])&lt;br /&gt;
In each of the 4 windows you will:&lt;br /&gt;
&lt;br /&gt;
If there is a [[#startalljails|startalljails]] script (and only quad1), run that command in each of the 4 windows. It will parse through the quad1 file and start each jail. Follow the instructions [[#Problems_with_the_quad.2Fsafe_files|here]] for monitoring startup. Note that you can be a little more lenient with jails that take a while to start- startalljails will work around the slow jails and start the rest. As long as there aren&#039;t 4 jails which are &amp;quot;hung&amp;quot; during startup, the rest will get started eventually.&lt;br /&gt;
	-or-&lt;br /&gt;
If there is no startalljails script, there will be multiple quad files. In each of the 4 windows, start each of the quads. i.e. start quad1 in window1, quad2 in window2 and so on. DO NOT start any quad twice. It will crash the server. If you accidentally do this, just jailkill all the jails which are in the quad and run the quad again. Follow the instructions here for monitoring quad startup.&lt;br /&gt;
&lt;br /&gt;
Note the time the last jail boots- this is what you will enter in the crash log.&lt;br /&gt;
&lt;br /&gt;
Save the crash log.&lt;br /&gt;
&lt;br /&gt;
=== Check to make sure all jails have started ===&lt;br /&gt;
There&#039;s a simple script which will make sure all jails have started, and enter the ipfw counter rules: [[#postboot|postboot]] &lt;br /&gt;
Run postboot, which will do a jailps on each jail it finds (excluding commented out jails) in the quad file(s). We&#039;re looking for 2 things:&lt;br /&gt;
# systems spawning out of control or too many procs&lt;br /&gt;
# jails which haven&#039;t started&lt;br /&gt;
On 7.x and newer systems it will print out the problems (which jails haven&#039;t started) at the conclusion of postboot. &lt;br /&gt;
On older systems you will need to watch closely to see if/when there&#039;s a problem, namely:&lt;br /&gt;
 &lt;br /&gt;
 [hostname] doesnt exist on this server&lt;br /&gt;
&lt;br /&gt;
When you get this message, it means one of 2 things:&lt;br /&gt;
1. the jail really didn&#039;t start:&lt;br /&gt;
When a jail doesn&#039;t start it usually boils down to a problem in the quad file. Perhaps the path name is wrong (data1 vs data2) or the name of the vn/mdfile is wrong. Once this is corrected, you will need to run the commands from the quad file manually, or you may use &amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. the customer has changed their hostname (and not told us) so their jail &#039;&#039;is&#039;&#039; running, just under a different hostname:&lt;br /&gt;
On systems with jls, this is easy to rectify. First, get the customer info: &amp;lt;tt&amp;gt;g &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
Then look for the customer in jls: &amp;lt;tt&amp;gt;jls | grep &amp;lt;col0XXXX&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
From there you will see their new hostname- you should update that hostname in the quad file: don&#039;t forget to edit it on the &amp;lt;tt&amp;gt;## begin ##&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;## end ##&amp;lt;/tt&amp;gt; lines, and in mgmt. &lt;br /&gt;
On older systems without jls, this will be harder; you will need to look further to find their hostname- perhaps it&#039;s in their /etc/rc.conf&lt;br /&gt;
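Looking up a renamed jail can be sketched like this; the jls output below is fabricated for illustration (the hostname and path are made-up examples), and on a real server you would pipe the actual &amp;lt;tt&amp;gt;jls&amp;lt;/tt&amp;gt; output through grep instead:&lt;br /&gt;

```shell
# Simulated jls output (fabricated example) showing how to find a
# renamed jail by its customer ID. On the server this would simply be:
#   jls | grep col01334
jls_output="   JID  IP Address      Hostname                  Path
    12  69.55.234.66    newname.example.com       /mnt/data3/69.55.234.66-col01334-DIR"

# grep for the customer ID to recover the new hostname
echo "$jls_output" | grep col01334
```

The matching line shows the hostname the customer is now using, which is what goes back into the quad file and mgmt.&lt;br /&gt;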
&lt;br /&gt;
&lt;br /&gt;
Once all jails are started, do some spot checks- try to ssh or browse to some customers, just to make sure things are really ok.&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 7.x/8.x jail ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (md) ==&lt;br /&gt;
&lt;br /&gt;
NOTE: this doesn’t apply to mx2, which uses gvinum; use the same procedure as 6.x&lt;br /&gt;
NOTE: if you unmount before mdconfig, re-mdconfig (attach), then unmount, then mdconfig -u again &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
(parts to change/customize are &amp;lt;tt&amp;gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from &amp;lt;tt&amp;gt;js&amp;lt;/tt&amp;gt;. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new md on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. Unmount it (including their devfs) but leave the md config’d (so if you use stopjail, you will have to re-mdconfig it)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mdfile and mount name, and to see the current md device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. When there&#039;s enough room to place new system on an alternate, or the same drive:&lt;br /&gt;
USE CAUTION not to overwrite (touch, mdconfig) existing md!!&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s 10g -f /mnt/data3/69.55.234.66-col01334 -u 97&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md97&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new space is on a different drive, move the mount point directory AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
md-unconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
archive old mdfile. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/old-col00241-mdfile-noarchive-20091211&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (vq1) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
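The 6a path above can be summarized as one dry-run sketch; the file name, mount point, md units, and size are the example values from the walkthrough (not real customer data), and every command is echoed rather than executed:&lt;br /&gt;

```shell
# Dry-run of the "enough room" path (6a): create a new md-backed
# filesystem, then dump|restore the old one into it. All values are
# the example values from the procedure above, not real data.
MDFILE=/mnt/data3/69.55.234.66-col01334
DIR=/mnt/data3/69.55.234.66-col01334-DIR
NEWMD=97
OLDMD=1
SIZE=10g

echo "touch $MDFILE"
echo "mdconfig -a -t vnode -s $SIZE -f $MDFILE -u $NEWMD"
echo "newfs /dev/md$NEWMD"
echo "mount /dev/md$NEWMD $DIR"
echo "cd $DIR"
echo "dump -0a -f - /dev/md$OLDMD | restore -r -f -"
echo "rm restoresymtable"
echo "mdconfig -d -u $OLDMD"
```

The CAUTION above still applies: confirm the md unit and mdfile are unused before running any of this for real.&lt;br /&gt;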
&lt;br /&gt;
&lt;br /&gt;
6b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
(run &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, &amp;lt;tt&amp;gt;du&amp;lt;/tt&amp;gt; the dump file to confirm it matches size, roughly, with usage)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure and remove old mdfile&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
rm /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(there should now be enough space to recreate your bigger system. If not, run sync a couple times)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new system (ok to reuse old mdfile and md#):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;10&amp;lt;/span&amp;gt;g -f /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt; -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
umount nfs:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbu&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If md# changed (or mount point), edit quad (&amp;lt;tt&amp;gt;vq1&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. Optional: archive old mdfile&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;br&amp;gt;&lt;br /&gt;
gzip -c old-col01588-mdfile-noarchive-20120329 &amp;gt; /deprecated/old-col01588-mdfile-noarchive-20120329.gz&amp;lt;br&amp;gt;&lt;br /&gt;
mbu&amp;lt;br&amp;gt;&lt;br /&gt;
rm  old-col01588-mdfile-noarchive-20120329&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 6.x jail (gvinum/gconcat) ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (gvinum/gconcat) ==&lt;br /&gt;
&lt;br /&gt;
(parts to change are &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;highlighted&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new md on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;[[#stopjail|stopjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;[[#g|g]] &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mount name and existing volume/device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5a. When there&#039;s enough room to place new system on an alternate, or the same drive (using only UNUSED - including if it&#039;s in use by the system in question - gvinum volumes):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure the new device:&amp;lt;br&amp;gt;&lt;br /&gt;
A. for a 2G system (single gvinum volume):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;bsdlabel -r -w /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
-or- &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
B. for a &amp;gt;2G system (create a gconcat volume):&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Other valid gconcat examples:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label v82-v84v109v112 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v109 /dev/gvinum/v112&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat label v82v83 /dev/gvinum/v82 /dev/gvinum/v83&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Note, long names will truncate: v144v145v148-v115 will truncate to v144v145v148-v1 (so you will refer to it as v144v145v148-v1 thereafter)&lt;br /&gt;
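Since labels truncate silently, it can help to check a candidate name ahead of time. This sketch assumes a 15-character cap, which is what the example above implies (&amp;quot;v144v145v148-v115&amp;quot; becoming &amp;quot;v144v145v148-v1&amp;quot;):&lt;br /&gt;

```shell
# Check whether a gconcat label will truncate before using it.
# The 15-character limit here is inferred from the example in the
# text; verify against your gconcat version before relying on it.
name="v144v145v148-v115"
truncated=$(printf '%.15s' "$name")
if [ "$name" != "$truncated" ]; then
  echo "WARNING: label '$name' will truncate to '$truncated'"
fi
```

If it truncates, either pick a shorter name up front or remember to refer to the volume by its truncated name thereafter, as the note says.&lt;br /&gt;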
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; OR &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
A. &amp;lt;tt&amp;gt;mount /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
-or-&amp;lt;br&amp;gt;&lt;br /&gt;
B. &amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;) , run &amp;lt;tt&amp;gt;buildsafe&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space (i.e. if you want/need to reuse the existing gvinum volumes and add on more):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
(run df to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;concat/v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, du the dump file to confirm it matches size, roughly, with usage)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure the old gconcat volume&amp;lt;br&amp;gt;&lt;br /&gt;
list member gvinum volumes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat list &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Output will resemble:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;Geom name: v106v107&lt;br /&gt;
State: UP&lt;br /&gt;
Status: Total=2, Online=2&lt;br /&gt;
Type: AUTOMATIC&lt;br /&gt;
ID: 3530663882&lt;br /&gt;
Providers:&lt;br /&gt;
1. Name: concat/v106v107&lt;br /&gt;
   Mediasize: 4294966272 (4.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e2&lt;br /&gt;
Consumers:&lt;br /&gt;
1. Name: gvinum/sd/v106.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 0&lt;br /&gt;
   End: 2147483136&lt;br /&gt;
2. Name: gvinum/sd/v107.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 2147483136&lt;br /&gt;
   End: 4294966272&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
stop volume and clear members&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat stop &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat clear &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;gvinum/sd/v106.p0.s0 gvinum/sd/v107.p0.s0&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create new device- and it&#039;s ok to reuse old/former members&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v106 /dev/gvinum/v107&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;), run buildsafe&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
TODO: clean up/clear old gvin/gconcat vol&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
DEPRECATED: steps to tack on a new gvinum volume to an existing gconcat- this leads to a corrupted fs&lt;br /&gt;
bsdlabel -e /dev/concat/v82-v84&lt;br /&gt;
&lt;br /&gt;
To figure out new size of the c partition, multiply 4194304 by the # of 2G gvinum volumes and subtract the # of 2G volumes:&lt;br /&gt;
10G: 4194304 * 5 - 5 = 20971515&lt;br /&gt;
8G: 4194304 * 4 - 4 = 16777212&lt;br /&gt;
6G: 4194304 * 3 - 3 = 12582909&lt;br /&gt;
4G: 4194304 * 2 - 2 = 8388606&lt;br /&gt;
&lt;br /&gt;
To figure out the new size of the a partition, subtract 16 from the c partition:&lt;br /&gt;
10G: 20971515 - 16 = 20971499&lt;br /&gt;
8G: 16777212 - 16 = 16777196&lt;br /&gt;
6G: 12582909 - 16 = 12582893&lt;br /&gt;
4G: 8388606 - 16 = 8388590&lt;br /&gt;
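The two formulas can be checked with shell arithmetic (N is the number of 2G gvinum volumes); the loop reproduces the tables above:&lt;br /&gt;

```shell
# Partition-size formulas from the deprecated grow procedure:
#   c = 4194304 * N - N   (N = number of 2G gvinum volumes)
#   a = c - 16
for N in 2 3 4 5; do
  c=$((4194304 * N - N))
  a=$((c - 16))
  echo "${N} volumes: c=$c a=$a"
done
```

For example, 5 volumes (10G) gives c=20971515 and a=20971499, matching the table.&lt;br /&gt;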
&lt;br /&gt;
Orig:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a:  8388590       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c:  8388606        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
New:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a: 12582893       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c: 12582909        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
sync; sync&lt;br /&gt;
&lt;br /&gt;
growfs /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
sync&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
(keep running fsck’s till NO errors)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 4.x jail ==&lt;br /&gt;
&lt;br /&gt;
NOTE: if you unmount before vnconfig, re-vnconfig, then unmount, then vnconfig -u&lt;br /&gt;
 &lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them.&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to dump to a different partition and create the new vn on the current partition. If there&#039;s no space to make the dump, then move old vn’s to backup or, worse, dump to backup.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. make vnfile, newfs and mount. USE CAUTION not to overwrite existing vn. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;vnconfig -T -S 9g -s labels -c /dev/vn30 /mnt/data2/65.214.160.117-col00496&amp;lt;br&amp;gt;&lt;br /&gt;
disklabel -r -w vn30 auto&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/vn30c&amp;lt;br&amp;gt;&lt;br /&gt;
mkdir /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/vn30c /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;dump -0a -f - /dev/vn1 | restore -r -f -&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. &amp;lt;tt&amp;gt;rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. unmount and unvnconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;umount /dev/vnNNc &amp;lt;br&amp;gt;&lt;br /&gt;
vnconfig -u /dev/vnNN&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. move vnfile. Ex: &amp;lt;tt&amp;gt;mv /mnt/data1/69.55.237.26-col00241 /mnt/data1/old-col00241-vnfile-20110312-noarchive&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove old dir: &amp;lt;tt&amp;gt;rmdir /mnt/data1/69.55.237.26-col00241-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. edit quad/safe to point to new location, run &amp;lt;tt&amp;gt;[[#buildsafe|buildsafe]]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start jail: &amp;lt;tt&amp;gt;[[#startjail|startjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
13. mv backups if necessary&lt;br /&gt;
 &lt;br /&gt;
WARNING: if you restore to an unmounted vn then you’re actually restoring to /mnt/data1&lt;br /&gt;
you can tar the data up and repeat the steps above to recover&lt;br /&gt;
 &lt;br /&gt;
NOTE: you can’t move a customer to a system that has a base sys that is diff from the one they came from (BSD 4.8 to 4.5 = won’t work)&lt;br /&gt;
 &lt;br /&gt;
NOTE: we don’t mount procfs&#039;s to anyone’s system by default.&lt;br /&gt;
&lt;br /&gt;
NOTE: mount_nulls: &lt;br /&gt;
goofy mount_nulls are also seen in df -k on jail2&amp;lt;br&amp;gt;&lt;br /&gt;
they also need to be set up in the safe and quad files&amp;lt;br&amp;gt;&lt;br /&gt;
if a customer wants another, use the safe/quad entries as an example and add it to the file. Also exec the mount command at the prompt to add the mount (no reboot, remount or rejail necessary)&lt;br /&gt;
&lt;br /&gt;
== Moving customer to another jail machine ==&lt;br /&gt;
&lt;br /&gt;
Systems should only be moved between similarly-versioned jails. If you cannot get a match, you should go to freebsd.org and see what the differences were between the version you&#039;re moving from and the new version. Many of the updates will be to drivers and kernels - these don&#039;t affect the customer. Perhaps they updated traceroute or a library...you should copy the new/changed files into the VPS/jail on the new host. An alternate method for moving to a different version would be to buildworld to bring their full distribution up to date. Optional steps for this kind of upgrade are included below.&lt;br /&gt;
&lt;br /&gt;
1. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;[[#jailkill|jailkill]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. create new device on target system&amp;lt;br&amp;gt;&lt;br /&gt;
4.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig -T -S 4g -s labels -c /dev/vn1 /mnt/data1/69.55.22x.x-col00XXX&lt;br /&gt;
disklabel -r -w vn1 auto&lt;br /&gt;
newfs /dev/vn1c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;bsdlabel -r -w /dev/gvinum/v1&lt;br /&gt;
newfs /dev/gvinum/v1a&lt;br /&gt;
 or &lt;br /&gt;
gconcat label v1-v3 /dev/gvinum/v1 /dev/gvinum/v2 /dev/gvinum/v3&lt;br /&gt;
bsdlabel -r -w /dev/concat/v1-v3&lt;br /&gt;
newfs /dev/concat/v1-v3a&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7.x+:&amp;lt;br&amp;gt;&lt;br /&gt;
Run jailmakeempty, then skip steps 4 &amp;amp; 7 below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. make and mount the dir on new system. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;pre&amp;gt;mkdir /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
chmod 755 /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
mount [device] /mnt/data2/69.55.230.3-col00123-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. [[#stopjail|stopjail]] &amp;lt;hostname&amp;gt; 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. dump fs to new system. ex: &amp;lt;tt&amp;gt;[[#dumpremoterestore|dumpremoterestore]] /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&amp;lt;/tt&amp;gt; (make sure you can ssh as root on remote machine) &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. OPTIONAL BUILDWORLD:&amp;lt;br&amp;gt;&lt;br /&gt;
 cd /usr/src&lt;br /&gt;
 make world DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
 cd etc&lt;br /&gt;
 make distribution DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
(you may have to rm an openssh file and re-make dist)&lt;br /&gt;
&lt;br /&gt;
 rm -rf /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/periodic/daily/400.status-disks&lt;br /&gt;
 vi /etc/periodic/security/100.chksetuid&lt;br /&gt;
replace: &amp;lt;tt&amp;gt;MP=`mount -t ufs | grep -v &amp;quot; nosuid&amp;quot; | awk &#039;{ print $3 }&#039; | sort`&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
with: &amp;lt;tt&amp;gt;MP=&#039;/&#039; (use single quotes)&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ask user if they want ports overwritten with current, if yes:&lt;br /&gt;
 cp -r /usr/ports /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/usr&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d .&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d . &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cat /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target:&lt;br /&gt;
 cat &amp;gt; /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. edit the quad on the source system and copy the entries for the jail over to the target system. &lt;br /&gt;
Take care that the vn/gvinum/gconcat/md devices on the target system aren’t in use and that the /mnt/dataN path doesn’t need changing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. run [[#buildsafe|buildsafe]] (if this is &amp;lt;=6.x) on target system (copies changes made to quad into safe file)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove ip from source system. Ex: &amp;lt;tt&amp;gt;ipdel 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. add ip to target system. Ex: &amp;lt;tt&amp;gt;ipadd 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start new system, with [[#startjail|startjail]] or manually, pasting entries found from running &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; on the new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. run &amp;lt;tt&amp;gt;[[#canceljail|canceljail]] col0xxxx&amp;lt;/tt&amp;gt; on source system. This should prompt you to remove backups if any existed. Do not let it do this, rather:&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
13. if backups existed move them to the new host via the &amp;lt;tt&amp;gt;[[#mvbackups_.28freebsd.29|mvbackups]] script&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
14. edit quad on source system, edit comment to reflect a move rather than a cancel, ex: &amp;lt;tt&amp;gt;# moved to jail2 col00241&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
15. edit mgmt to reflect new host and dir for new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
16. optional add ipfw rules on new system. Ex:&lt;br /&gt;
&amp;lt;tt&amp;gt;ipfw add 01231 count ip from 69.55.230.2 to any&lt;br /&gt;
ipfw add 01232 count ip from any to 69.55.230.2&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Increasing inodes for a VPS ==&lt;br /&gt;
&lt;br /&gt;
when doing the newfs: &lt;br /&gt;
 newfs -i  4096 ...&lt;br /&gt;
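The -i flag sets the bytes-per-inode density, so a smaller number means more inodes. A sketch of the arithmetic (the real count comes out somewhat lower after filesystem metadata overhead):&lt;br /&gt;

```shell
# approximate inode count for a 4 GB vn newfs'd with -i 4096
fs_bytes=$((4 * 1024 * 1024 * 1024))   # filesystem size in bytes
density=4096                           # bytes per inode (newfs -i 4096)
echo $((fs_bytes / density))           # prints 1048576
```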
&lt;br /&gt;
&lt;br /&gt;
== Jail services unable to contact services within same jail ==&lt;br /&gt;
&lt;br /&gt;
This is due to a messed up routing table (only seen it on jail2 before). &lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.228.65/32&lt;br /&gt;
 route add -net 69.55.228.65/32 -iface fxp0 -nostatic -cloning&lt;br /&gt;
 route add 69.55.228.65 -iface lo0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== /dev/null permission resets == &lt;br /&gt;
&lt;br /&gt;
Applies to 4.x jails only&lt;br /&gt;
&lt;br /&gt;
For some reason that I do not understand at all, the /dev/null node in customer jails, after the system is restarted, often reverts permissions to 0600 ... which is bad, because non-root processes cannot redirect to /dev/null anymore.  In fact, a fair number of server daemons will complain and/or fail if this is the case.&lt;br /&gt;
&lt;br /&gt;
So, after a reboot, once all systems have restarted, run postboot, which does something like:&lt;br /&gt;
 &lt;br /&gt;
 for f in `df -k | grep /dev/vn | awk &#039;{print $6}&#039;` ; do chmod 0666 $f/dev/null ; done&lt;br /&gt;
&lt;br /&gt;
Later, after the system is up, if anyone ever complains about their /dev/null permissions for any reason ... really if anyone mentions&lt;br /&gt;
/dev/null in any way, tell them to run:&lt;br /&gt;
&lt;br /&gt;
 chmod 0666 /dev/null&lt;br /&gt;
&lt;br /&gt;
We have ultimately fixed this by adding chmods to the quad/safe&#039;s&lt;br /&gt;
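The fix added to the quad/safe’s boils down to the same chmod. A minimal stand-in sketch, using a temp file instead of a real device node so it’s safe to run anywhere:&lt;br /&gt;

```shell
# stand-in for a jail's /dev/null that reverted to 0600 after a restart
f=$(mktemp)
chmod 0600 "$f"          # the bad post-restart state
chmod 0666 "$f"          # the fix run from postboot / the quad
ls -l "$f" | cut -c1-10  # prints -rw-rw-rw-
rm -f "$f"
```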
&lt;br /&gt;
&lt;br /&gt;
== mknod in a jail == &lt;br /&gt;
&lt;br /&gt;
For a customer who wants to run a chroot name server (named) in a jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /mnt/data1/&amp;lt;user directory&amp;gt;/var/named&lt;br /&gt;
mknod -c 1 1 null&lt;br /&gt;
mknod -c 2 3 random&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Postfix Problems ==&lt;br /&gt;
&lt;br /&gt;
This hasn&#039;t happened in a long time, probably no longer valid with modern OS&#039;s&lt;br /&gt;
&lt;br /&gt;
Postfix is an alternate MTA - people replace sendmail with it in much the same way that many people also replace sendmail with qmail.  The problem is that if you install postfix inside of a jail, by default it will not work properly - the user will see error messages like this:&lt;br /&gt;
&lt;br /&gt;
incoming mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 mail_queue_enter: create file incoming/298266.55902: File too large&lt;br /&gt;
&lt;br /&gt;
and further, sending mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 postdrop: warning: mail_queue_enter: create file maildrop/627930.56676: File too large&lt;br /&gt;
&lt;br /&gt;
This is very easy to solve.  In fact, the FreeBSD welcome email that is sent out by `jailmake` now contains this block of text:&lt;br /&gt;
&lt;br /&gt;
- if you plan on installing postfix, email us first and ask for a necessary patch.&lt;br /&gt;
&lt;br /&gt;
So, anytime you see a support email that says anything about postfix not working, paste that line from the welcome email into your response to them, and right away, forward the postfix patch to them in an email.&lt;br /&gt;
&lt;br /&gt;
Here are the instructions:&lt;br /&gt;
&lt;br /&gt;
Simply copy the patch into /usr/ports/mail/postfix/files, rename the file to ‘patch-file_limit.c’ and then cd to /usr/ports/mail/postfix and run `make install`.&lt;br /&gt;
&lt;br /&gt;
The postfix patch is the very first email in the support email box, and we never delete or save it because we always want it there to forward to people.  The instructions in the email are very clear, and it solves their problem 100% of the time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems with `find` ==&lt;br /&gt;
&lt;br /&gt;
There was a bug in the FreeBSD vn-filesystem code.  The bug causes the system to crash or hang when someone runs the `find` command inside their vn-backed filesystem (and even though the bug may have been fixed, due to the i/o overhead, we still don&#039;t like find).&lt;br /&gt;
&lt;br /&gt;
Now, this does not always happen - however, any jail machine with more than 20 systems on it will crash every night if every system on it runs the daily periodic script out of their crontab, and updates their locate database with the find command.&lt;br /&gt;
&lt;br /&gt;
(the locate database is a small, FreeBSD specific database that is populated nightly from a cron job, and is then used to provide fast answers to the `locate` command)&lt;br /&gt;
&lt;br /&gt;
In normal systems (32-40 jails on them), all that has to be done is make sure nobody runs the daily periodic.  This means that the file /etc/crontab, instead of looking like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# do daily/weekly/monthly maintenance&lt;br /&gt;
1      3       *       *       *       root    periodic daily&lt;br /&gt;
15     4       *       *       6       root    periodic weekly&lt;br /&gt;
30     5       1       *       *       root    periodic monthly&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Needs to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# DO NOT UNCOMMENT THESE - contact support@johncompanies.com for details&lt;br /&gt;
#1      3       *       *       *       root    periodic daily&lt;br /&gt;
#15     4       *       *       6       root    periodic weekly&lt;br /&gt;
#30     5       1       *       *       root    periodic monthly&lt;br /&gt;
## DO NOT UNCOMMENT THESE ^^^^^^^^&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the problem will generally not occur at all.  New filesystem images are always altered in this way, so all systems are set like this.&lt;br /&gt;
&lt;br /&gt;
However, if you have enough systems running, even casual uses of find, outside of the periodic scripts, can cause the system to crash.  We are seeing this on jail13, which crashes about once per week.&lt;br /&gt;
&lt;br /&gt;
There is not yet any resolution to this problem.&lt;br /&gt;
&lt;br /&gt;
So, if you have a freebsd system crash, see if anyone has uncommented their daily periodic line - you can see them for everyone by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;for f in `df -k | grep vn | awk &#039;{print $6}&#039;` ; do echo `cat $f/etc/crontab | grep &amp;quot;periodic daily&amp;quot;` $f ; done | more&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(all one line, one command)&lt;br /&gt;
&lt;br /&gt;
And then edit those /etc/crontabs and comment them out again.&lt;br /&gt;
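Re-commenting the periodic lines can be scripted. A hedged sketch (run here against a temp copy, and printing the result rather than editing in place, since sed -i syntax differs between GNU and BSD sed):&lt;br /&gt;

```shell
# comment out any uncommented "periodic" lines in a crontab-style file
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
1      3       *       *       *       root    periodic daily
#15     4       *       *       6       root    periodic weekly
EOF
sed '/^[^#].*periodic/s/^/#/' "$tmp"   # daily gets a '#', weekly is left alone
rm -f "$tmp"
```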
&lt;br /&gt;
&lt;br /&gt;
== Problems un-mounting - and with mount_null’s ==&lt;br /&gt;
&lt;br /&gt;
If you cannot unmount a filesystem because it says the filesystem is busy, it is usually for one of the following reasons:&lt;br /&gt;
&lt;br /&gt;
a) the jail is still running&lt;br /&gt;
&lt;br /&gt;
b) you are actually in that directory, even though the jail is stopped&lt;br /&gt;
&lt;br /&gt;
c) there are still dev, null_mount or linprocfs mount points mounted inside that directory.&lt;br /&gt;
&lt;br /&gt;
d) when trying to umount null_mounts that are really long and you get an error like “No such file or directory”, it’s an OS bug where the dir is truncated. No known fix&lt;br /&gt;
&lt;br /&gt;
e) there are still files open somewhere inside the dir. Use &amp;lt;tt&amp;gt;fstat | grep &amp;lt;cid&amp;gt;&amp;lt;/tt&amp;gt; to find the process that has files open&lt;br /&gt;
&lt;br /&gt;
f) Starting with 6.x, the jail mechanism does a poor job of keeping track of processes running in a jail and if it thinks there are still procs running, it will refuse to umount the disk. If this is happening you should see a low number in the #REF column when you run jls. In this case you &#039;&#039;can&#039;&#039; safely &amp;lt;tt&amp;gt;umount -f&amp;lt;/tt&amp;gt; the mount. &lt;br /&gt;
&lt;br /&gt;
Please note -if you forcibly unmount a (4.x) filesystem that has null_mounts&lt;br /&gt;
still mounted in it, the system &#039;&#039;&#039;will crash&#039;&#039;&#039; within 10-15 mins.&lt;br /&gt;
&lt;br /&gt;
== Misc jail Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space on jail2, jail8, jail9, a couple jails on jail17, jail4, jail12 and jail18.&lt;br /&gt;
Even though the vn file shows 4G size, it doesn’t actually occupy that amount of space on the disk. So be careful not to fill up drives where we’re overselling – use oversellcheck to confirm you’re not oversold by more than 10G.&lt;br /&gt;
There are other truncated jails; they are generally noted in a file on the root system: /root/truncated&lt;br /&gt;
&lt;br /&gt;
The act of moving a truncated vn to another system undoes the truncation: the truncated vn is filled with 0’s and occupies the full physical disk space for which it’s configured. So, you should use dumpremote to preserve the truncation.&lt;br /&gt;
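The underlying sparse-file behavior is easy to demonstrate on any system: a truncated vn file is just a file whose apparent size exceeds its allocated blocks. A sketch with an ordinary temp file (assumes the filesystem supports sparse files):&lt;br /&gt;

```shell
# a sparse file: ~100 MB apparent size, almost nothing on disk
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1 count=1 seek=104857599 2>/dev/null
echo "apparent: $(wc -c < "$f") bytes"            # 104857600 bytes
echo "on disk:  $(du -k "$f" | awk '{print $1}') KB"
rm -f "$f"
```

Copying such a file with a tool that doesn’t preserve holes (or restoring it on the far end) allocates every block, which is why a plain move un-does the truncation.&lt;br /&gt;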
&lt;br /&gt;
* if you are getting disk full messages for a BSD customer, it&#039;s fairly safe to clear out their /usr/ports/distfiles dir&lt;br /&gt;
* 4.x: ps and top can only be run by root in these jails.  Done on purpose:&lt;br /&gt;
As for `ps` and `top` - non-root users can run them, just not successfully because we have locked the permissions on /dev/mem and /dev/kmem to be root-readable only.  That is why non-root users cannot successfully run `ps` and `top`.&lt;br /&gt;
* user quotas do not work on freebsd jails - you cannot set up quotas at all, and that&#039;s that.&lt;br /&gt;
* You cannot inject a process into a 4.x jail, only HUP running processes. 6.x and onward you can with jexec&lt;br /&gt;
* jails see base machine’s uptime/load when running top/w&lt;br /&gt;
* if someone is unable to get in (can’t ping, etc.), see if they were blocked by castle (DoS), and see if their ip is on the system (post-reboot it can be lost if it wasn’t in rc.conf). preboot should catch that.&lt;br /&gt;
* in FreeBSD you can’t su to root unless you belong to the wheel group – so if you remove your acct and set up a new one, we have to add it to the wheel group (add to /etc/group)&lt;br /&gt;
* Dmesg from the underlying sys is seen in the customer’s dmesg on a jail&lt;br /&gt;
* Popper process going crazy @ 40% for 10min = someone who leaves mail on server&lt;br /&gt;
* Don’t force umounts on 4.x jails – it crashes the machine, generally ok on newer machines (running md)&lt;br /&gt;
* Good book for admin http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?userid=t824VyRAYz&amp;amp;isbn=0596005164&amp;amp;itm=2&lt;br /&gt;
* Self-sign ssl cert http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#selfcert&lt;br /&gt;
* conversation with Glenn about semaphores and pgsql&lt;br /&gt;
&amp;lt;pre&amp;gt;SDBoody: hey, these are valid amounts/figures/increments right:&lt;br /&gt;
 kern.ipc.semmni=1280&lt;br /&gt;
 kern.ipc.semmns=1280&lt;br /&gt;
 &lt;br /&gt;
gr8feen: probably... I always forget exactly what those are, so I usually have to look them up&lt;br /&gt;
SDBoody: semaphores&lt;br /&gt;
SDBoody: i took the current 1024 and added 256 to them&lt;br /&gt;
SDBoody: need more for pgsql&lt;br /&gt;
gr8feen: I meant the mni and mns parts...&lt;br /&gt;
gr8feen: some of those are not ones you just want to add to...   hang on a sec and I&#039;ll look them up..&lt;br /&gt;
gr8feen: what&#039;s semmsl set to?&lt;br /&gt;
SDBoody: kern.ipc.semmsl: 1024&lt;br /&gt;
 &lt;br /&gt;
SDBoody: kern.ipc.msgseg: 2048 &lt;br /&gt;
 kern.ipc.msgssz: 8 &lt;br /&gt;
 kern.ipc.msgtql: 40 &lt;br /&gt;
 kern.ipc.msgmnb: 2048 &lt;br /&gt;
 kern.ipc.msgmni: 40 &lt;br /&gt;
 kern.ipc.msgmax: 16384 &lt;br /&gt;
 kern.ipc.semaem: 16384 &lt;br /&gt;
 kern.ipc.semvmx: 65534 &lt;br /&gt;
 kern.ipc.semusz: 152 &lt;br /&gt;
 kern.ipc.semume: 10 &lt;br /&gt;
 kern.ipc.semopm: 100 &lt;br /&gt;
 kern.ipc.semmsl: 1024 &lt;br /&gt;
 kern.ipc.semmnu: 512 &lt;br /&gt;
 kern.ipc.semmns: 1024 &lt;br /&gt;
 kern.ipc.semmni: 1024 &lt;br /&gt;
 kern.ipc.semmap: 768 &lt;br /&gt;
 kern.ipc.shm_allow_removed: 0 &lt;br /&gt;
 kern.ipc.shm_use_phys: 1 &lt;br /&gt;
 kern.ipc.shmall: 262144 &lt;br /&gt;
 kern.ipc.shmseg: 256 &lt;br /&gt;
 kern.ipc.shmmni: 784 &lt;br /&gt;
 kern.ipc.shmmin: 1 &lt;br /&gt;
 kern.ipc.shmmax: 536870912 &lt;br /&gt;
 kern.ipc.maxsockets: 25600 &lt;br /&gt;
 &lt;br /&gt;
gr8feen: ok...msl is max per id, mni is max ids, mns is max number of semaphores... so you probably want something like mns = mni * msl&lt;br /&gt;
gr8feen: which one did you run out of?&lt;br /&gt;
SDBoody: not sure how to tell- ipcs shows the sems in use add up to 1024&lt;br /&gt;
SDBoody: there are 59 entries&lt;br /&gt;
gr8feen: I&#039;m assuming you tried to start postgres and it failed?&lt;br /&gt;
SDBoody: yes&lt;br /&gt;
gr8feen: it should have logged why, somewhere..&lt;br /&gt;
gr8feen: if I recall, it&#039;ll tell you which one it ran out of&lt;br /&gt;
SDBoody: &amp;gt; DETAIL:  Failed system call was semget(1, 17, 03600).&lt;br /&gt;
 &lt;br /&gt;
gr8feen: so it wanted an id with 17 semaphores...I&#039;d start by making mns = 17*mni and leave mni and mnl set to what they are now and see what it does&lt;br /&gt;
SDBoody: i think mni is plenty high&lt;br /&gt;
SDBoody: ok, more reasonable 17408&lt;br /&gt;
gr8feen: yeah...just change that one and see how it goes..&lt;br /&gt;
SDBoody: and leave mni alone at 1024?&lt;br /&gt;
gr8feen: yeah...mni id the max number of ids...but if it&#039;s trying to get something like 17 per id, your going to hit mns before you hit anything else&lt;br /&gt;
SDBoody: right, but doesn&#039;t hurt to have it that high (assuming)&lt;br /&gt;
&lt;br /&gt;
gr8feen: not really...  I think those get allocated out of ram that you cant page out, but it&#039;s still such a small amount that it really doesn&#039;t matter&lt;br /&gt;
SDBoody: looks like that worked, thx!&lt;br /&gt;
gr8feen: cool&lt;br /&gt;
&lt;br /&gt;
see semaphores: ipcs -a -s&amp;lt;/pre&amp;gt;&lt;br /&gt;
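The sizing rule from that conversation reduces to one line of arithmetic. A sketch, assuming (as in the semget error above) that postgres asks for 17 semaphores per id:&lt;br /&gt;

```shell
semmni=1024                     # kern.ipc.semmni: max semaphore ids
per_id=17                       # semaphores requested per semget() call
semmns=$((per_id * semmni))     # kern.ipc.semmns: total semaphores
echo "kern.ipc.semmns=$semmns"  # prints kern.ipc.semmns=17408
```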
&lt;br /&gt;
= FreeBSD VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
These files are located in /usr/local/jail/rc.d and /usr/local/jail/bin&lt;br /&gt;
&lt;br /&gt;
== jailmake ==&lt;br /&gt;
&lt;br /&gt;
Applies to 7.x+ &lt;br /&gt;
On older systems syntax differs, run jailmake once to see.&lt;br /&gt;
&lt;br /&gt;
Note: this procedure differs on mx2 which is 7.x but still uses gvinum&lt;br /&gt;
&lt;br /&gt;
#	run js to figure out which md’s are in use, which disk has enough space, IP to put it on&lt;br /&gt;
#	use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
#	copy over dir, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;Usage: jailmake IP[,IP] CID disk[1|2|3] md# hostname shorthost ipfw# email [size in GB]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex: &lt;br /&gt;
&lt;br /&gt;
 Jail2# jailmake 69.55.234.66 col01334 3 97 vps.bsd.it vps 1334 fb@bsd.it&lt;br /&gt;
&lt;br /&gt;
== jailps ==&lt;br /&gt;
 jailps [hostname]&lt;br /&gt;
DEPRECATED FOR jps: displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname of the jail you wish to query. If you don’t &lt;br /&gt;
supply an argument, all processes on the machine are listed and grouped by jail. &lt;br /&gt;
&lt;br /&gt;
== jps ==&lt;br /&gt;
 jps [hostname]&lt;br /&gt;
displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname or ID of the jail you wish to query. &lt;br /&gt;
&lt;br /&gt;
== jailkill ==&lt;br /&gt;
 jailkill &amp;lt;hostname&amp;gt;&lt;br /&gt;
stops all process running in a jail.&lt;br /&gt;
&lt;br /&gt;
You can also run:&lt;br /&gt;
 jailkill &amp;lt;JID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== problems ===&lt;br /&gt;
Occasionally you will hit an issue where a jail will not kill off:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailkill www.domain.com&lt;br /&gt;
www.domain.com .. killed: none&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
Because no processes are running under that hostname.  You cannot use jailps.pl either:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailps www.domain.com&lt;br /&gt;
www.domain.com doesn’t exist on this server&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reasons for this are usually:&lt;br /&gt;
* the jail is no longer running&lt;br /&gt;
&lt;br /&gt;
* the jail&#039;s hostname has changed&lt;br /&gt;
In this case, &lt;br /&gt;
&lt;br /&gt;
&amp;gt;=6.x: run a &amp;lt;tt&amp;gt;jls|grep &amp;lt;jail&#039;s IP&amp;gt;&amp;lt;/tt&amp;gt; to find the correct hostname, then update the quad file, then kill the jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;6.x: the first step is to cat their /etc/rc.conf file to see if you can tell what they set the new hostname to.  This very often works.  For example:&lt;br /&gt;
&lt;br /&gt;
 cat /mnt/data2/198.78.65.136-col00261-DIR/etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
But maybe they set the hostname with the hostname command, and the original hostname is still in /etc/rc.conf.&lt;br /&gt;
&lt;br /&gt;
The welcome email clearly states that they should tell us if they change their hostname, so there is no problem in just emailing them and asking them what they set the new hostname to.&lt;br /&gt;
&lt;br /&gt;
Once you know the new hostname OR if a customer simply emails to inform you that they have set the hostname to something different, you need to edit the quad and safe files that their system is in to input the new hostname.&lt;br /&gt;
&lt;br /&gt;
However, if push comes to shove and you cannot find out the hostname from them or from their system, then you need to start doing some detective work.&lt;br /&gt;
&lt;br /&gt;
The easiest thing to do is run jailps looking for a hostname similar to their original hostname. Or you could get into the /bin/sh shell by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then looking at every hostname of every process:&lt;br /&gt;
&lt;br /&gt;
 for f in `ls /proc` ; do cat /proc/$f/status ; done&lt;br /&gt;
&lt;br /&gt;
and scanning for a hostname that is either similar to their original hostname, or that you don&#039;t see in any of the quad safe files.&lt;br /&gt;
&lt;br /&gt;
This is very brute force though, and it is possible that catting every file in /proc is dangerous - I don&#039;t recommend it.  A better thing would be to identify any processes that you know belong to this system – perhaps the reason you are trying to find this system is because they are running something bad - and just catting the status from only that PID.&lt;br /&gt;
&lt;br /&gt;
Somewhere there’s a jail where there may be 2 systems named www.  Look at /etc/rc.conf and make sure they’re both really www. If they are, jailkill www, then jailps www to make sure it’s not running.  Then immediately restart the other one under its fqdn (as found from a rev nslookup)&lt;br /&gt;
&lt;br /&gt;
* on &amp;gt;=6.x the hostname may not yet be hashed:&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9 /# jls&lt;br /&gt;
 JID Hostname                    Path                                  IP Address(es)&lt;br /&gt;
   1 bitnet.dgate.org            /mnt/data1/69.55.232.50-col02094-DIR  69.55.232.50&lt;br /&gt;
   2 ns3.hctc.net                /mnt/data1/69.55.234.52-col01925-DIR  69.55.234.52&lt;br /&gt;
   3 bsd1                        /mnt/data1/69.55.232.44-col00155-DIR  69.55.232.44&lt;br /&gt;
   4 let2.bbag.org               /mnt/data1/69.55.230.92-col00202-DIR  69.55.230.92&lt;br /&gt;
   5 post.org                    /mnt/data2/69.55.232.51-col02095-DIR  69.55.232.51 ...&lt;br /&gt;
   6 ns2                         /mnt/data1/69.55.232.47-col01506-DIR  69.55.232.47 ...&lt;br /&gt;
   7 arlen.server.net            /mnt/data1/69.55.232.52-col01171-DIR  69.55.232.52&lt;br /&gt;
   8 deskfood.com                /mnt/data1/69.55.232.71-col00419-DIR  69.55.232.71&lt;br /&gt;
   9 mirage.confluentforms.com   /mnt/data1/69.55.232.54-col02105-DIR  69.55.232.54 ...&lt;br /&gt;
  10 beachmember.com             /mnt/data1/69.55.232.59-col02107-DIR  69.55.232.59&lt;br /&gt;
  11 www.agottem.com             /mnt/data1/69.55.232.60-col02109-DIR  69.55.232.60&lt;br /&gt;
  12 sdhobbit.myglance.org       /mnt/data1/69.55.236.82-col01708-DIR  69.55.236.82&lt;br /&gt;
  13 ns1.jnielsen.net            /mnt/data1/69.55.234.48-col00204-DIR  69.55.234.48 ...&lt;br /&gt;
  14 ymt.rollingegg.net          /mnt/data2/69.55.236.71-col01678-DIR  69.55.236.71&lt;br /&gt;
  15 verse.unixlore.net          /mnt/data1/69.55.232.58-col02131-DIR  69.55.232.58&lt;br /&gt;
  16 smcc-mail.org               /mnt/data2/69.55.232.68-col02144-DIR  69.55.232.68&lt;br /&gt;
  17 kasoutsuki.w4jdh.net        /mnt/data2/69.55.232.46-col02147-DIR  69.55.232.46&lt;br /&gt;
  18 dili.thium.net              /mnt/data2/69.55.232.80-col01901-DIR  69.55.232.80&lt;br /&gt;
  20 www.tekmarsis.com           /mnt/data2/69.55.232.66-col02155-DIR  69.55.232.66&lt;br /&gt;
  21 vps.yoxel.net               /mnt/data2/69.55.236.67-col01673-DIR  69.55.236.67&lt;br /&gt;
  22 smitty.twitalertz.com       /mnt/data2/69.55.232.84-col02153-DIR  69.55.232.84&lt;br /&gt;
  23 deliver4.klatha.com         /mnt/data2/69.55.232.67-col02160-DIR  69.55.232.67&lt;br /&gt;
  24 nideffer.com                /mnt/data2/69.55.232.65-col00412-DIR  69.55.232.65&lt;br /&gt;
  25 usa.hanyuan.com             /mnt/data2/69.55.232.57-col02163-DIR  69.55.232.57&lt;br /&gt;
  26 daifuku.ppbh.com            /mnt/data2/69.55.236.91-col01720-DIR  69.55.236.91&lt;br /&gt;
  27 collins.greencape.net       /mnt/data2/69.55.232.83-col01294-DIR  69.55.232.83&lt;br /&gt;
  28 ragebox.com                 /mnt/data2/69.55.230.104-col01278-DIR 69.55.230.104&lt;br /&gt;
  29 outside.mt.net              /mnt/data2/69.55.232.72-col02166-DIR  69.55.232.72&lt;br /&gt;
  30 vps.payneful.ca             /mnt/data2/69.55.234.98-col01999-DIR  69.55.234.98&lt;br /&gt;
  31 higgins                     /mnt/data2/69.55.232.87-col02165-DIR  69.55.232.87 ...&lt;br /&gt;
  32 ozymandius                  /mnt/data2/69.55.228.96-col01233-DIR  69.55.228.96&lt;br /&gt;
  33 trusted.realtors.org        /mnt/data2/69.55.238.72-col02170-DIR  69.55.238.72&lt;br /&gt;
  34 jc1.flanderous.com          /mnt/data2/69.55.239.22-col01504-DIR  69.55.239.22&lt;br /&gt;
  36 guppylog.com                /mnt/data2/69.55.238.73-col00036-DIR  69.55.238.73&lt;br /&gt;
  40 haliohost.com               /mnt/data2/69.55.234.41-col01916-DIR  69.55.234.41 ...&lt;br /&gt;
  41 satyr.jorge.cc              /mnt/data1/69.55.232.70-col01963-DIR  69.55.232.70&lt;br /&gt;
jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
ERROR: jail_: jail &amp;quot;satyr,jorge,cc&amp;quot; not found&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note how it&#039;s saying &amp;lt;tt&amp;gt;satyr,jorge,cc&amp;lt;/tt&amp;gt; is not found, and not &amp;lt;tt&amp;gt;satyr.jorge.cc&amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The jail subsystem tracks jails using comma-delimited hostnames. That name mapping is rebuilt every few hours from cron:&lt;br /&gt;
&lt;br /&gt;
 jail9 /# crontab -l&lt;br /&gt;
 0 0,6,12,18 * * * /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
So if we run this manually:&lt;br /&gt;
 jail9 /# /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
Then kill the jail:&lt;br /&gt;
 jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
 successfully killed: satyr,jorge,cc&lt;br /&gt;
&lt;br /&gt;
It worked.&lt;br /&gt;
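The dot-to-comma mapping itself is trivial to reproduce; this sketch shows what the name translation amounts to (how sync_jail_names does it internally is an assumption):&lt;br /&gt;

```shell
# the jail subsystem stores hostnames with dots replaced by commas
hostname="satyr.jorge.cc"
jailname=$(echo "$hostname" | tr '.' ',')
echo "$jailname"   # prints satyr,jorge,cc
```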
&lt;br /&gt;
&lt;br /&gt;
If you ever see this when trying to kill a jail:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# jailkill e-scribe.com&lt;br /&gt;
killing JID: 6 hostname: e-scribe.com&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailkill|jailkill]]&amp;lt;/tt&amp;gt; probably got lost trying to kill off the jail. Just ctrl-c the jailkill process, then run a jailps on the hostname, and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; any process which is still running. Keep running jailps and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; till all processes are gone.&lt;br /&gt;
&lt;br /&gt;
== jailpsall ==&lt;br /&gt;
 jailpsall&lt;br /&gt;
will run a jailps on all jails configured in the quad files (this is different from&lt;br /&gt;
jailps with no arguments as it won’t help you find a “hidden” system)&lt;br /&gt;
&lt;br /&gt;
== jailpsw ==&lt;br /&gt;
 jailpsw&lt;br /&gt;
will run a jailps with an extra -w to provide wider output&lt;br /&gt;
&lt;br /&gt;
== jt (&amp;gt;=7.x) ==&lt;br /&gt;
 jt&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;gt;=7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
a wrapper for top displaying processes on the server and which jail owns them. Constantly updates, like top. &lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;lt;7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== stopjail ==&lt;br /&gt;
 stopjail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will jailkill, umount and vnconfig -u a jail. If passed an optional 2nd&lt;br /&gt;
argument, it will not exit before umounting and un-vnconfig’ing in the event&lt;br /&gt;
jailkill returns no processes killed. This is useful if you just want to umount&lt;br /&gt;
and vnconfig -u a jail you’ve already killed. It is intelligent in that it won’t &lt;br /&gt;
try to umount or vnconfig -u if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== startjail ==&lt;br /&gt;
 startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
this will run vnconfig, do the mounts (including linprocfs and null-mounts), and start the jail.&lt;br /&gt;
Essentially, it reads the jail’s relevant block from the right quad file and executes it.&lt;br /&gt;
It is intelligent in that it won’t try to mount or vnconfig if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== jpid ==&lt;br /&gt;
 jpid &amp;lt;pid&amp;gt;&lt;br /&gt;
displays information about a process – including which jail owns it.&lt;br /&gt;
It’s the equivalent of running cat /proc/&amp;lt;pid&amp;gt;/status&lt;br /&gt;
&lt;br /&gt;
== canceljail ==&lt;br /&gt;
 canceljail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will stop a jail (the equivalent of stopjail), check for backups (offer to remove them &lt;br /&gt;
from the backup server and the backup.config), rename the vnfile, remove the dir, and &lt;br /&gt;
edit quad/safe. If passed an optional 2nd argument, it will not exit upon failing to kill&lt;br /&gt;
any processes owned by the jail. This is useful if you just want to cancel a jail which &lt;br /&gt;
is already stopped.&lt;br /&gt;
&lt;br /&gt;
== jls ==&lt;br /&gt;
 jls [-v]&lt;br /&gt;
Lists all jails running:&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF IP Address      Hostname                     Path&lt;br /&gt;
 101  135 69.55.224.148   mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#REF is the number of references or procs(?) running&lt;br /&gt;
&lt;br /&gt;
Running with -v will give you all IPs assigned to each jail (7.2 up)&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF Hostname                     Path                                  IP Address(es)&lt;br /&gt;
 101  139 mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR 69.55.224.148 69.55.234.85&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== startalljails ==&lt;br /&gt;
 startalljails&lt;br /&gt;
7.2+ only. This will parse through quad1 and start all jails. It utilizes lockfiles so it won’t try to start a jail more than once; therefore multiple instances can be running in parallel without fear of starting a jail twice. If a jail startup gets stuck, you can ^C without fear of killing the script. IMPORTANT: before running startalljails you should make sure you ran preboot once, as it clears out all the lockfiles and enables startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== aaccheck.sh ==&lt;br /&gt;
 aaccheck.sh&lt;br /&gt;
displays the output of container list and task list from aaccli&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update jail scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== buildsafe ==&lt;br /&gt;
 buildsafe&lt;br /&gt;
creates safe files based on quads (automatically removing the fsck’s). This will destructively overwrite safe files&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be set up as a cronjob to watch processes on a jail when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== checkprio.pl ==&lt;br /&gt;
 checkprio.pl&lt;br /&gt;
will look for any process (other than the current shell’s csh, sh, sshd procs) with a non-normal priority and normalize it&lt;br /&gt;
&lt;br /&gt;
== diskusagemon == &lt;br /&gt;
 diskusagemon &amp;lt;mount point&amp;gt; &amp;lt;1k blocks&amp;gt;&lt;br /&gt;
watches a mount point’s disk use, when it reaches the level specified in the 2nd argument,&lt;br /&gt;
it exits. This is useful when doing a restore and you want to be paged as it’s nearing completion.&lt;br /&gt;
Best used as: &amp;lt;tt&amp;gt;diskusagemon /asd/asd 1234; pagexxx&amp;lt;/tt&amp;gt;&lt;br /&gt;
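A minimal polling loop in the spirit of diskusagemon might look like this (hedged sketch: the df column position is an assumption, and the real tool may differ):&lt;br /&gt;

```shell
# Sketch: block until a mount point's used 1K blocks (3rd column of
# `df -k`) reach a threshold, then return so a chained pager command
# can fire, e.g.: wait_for_usage /asd/asd 1234; pagexxx
wait_for_usage() {
  mnt="$1"; threshold="$2"
  while :; do
    used=$(df -k "$mnt" | awk 'NR==2 {print $3}')
    [ "$used" -ge "$threshold" ] && return 0
    sleep 60
  done
}
```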
&lt;br /&gt;
== dumprestore ==&lt;br /&gt;
 dumprestore &amp;lt;dumpfile&amp;gt;&lt;br /&gt;
this is a perl expect script which automatically enters ‘1’ and ‘y’. It seems to cause restore to fail&lt;br /&gt;
to set owner permissions on large restores.&lt;br /&gt;
&lt;br /&gt;
== g ==&lt;br /&gt;
 g &amp;lt;search&amp;gt;&lt;br /&gt;
greps the quad/safe files for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gather.pl ==&lt;br /&gt;
 gather.pl&lt;br /&gt;
gathers up data about jails configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== ipfwbackup ==&lt;br /&gt;
 ipfwbackup&lt;br /&gt;
writes ipfw traffic count data to a logfile&lt;br /&gt;
&lt;br /&gt;
== ipfwreset ==&lt;br /&gt;
 ipfwreset&lt;br /&gt;
writes ipfw traffic count data to a logfile and resets counters to 0&lt;br /&gt;
&lt;br /&gt;
== js ==&lt;br /&gt;
 js&lt;br /&gt;
output varies by OS version, but generally provides information about the base jail:&lt;br /&gt;
* which vn’s are in use&lt;br /&gt;
* disk usage&lt;br /&gt;
* info about the contents of quads&lt;br /&gt;
* the # of inodes represented by the jails contained in the group (133.2 in the example below), and how many jails per data mount, as well as subtotals&lt;br /&gt;
* ips bound to the base machine but not in use by a jail&lt;br /&gt;
* free gvinum volumes, or unused vn’s or used md’s&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/local/jail/rc.d/quad1:&lt;br /&gt;
        /mnt/data1 133.2 (1)&lt;br /&gt;
        /mnt/data2 1040.5 (7)&lt;br /&gt;
        total 1173.7 (8)&lt;br /&gt;
/usr/local/jail/rc.d/quad2:&lt;br /&gt;
        /mnt/data1 983.4 (6)&lt;br /&gt;
        total 983.4 (6)&lt;br /&gt;
/usr/local/jail/rc.d/quad3:&lt;br /&gt;
        /mnt/data1 693.4 (4)&lt;br /&gt;
        /mnt/data2 371.6 (3)&lt;br /&gt;
        total 1065 (7)&lt;br /&gt;
/usr/local/jail/rc.d/quad4:&lt;br /&gt;
        /mnt/data1 466.6 (3)&lt;br /&gt;
        /mnt/data2 882.2 (5)&lt;br /&gt;
        total 1348.8 (8)&lt;br /&gt;
/mnt/data1: 2276.6 (14)&lt;br /&gt;
/mnt/data2: 2294.3 (15)&lt;br /&gt;
&lt;br /&gt;
Available IPs:&lt;br /&gt;
69.55.230.11 69.55.230.13 69.55.228.200&lt;br /&gt;
&lt;br /&gt;
Available volumes:&lt;br /&gt;
v78 /mnt/data2 2G&lt;br /&gt;
v79 /mnt/data2 2G&lt;br /&gt;
v80 /mnt/data2 2G&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg  - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== makevirginjail ==&lt;br /&gt;
 makevirginjail&lt;br /&gt;
Only on some systems, makes an empty jail (doesn&#039;t do restore step)&lt;br /&gt;
&lt;br /&gt;
== mb == &lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== notify.sh ==&lt;br /&gt;
 notify.sh&lt;br /&gt;
emails reboot@johncompanies.com – intended to be called at boot time to alert us to a machine which panics and reboots and isn’t caught by bb or castle.&lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatch ==&lt;br /&gt;
 orphanedbackupwatch&lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to delete them&lt;br /&gt;
&lt;br /&gt;
== postboot ==&lt;br /&gt;
 postboot&lt;br /&gt;
to be run after a machine reboot and quad/safe’s are done executing. It will:&lt;br /&gt;
* do chmod 666 on each jail’s /dev/null&lt;br /&gt;
* add ipfw counts&lt;br /&gt;
* run jailpsall (so you can see if a configured jail isn’t running)&lt;br /&gt;
&lt;br /&gt;
== preboot ==&lt;br /&gt;
 preboot&lt;br /&gt;
to be run before running quad/safe – checks for misconfigurations: &lt;br /&gt;
* a jail configured in a quad but not a safe&lt;br /&gt;
* a jail is listed more than once in a quad&lt;br /&gt;
* the ip assigned to a jail isn’t configured on the machine&lt;br /&gt;
* alias numbering skips in the rc.conf (resulting in the above)&lt;br /&gt;
* orphaned vnfile&#039;s that aren&#039;t mentioned in a quad/safe&lt;br /&gt;
* ip mismatches between dir/vnfile name and the jail’s ip&lt;br /&gt;
* dir/vnfiles&#039;s in quad/safe that don’t exist &lt;br /&gt;
&lt;br /&gt;
== quadanalyze.pl ==&lt;br /&gt;
 quadanalyze.pl&lt;br /&gt;
called by js, produces the info (seen above with js explanation) about the contents of quad (inode count, # of jails, etc.)&lt;br /&gt;
&lt;br /&gt;
== rsync.backup ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== taskdone ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was executed as the subject&lt;br /&gt;
&lt;br /&gt;
== topten ==&lt;br /&gt;
 topten&lt;br /&gt;
summarizes the top 10 traffic users (called by ipfwreset)&lt;br /&gt;
&lt;br /&gt;
== trafficgather.pl ==&lt;br /&gt;
 trafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by jail to support@johncompanies.com and payments@johncompanies.com. Optional arguments are year and month (must be in the past). If not passed, it assumes last month. Relies on traffic logs created by ipfwreset and ipfwbackup&lt;br /&gt;
&lt;br /&gt;
== trafficwatch.pl ==&lt;br /&gt;
 trafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a jail reaches the warning level (35G) and the limit (40G). We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== trafstats ==&lt;br /&gt;
 trafstats&lt;br /&gt;
writes ipfw traffic usage info by jail to a file called jc_traffic_dump in each jail’s / dir&lt;br /&gt;
&lt;br /&gt;
== truncate_jailmake ==&lt;br /&gt;
 truncate_jailmake&lt;br /&gt;
a version of jailmake which creates truncated vnfiles.&lt;br /&gt;
&lt;br /&gt;
== vb ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/bin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs (freebsd) ==&lt;br /&gt;
 vs&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/safe&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vq (freebsd) ==&lt;br /&gt;
 vq&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/quad&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dumpremote ==&lt;br /&gt;
 dumpremote &amp;lt;user@machine&amp;gt; &amp;lt;/remote/location/file-dump&amp;gt; &amp;lt;vnX&amp;gt;&lt;br /&gt;
ex: dumpremote user@10.1.4.117 /mnt/data3/remote.echoditto.com-dump 7&lt;br /&gt;
this will dump a vn filesystem to a remote machine and location&lt;br /&gt;
&lt;br /&gt;
== oversellcheck ==&lt;br /&gt;
 oversellcheck&lt;br /&gt;
displays how much a disk is oversold or undersold taking into account truncated vn files. Only for use on 4.x systems&lt;br /&gt;
&lt;br /&gt;
== mvbackups (freebsd) ==&lt;br /&gt;
 mvbackups &amp;lt;dir&amp;gt; (1.1.1.1-col00001-DIR) &amp;lt;target_machine&amp;gt; (jail1) &amp;lt;target_dir&amp;gt; (data1)&lt;br /&gt;
moves backups from one location to another on the backup server, and provides you with option to remove entries from current backup.config, and simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== jailnice ==&lt;br /&gt;
 jailnice &amp;lt;hostname&amp;gt;&lt;br /&gt;
applies &amp;lt;tt&amp;gt;renice 19 [PID]&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;rtprio 31 -[PID]&amp;lt;/tt&amp;gt; to each process in the given jail&lt;br /&gt;
&lt;br /&gt;
== dumpremoterestore ==&lt;br /&gt;
 dumpremoterestore &amp;lt;device&amp;gt; &amp;lt;ip of target machine&amp;gt; &amp;lt;dir on target machine&amp;gt;&lt;br /&gt;
ex: dumpremoterestore /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&lt;br /&gt;
dumps a device and restores it to a directory on a remote machine. Requires that you enable root ssh on the &lt;br /&gt;
remote machine.&lt;br /&gt;
&lt;br /&gt;
== psj ==&lt;br /&gt;
 psj&lt;br /&gt;
shows just the procs running on the base system – a ps auxw but without jail’d procs present&lt;br /&gt;
&lt;br /&gt;
== perc5iraidchk ==&lt;br /&gt;
 perc5iraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2950 systems with Perc5/6 controllers&lt;br /&gt;
&lt;br /&gt;
== perc4eraidchk ==&lt;br /&gt;
 perc4eraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2850 systems with Perc4e/Di controllers&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS =&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vm) ==&lt;br /&gt;
&lt;br /&gt;
This applies only to new virts &amp;gt;= 4.x&lt;br /&gt;
&lt;br /&gt;
grab ip from ipmap (if opened from the pending cust screen it should take you to the right block). You can also run vzlist -a to see what block is in use, generally. Try to find an IP that&#039;s in the same class C block as IPs already on the box.&lt;br /&gt;
&lt;br /&gt;
1. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
  also, ping the IP address.&lt;br /&gt;
&lt;br /&gt;
2. put CT on whichever partition has more space&lt;br /&gt;
  df -h&lt;br /&gt;
&lt;br /&gt;
3.  vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; ) &lt;br /&gt;
 vm col00009 69.55.230.238 centos.testdave.com /vz1 dsmith@johncompanies.com centos-6-x86_64 LM&lt;br /&gt;
&lt;br /&gt;
4. copy veid, dir, ip and password to pending customer screen. activate customer&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vemakexxx) ==&lt;br /&gt;
&lt;br /&gt;
This applies to older virts with old templates. This should probably not be used at all anymore.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. look thru hist for ip&lt;br /&gt;
&lt;br /&gt;
2. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
&lt;br /&gt;
3. put ve on whichever partition has more space&lt;br /&gt;
 vemakerh9 &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; &amp;lt;hostname&amp;gt; &amp;lt;mount&amp;gt; &amp;lt;email&amp;gt; [gb disk]; &amp;lt;256|384|512&amp;gt; &amp;lt;veid&amp;gt;&lt;br /&gt;
 vemakerh9 866 69.55.226.109 ngentu.com /vz1 ayo@ngantu.com,asd@asd.com 5; 256 866&lt;br /&gt;
&lt;br /&gt;
4. copy (veid), dir, and ip to pending customer screen (pass set to p455agfa)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: We use VEID (Virtual Environment ID) and CTID (Container ID) interchangeably. Similarly, VE and CT. They mean the same thing.&lt;br /&gt;
VZPP = VirtuoZzo Power Panel (the control panel for each CT)&lt;br /&gt;
&lt;br /&gt;
All linux systems exist in /vz, /vz1 or /vz2 - since each linux machine holds roughly 60-90 customers, there will be roughly 30-45 in each partition.&lt;br /&gt;
&lt;br /&gt;
The actual filesystem of the system in question is in:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/private/(VEID)&lt;br /&gt;
&lt;br /&gt;
Where VEID is the identifier for that system - an all-numeric string larger than 100.&lt;br /&gt;
&lt;br /&gt;
The actual mounted and running systems are in the corresponding:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/root/(VEID)&lt;br /&gt;
&lt;br /&gt;
But we rarely interact with any system from this mount point.&lt;br /&gt;
&lt;br /&gt;
You should never need to touch the root portion of their system – however you can traverse their filesystem by going to &amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/root&amp;lt;/tt&amp;gt; (&amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/fs/root&amp;lt;/tt&amp;gt; on 4.x systems); the root of their filesystem is in that directory, and their entire system is underneath that.&lt;br /&gt;
&lt;br /&gt;
Every VE has a startup script in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts&amp;lt;/tt&amp;gt;  (which is symlinked as &amp;lt;tt&amp;gt;/vzconf&amp;lt;/tt&amp;gt; on all systems) - the VE startup script is simply named &amp;lt;tt&amp;gt;(VEID).conf&amp;lt;/tt&amp;gt; - it contains all the system parameters for that VE:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Configuration file generated by vzsplit for 60 VE&lt;br /&gt;
# on HN with total amount of physical mem 2011 Mb&lt;br /&gt;
&lt;br /&gt;
VERSION=&amp;quot;2&amp;quot;&lt;br /&gt;
CLASSID=&amp;quot;2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ONBOOT=&amp;quot;yes&amp;quot;&lt;br /&gt;
&lt;br /&gt;
KMEMSIZE=&amp;quot;8100000:8200000&amp;quot;&lt;br /&gt;
LOCKEDPAGES=&amp;quot;322:322&amp;quot;&lt;br /&gt;
PRIVVMPAGES=&amp;quot;610000:615000&amp;quot;&lt;br /&gt;
SHMPAGES=&amp;quot;33000:34500&amp;quot;&lt;br /&gt;
NUMPROC=&amp;quot;410:415&amp;quot;&lt;br /&gt;
PHYSPAGES=&amp;quot;0:2147483647&amp;quot;&lt;br /&gt;
VMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
OOMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
NUMTCPSOCK=&amp;quot;1210:1215&amp;quot;&lt;br /&gt;
NUMFLOCK=&amp;quot;107:117&amp;quot;&lt;br /&gt;
NUMPTY=&amp;quot;19:19&amp;quot;&lt;br /&gt;
NUMSIGINFO=&amp;quot;274:274&amp;quot;&lt;br /&gt;
TCPSNDBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
TCPRCVBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
OTHERSOCKBUF=&amp;quot;900000:950000&amp;quot;&lt;br /&gt;
DGRAMRCVBUF=&amp;quot;200000:200000&amp;quot;&lt;br /&gt;
NUMOTHERSOCK=&amp;quot;650:660&amp;quot;&lt;br /&gt;
DCACHE=&amp;quot;786432:818029&amp;quot;&lt;br /&gt;
NUMFILE=&amp;quot;7500:7600&amp;quot;&lt;br /&gt;
AVNUMPROC=&amp;quot;51:51&amp;quot;&lt;br /&gt;
IPTENTRIES=&amp;quot;155:155&amp;quot;&lt;br /&gt;
DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
DISKINODES=&amp;quot;400000:420000&amp;quot;&lt;br /&gt;
CPUUNITS=&amp;quot;1412&amp;quot;&lt;br /&gt;
QUOTAUGIDLIMIT=&amp;quot;2000&amp;quot;&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/636&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/636&amp;quot;&lt;br /&gt;
NAMESERVER=&amp;quot;69.55.225.225 69.55.230.3&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;vzredhat-7.3/20030305&amp;quot;&lt;br /&gt;
VE_TYPE=&amp;quot;regular&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.225.229&amp;quot;&lt;br /&gt;
HOSTNAME=&amp;quot;textengine.net&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the hostname is set here, the disk space is set here, the number of inodes, the number of files that can be open, the number of tcp sockets, etc. - all are set here.&lt;br /&gt;
&lt;br /&gt;
In fact, everything that can be set on this customer system is set in this conf file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All interaction with the customer system is done with the VEID.  You start the system by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl start 999&lt;br /&gt;
&lt;br /&gt;
You stop it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl stop 999&lt;br /&gt;
&lt;br /&gt;
You execute commands in it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl exec 999 df -k&lt;br /&gt;
&lt;br /&gt;
You enter into it, via a root-shell backdoor with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and you set parameters for the system, while it is still running, with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; is the most commonly used command - we have aliased &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; since we use it so often. We’ll continue to use &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; in our examples, but feel free to use just &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say the user wants more diskspace.  You can cat their conf file and see:&lt;br /&gt;
&lt;br /&gt;
 DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So right now they have 4gigs of space.  You can then change it to 6 with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
IMPORTANT:  all issuances of the vzctl set command need to end with &amp;lt;tt&amp;gt;--save&amp;lt;/tt&amp;gt; - if they don&#039;t, the setting will be set, but it will not be saved to the conf file, and they will not have those settings next time they boot.&lt;br /&gt;
&lt;br /&gt;
All of the tunables in the conf file can be set with the vzctl set command.  Note that in the conf file, and on the vzctl set command line, we always issue two numbers separated by a colon - that is because we are setting the hard and soft limits.  Always set the hard limit slightly above the soft limit, as you see it is in the conf file for all those settings.&lt;br /&gt;
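As a sanity check on the numbers: DISKSPACE is in 1K blocks, so 4194304 is exactly 4 GB. A sketch of how a soft:hard pair could be generated (the ~10% headroom happens to reproduce the sample conf value, but it is an illustration, not a vzctl rule):&lt;br /&gt;

```shell
# Sketch: turn a size in GB into a DISKSPACE soft:hard pair (1K blocks)
# with roughly 10% headroom on the hard limit. Illustrative only.
diskspace_pair() {
  gb="$1"
  soft=$((gb * 1024 * 1024))   # 1 GB = 1048576 1K blocks
  hard=$((soft + soft / 10))   # ~10% headroom
  echo "${soft}:${hard}"
}

diskspace_pair 4   # -> 4194304:4613734
```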
&lt;br /&gt;
There are also things you can set with `&amp;lt;tt&amp;gt;vzctl set&amp;lt;/tt&amp;gt;` that are not in the conf file as settings, per se.  For instance, you can add IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --save&lt;br /&gt;
&lt;br /&gt;
or multiple IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --ipadd 10.10.20.30 --save&lt;br /&gt;
&lt;br /&gt;
or change the hostname:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --hostname www.example.com --save&lt;br /&gt;
&lt;br /&gt;
You can even set the nameservers:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --nameserver 198.78.66.4 --nameserver 198.78.70.180 --save&lt;br /&gt;
&lt;br /&gt;
Although you probably will never do that.&lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --disabled yes --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (&amp;lt;=3.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --onboot=no --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from using his control panel:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --offline_management=no --save &lt;br /&gt;
&lt;br /&gt;
You can suspend a VPS, so it can be resumed in the same state it was in when it was stopped (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl suspend 999&lt;br /&gt;
&lt;br /&gt;
and to resume it:&lt;br /&gt;
&lt;br /&gt;
 vzctl resume 999&lt;br /&gt;
&lt;br /&gt;
to see who owns process:&lt;br /&gt;
 vzpid &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to mount up an unmounted ve:&lt;br /&gt;
 vzctl mount 827&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see network stats for CT&#039;s:&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One thing that sometimes comes up on older systems that we created with smaller defaults is that the system would run out of inodes.  The user will email and say they cannot create any more files or grow any files larger, but they will also say that they are not out of diskspace ... they are running:&lt;br /&gt;
&lt;br /&gt;
 df -k&lt;br /&gt;
&lt;br /&gt;
and seeing how much space is free - and they are not out of space.  They are most likely out of inodes - which they would see by running:&lt;br /&gt;
&lt;br /&gt;
 df -i&lt;br /&gt;
&lt;br /&gt;
So, the first thing you should do is enter their system with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and run:  &amp;lt;tt&amp;gt;df -i&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to confirm your theory.  Then exit their system.  Then simply cat their conf file and see what their inodes are set to (probably 200000:200000, since that was the old default on the older systems) and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskinodes 400000:400000 --save&lt;br /&gt;
&lt;br /&gt;
If they are not out of inodes, then a good possibility is that they have maxed out their numfile configuration variable, which controls how many files they can have in their system.  The current default is 7500 (which nobody has ever hit), but the old default was as low as 2000, so you would run something like:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --numfile 7500:7500 --save&lt;br /&gt;
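The df -i check can be scripted; a hedged sketch (df&#039;s column layout varies by OS, so the IUse% field position here is an assumption):&lt;br /&gt;

```shell
# Sketch: read `df -i`-style output on stdin and exit 0 if any
# filesystem's IUse% (assumed 5th column) has hit 100%.
inodes_exhausted() {
  awk 'NR>1 { pct=$5; gsub(/%/, "", pct); if (pct+0 >= 100) found=1 }
       END { exit(found ? 0 : 1) }'
}

printf 'Filesystem Inodes IUsed IFree IUse%% Mounted\n/dev/sda1 200000 200000 0 100%% /\n' \
  | inodes_exhausted && echo "out of inodes"
```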
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You cannot start or stop a VE if your pwd is its private (/vz/private/999) or root (/vz/root/999) directories, or anywhere below them.&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (linux) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all ve’s back up as soon as possible. Note the time; you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages output. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log – these will also likely need to be sent to virtuozzo for interpretation. If the messages are spewing too fast, hit ^O + H to start a screen log dump which you can observe after the machine is rebooted. Additionally, if the machine is responsive, you can get a trace to send to virtuozzo by hooking up a kvm and entering these 3 sequences:&lt;br /&gt;
&amp;lt;pre&amp;gt;alt+print screen+m&lt;br /&gt;
alt+print screen+p&lt;br /&gt;
alt+print screen+t&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If there are no messages, the machine may just be really busy- wait a bit (5-10min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. if you can’t ssh into the DRAC card and issue racadm serveraction hardreset), then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console (&amp;lt;tt&amp;gt;tip virtxx&amp;lt;/tt&amp;gt;) immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts- look out for the RAID card output- does everything look healthy? The output may be scrambled, look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple times during the boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly then you won’t get any console output, 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
=== Start containers/VE&#039;s/VPSs ===&lt;br /&gt;
When the machine begins to start VE’s, it’s safe to leave the console and login via ssh. All virts should be set to auto start all the VEs after a crash. Further, most (newer) virts are set to “fastboot” their VE’s (to find out, do:&lt;br /&gt;
 grep -i fast /etc/sysconfig/vz &lt;br /&gt;
and look for &amp;lt;tt&amp;gt;VZFASTBOOT=yes&amp;lt;/tt&amp;gt;). If this was set prior to the machine’s crash (setting it after the machine boots will not have any effect until the vz service is restarted) it will start each ve as fast as possible, in serial, then go thru each VE (serially), shutting it down, running a vzquota (disk usage) check, and bringing it back up. The benefit is that all VE’s are brought up quickly (within 15min or so depending on the #); the downside is a customer watching closely will notice 2 outages – 1st the machine crash, 2nd their quota check (which will be a much shorter downtime- on the order of a few minutes). &lt;br /&gt;
&lt;br /&gt;
Where “fastboot” is not set to yes (i.e. on quar1), vz will start them consecutively, checking the quotas one at a time, and the 60th VE may not start until an hour or two later - this is not acceptable.&lt;br /&gt;
&lt;br /&gt;
The good news is, if you run vzctl start for a VE that is already started, you will simply get an error: &amp;lt;tt&amp;gt;VE is already started&amp;lt;/tt&amp;gt;.  Further, if you attempt to vzctl start a VE that is in the process of being started, you will simply get an error: unable to lock VE.  So, there is no danger in simply running scripts to start smaller sets of VEs.  If the system is not autostarting, then there is no issue, and even if it does, when it conflicts, one process (yours or the autostart) will lose, and just move on to the next one.&lt;br /&gt;
&lt;br /&gt;
A script has been written to assist with ve starts: [[#startvirt.pl|startvirt.pl]] which will start 6 ve’s at once until there are no more left.  If startvirt.pl  is used on a system where “fastboot” was on,  it will circumvent the fastboot for ve’s started by startvirt.pl – they will go through the complete quota check before starting- therefore this is not advisable when a system has crashed. When a system is booted cleanly, and there&#039;s no need for vzquota checks, then startvirt.pl is safe and advisable to run.&lt;br /&gt;
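The lockfile trick startvirt.pl relies on can be sketched like this (hedged: the start command is stubbed with echo; the real script runs vzctl start and keeps 6 workers going):&lt;br /&gt;

```shell
# Hypothetical sketch of lockfile-guarded VE starts: mkdir is atomic,
# so parallel copies of this logic can never start the same VEID twice.
LOCKDIR=$(mktemp -d)

start_once() {
  veid="$1"
  if mkdir "$LOCKDIR/$veid.lock" 2>/dev/null; then
    echo "starting $veid"    # real script: vzctl start "$veid"
  fi
}

start_once 101   # prints "starting 101"
start_once 101   # lock already held: silently skips
```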
&lt;br /&gt;
=== Make sure all containers are running ===&lt;br /&gt;
You can quickly get a feel for how many ve’s are started by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt4 log]# vs&lt;br /&gt;
VEID 16066 exist mounted running&lt;br /&gt;
VEID 16067 exist mounted running&lt;br /&gt;
VEID 4102 exist mounted running&lt;br /&gt;
VEID 4112 exist mounted running&lt;br /&gt;
VEID 4116 exist mounted running&lt;br /&gt;
VEID 4122 exist mounted running&lt;br /&gt;
VEID 4123 exist mounted running&lt;br /&gt;
VEID 4124 exist mounted running&lt;br /&gt;
VEID 4132 exist mounted running&lt;br /&gt;
VEID 4148 exist mounted running&lt;br /&gt;
VEID 4151 exist mounted running&lt;br /&gt;
VEID 4155 exist mounted running&lt;br /&gt;
VEID 42 exist mounted running&lt;br /&gt;
VEID 432 exist mounted running&lt;br /&gt;
VEID 434 exist mounted running&lt;br /&gt;
VEID 442 exist mounted running&lt;br /&gt;
VEID 450 exist mounted running&lt;br /&gt;
VEID 452 exist mounted running&lt;br /&gt;
VEID 453 exist mounted running&lt;br /&gt;
VEID 454 exist mounted running&lt;br /&gt;
VEID 462 exist mounted running&lt;br /&gt;
VEID 463 exist mounted running&lt;br /&gt;
VEID 464 exist mounted running&lt;br /&gt;
VEID 465 exist mounted running&lt;br /&gt;
VEID 477 exist mounted running&lt;br /&gt;
VEID 484 exist mounted running&lt;br /&gt;
VEID 486 exist mounted running&lt;br /&gt;
VEID 490 exist mounted running&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So to see how many ve’s have started:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep running | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And to see how many haven’t:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep down | wc -l&lt;br /&gt;
     0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And how many we should have running:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
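The three counts above can also be collected in one pass. This helper is a sketch that assumes the `vs` output format shown earlier (the state is the last field of each line):

```shell
# Tally VE states from `vs` output, e.g. "running 39" / "down 2".
vs_summary() {
  awk '{count[$NF]++} END {for (s in count) printf "%s %d\n", s, count[s]}'
}
# On a virt:  vs | vs_summary
```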
&lt;br /&gt;
Another tool you can use to see which VEs have started, among other things, is [[#vzstat|vzstat]]. It will give you CPU, memory, and other stats on each VE and the overall system. It’s a good thing to watch as VEs are starting (note the VENum parameter; it tells you how many have started):&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;4:37pm, up 3 days,  5:31,  1 user, load average: 1.57, 1.68, 1.79&lt;br /&gt;
VENum 40, procs 1705: running 2, sleeping 1694, unint 0, zombie 9, stopped 0&lt;br /&gt;
CPU [ OK ]: VEs  57%, VE0   0%, user   8%, sys   7%, idle  85%, lat(ms) 412/2&lt;br /&gt;
Mem [ OK ]: total 6057MB, free 9MB/54MB (low/high), lat(ms) 0/0&lt;br /&gt;
Swap [ OK ]: tot 6142MB, free 4953MB, in 0.000MB/s, out 0.000MB/s&lt;br /&gt;
Net [ OK ]: tot: in  0.043MB/s  402pkt/s, out  0.382MB/s 4116pkt/s&lt;br /&gt;
Disks [ OK ]: in 0.002MB/s, out 0.000MB/s&lt;br /&gt;
&lt;br /&gt;
  VEID ST    %VM     %KM         PROC    CPU     SOCK FCNT MLAT IP&lt;br /&gt;
     1 OK 1.0/17  0.0/0.4    0/32/256 0.0/0.5 39/1256    0    9 69.55.227.152&lt;br /&gt;
    21 OK 1.3/39  0.1/0.2    0/46/410 0.2/2.8 23/1860    0    6 69.55.239.60&lt;br /&gt;
   133 OK 3.1/39  0.1/0.3    1/34/410 6.3/2.8 98/1860    0    0 69.55.227.147&lt;br /&gt;
   263 OK 2.3/39  0.1/0.2    0/56/410 0.3/2.8 34/1860    0    1 69.55.237.74&lt;br /&gt;
   456 OK  17/39  0.1/0.2   0/100/410 0.1/2.8 48/1860    0   11 69.55.236.65&lt;br /&gt;
   476 OK 0.6/39  0.0/0.2    0/33/410 0.1/2.8 96/1860    0   10 69.55.227.151&lt;br /&gt;
   524 OK 1.8/39  0.1/0.2    0/33/410 0.0/2.8 28/1860    0    0 69.55.227.153&lt;br /&gt;
   594 OK 3.1/39  0.1/0.2    0/45/410 0.0/2.8 87/1860    0    1 69.55.239.40&lt;br /&gt;
   670 OK 7.7/39  0.2/0.3    0/98/410 0.0/2.8 64/1860    0  216 69.55.225.136&lt;br /&gt;
   691 OK 2.0/39  0.1/0.2    0/31/410 0.0/0.7 25/1860    0    1 69.55.234.96&lt;br /&gt;
   744 OK 0.1/17  0.0/0.5    0/10/410 0.0/0.7  7/1860    0    6 69.55.224.253&lt;br /&gt;
   755 OK 1.1/39  0.0/0.2    0/27/410 0.0/2.8 33/1860    0    0 192.168.1.4&lt;br /&gt;
   835 OK 1.1/39  0.0/0.2    0/19/410 0.0/2.8  5/1860    0    0 69.55.227.134&lt;br /&gt;
   856 OK 0.3/39  0.0/0.2    0/13/410 0.0/2.8 16/1860    0    0 69.55.227.137&lt;br /&gt;
   936 OK 3.2/52  0.2/0.4    0/75/410 0.2/0.7 69/1910    0    8 69.55.224.181&lt;br /&gt;
  1020 OK 3.9/39  0.1/0.2    0/60/410 0.1/0.7 55/1860    0    8 69.55.227.52&lt;br /&gt;
  1027 OK 0.3/39  0.0/0.2    0/14/410 0.0/2.8 17/1860    0    0 69.55.227.83&lt;br /&gt;
  1029 OK 1.9/39  0.1/0.2    0/48/410 0.2/2.8 25/1860    0    5 69.55.227.85&lt;br /&gt;
  1032 OK  12/39  0.1/0.4    0/80/410 0.0/2.8 41/1860    0    8 69.55.227.90&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you are all done, run vs one more time to make sure that all the VEs really did get started.&lt;br /&gt;
&lt;br /&gt;
Note the time all VEs are back up, enter that into the crash log entry, and save it.&lt;br /&gt;
&lt;br /&gt;
Occasionally, a VE will not start automatically. The most common reason is that the VE was at its disk limit before the crash and will not start because it is over the limit. To overcome this, set the disk space to the current usage level (the system reports this when the start fails), start the VE, then set the disk space back to the prior level. Lastly, contact the customer to let them know they’re out of disk (or allocate more disk if they&#039;re entitled to more).&lt;br /&gt;
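The recovery steps above can be collected into one helper. This is a hedged sketch: the CTID and sizes in the example are illustrative, and it assumes `vzctl set --diskspace` takes a soft:hard pair as on our 2.x/3.x hosts.

```shell
# Args: CTID, current usage reported by the failed start, and the
# prior soft:hard diskspace pair from the CT's conf file.
recover_disk_limit() {
  ctid="$1"; cur="$2"; old="$3"
  vzctl set "$ctid" --diskspace "${cur}:${cur}" --save &&  # raise to current usage
  vzctl start "$ctid" &&                                   # now it can start
  vzctl set "$ctid" --diskspace "$old" --save              # restore prior limits
}
# e.g. (illustrative values): recover_disk_limit 1234 4300000 4194304:4613734
```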
&lt;br /&gt;
== Hitting performance barriers and fixing them ==&lt;br /&gt;
&lt;br /&gt;
There are multiple modes virtuozzo offers to allocate resources to a VE. We utilize 2: SLM and UBC parameters.&lt;br /&gt;
On our 4.x systems, we use all SLM – it’s simpler to manage and understand. There are a few systems on virt19/18 that may also use SLM. Everything else uses UBC. &lt;br /&gt;
You can tell an SLM VE by:&lt;br /&gt;
&lt;br /&gt;
 SLMMODE=&amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in their conf file. &lt;br /&gt;
&lt;br /&gt;
TODO: detail SLM modes and parameters.&lt;br /&gt;
&lt;br /&gt;
If someone is in SLM mode and they hit memory resource limits, they simply need to upgrade to more memory.&lt;br /&gt;
&lt;br /&gt;
The following applies to everyone else (UBC).&lt;br /&gt;
&lt;br /&gt;
Customers will often email and say that they are getting out-of-memory errors - a common one is &amp;quot;cannot fork&amp;quot;. Basically, any time you see something odd like this, it means they are hitting one of the limits in place in their conf file.&lt;br /&gt;
&lt;br /&gt;
The conf file, however, simply shows their limits - how do we know what they are currently at?&lt;br /&gt;
&lt;br /&gt;
The answer is in /proc/user_beancounters - this file contains the current status (and peaks) of their performance settings, and also counts how many times they have hit the barrier.  The output looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;764: kmemsize         384113     898185    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          0        322        322          0&lt;br /&gt;
     privvmpages        1292       7108     610000     615000          0&lt;br /&gt;
     shmpages            270        528      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc               8         23        410        415          0&lt;br /&gt;
     physpages            48       5624          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages        641       6389      13019 2147483647          0&lt;br /&gt;
     numtcpsock            3         21       1210       1215          0&lt;br /&gt;
     numflock              1          3        107        117          0&lt;br /&gt;
     numpty                0          2         19         19          0&lt;br /&gt;
     numsiginfo            0          4        274        274          0&lt;br /&gt;
     tcpsndbuf             0      80928    1800000    1900000          0 &lt;br /&gt;
     tcprcvbuf             0     108976    1800000    1900000          0&lt;br /&gt;
     othersockbuf       2224      37568     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4272     200000     200000          0&lt;br /&gt;
     numothersock          3          9        650        660          0&lt;br /&gt;
     dcachesize        53922     100320     786432     818029          0&lt;br /&gt;
     numfile             161        382       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first column is the name of the counter in question - the same names we saw in the system&#039;s conf file.  The second column is the _current_ value of that counter, the third column is the maximum that counter has ever reached, the fourth column is the soft limit (barrier), and the fifth column is the hard limit (the same numbers as in that system&#039;s conf file).&lt;br /&gt;
&lt;br /&gt;
The sixth number is the failcount - how many times the current usage has risen to hit the barrier.  It increases as soon as the current usage hits the soft limit.&lt;br /&gt;
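Since the failcount is what matters when a customer complains, it can be useful to scan for nonzero failcounts directly. This is a sketch assuming the column layout shown above; the optional file argument (a saved copy) exists only to make the helper easy to test.

```shell
# Print beancounter rows whose failcnt (last column) is nonzero.
# Rows have 6 fields (counter lines) or 7 (the VEID label line).
failing_counters() {
  awk 'NF >= 6 && $NF ~ /^[0-9]+$/ && $NF > 0' "${1:-/proc/user_beancounters}"
}
```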
&lt;br /&gt;
The problem with /proc/user_beancounters is that it actually contains that set of data for every running VE - so you can&#039;t just cat /proc/user_beancounters - it is too long and you get info for every other running system.&lt;br /&gt;
&lt;br /&gt;
You can vzctl enter the system and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 9999&lt;br /&gt;
 cat /proc/user_beancounters&lt;br /&gt;
&lt;br /&gt;
inside their system, and you will just see the stats for their particular system, but entering their system every time you want to see it is cumbersome.&lt;br /&gt;
&lt;br /&gt;
So, I wrote a simple script called &amp;quot;vzs&amp;quot; which simply greps for the VEID and prints the next 20 or so lines after it.  For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzs 765:&lt;br /&gt;
765: kmemsize        2007936    2562780    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          8        322        322          0&lt;br /&gt;
     privvmpages       26925      71126     610000     615000          0&lt;br /&gt;
     shmpages          16654      16750      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc              41         57        410        415          0&lt;br /&gt;
     physpages          1794      49160          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages       4780      51270      13019 2147483647          0&lt;br /&gt;
     numtcpsock           23         37       1210       1215          0&lt;br /&gt;
     numflock             17         39        107        117          0&lt;br /&gt;
     numpty                1          3         19         19          0&lt;br /&gt;
     numsiginfo            0          6        274        274          0&lt;br /&gt;
     tcpsndbuf         22240     333600    1800000    1900000          0&lt;br /&gt;
     tcprcvbuf             0     222656    1800000    1900000          0&lt;br /&gt;
     othersockbuf     104528     414944     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4448     200000     200000          0&lt;br /&gt;
     numothersock         73        105        650        660          0&lt;br /&gt;
     dcachesize       247038     309111     786432     818029          0&lt;br /&gt;
     numfile             904       1231       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That showed us just the portion of /proc/user_beancounters for system 765.&lt;br /&gt;
&lt;br /&gt;
When you run the vzs command, always add a : after the VEID.&lt;br /&gt;
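A minimal re-creation of vzs would look like the following; this is a sketch from the description above, and the real script may differ (the exact line count, for instance). The optional second argument is only for testing against a saved copy of the file.

```shell
# Grep the VEID label (with trailing colon) out of the beancounters
# file and print the counter lines that follow it.
vzs() {
  grep -A 23 "$1" "${2:-/proc/user_beancounters}"
}
# e.g. vzs 765:
```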
&lt;br /&gt;
So, if a customer complains about out-of-memory errors, no more files, no more ptys, or just has a nonspecific complaint about processes dying, etc., the very first thing you need to do is check their beancounters with vzs.  Usually you will spot an item that has a high failcount and needs to be upped.&lt;br /&gt;
&lt;br /&gt;
At that point you could simply up the counter with `vzctl set`.  Generally pick a number 10-20% higher than the old one, and make the hard limit slightly larger than the soft limit. However, our systems now come in several levels, and those levels have more/different memory allocations. If someone is complaining about something other than a memory limit (pty, numiptent, numflock), it’s generally safe to increase it, at least to the same level as what’s in the /vzconf/4unlimited file on the newest virt. If someone is hitting a memory limit, first make sure they are given what they deserve:&lt;br /&gt;
&lt;br /&gt;
(refer to mgmt -&amp;gt; payments -&amp;gt; packages)&lt;br /&gt;
&lt;br /&gt;
To set those levels, you use the [[#setmem|setmem]] command. &lt;br /&gt;
&lt;br /&gt;
The alternate (DEPRECATED) method would be to use one of these commands:&lt;br /&gt;
256 &amp;lt;veid&amp;gt;&lt;br /&gt;
300 &amp;lt;veid&amp;gt;&lt;br /&gt;
384 &amp;lt;veid&amp;gt;&lt;br /&gt;
512 &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the levels were not right (run vzs &amp;lt;veid&amp;gt; before and after to see the effect), tell the customer they’ve been adjusted and be done with it. If the levels were right, tell the customer they must upgrade to a higher package, tell them how to see their level (control panel), and that they can reboot their system to escape this lockup condition.&lt;br /&gt;
&lt;br /&gt;
Customers can also complain that their site is totally unreachable, or that it is down ... if the underlying machine is up, and all seems well, you may notice in the beancounters that network-specific counters are failing - such as numtcpsock, tcpsndbuf, or tcprcvbuf.  This will keep them from talking on the network and make it seem like their system is down.  Again, just up the limits and things should be fine.&lt;br /&gt;
&lt;br /&gt;
On virts 1-4, you should first look at the default settings for that item on a later virt, such as virt8 - we have increased the defaults a lot since the early machines.  So, if you are going to up a counter on virt2, instead of upping it by 10-20%, up it to the new default that you see on virt8.&lt;br /&gt;
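The "10-20% higher, hard limit slightly above soft" guidance can be computed rather than eyeballed. This is a hedged sketch; the VEID, counter, and numbers in the usage line are illustrative.

```shell
# Bump a barrier:limit pair by ~15%, keeping the hard limit the same
# distance above the new soft limit.
bump() {
  soft="$1"; hard="$2"
  new_soft=$(( soft + soft * 15 / 100 ))
  new_hard=$(( new_soft + hard - soft ))
  echo "${new_soft}:${new_hard}"
}
# e.g. vzctl set 765 --numflock "$(bump 107 117)" --save
```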
&lt;br /&gt;
== Moving a VE to another virt (migrate/migrateonline) ==&lt;br /&gt;
&lt;br /&gt;
This will take a while to complete - and it is best to do this at night when the load is light on both machines.&lt;br /&gt;
&lt;br /&gt;
There are different methods for this, depending on which version of virtuozzo is installed on the src. and dst. virt. &lt;br /&gt;
To check which version is running: &lt;br /&gt;
 [root@virt12 private]# cat /etc/virtuozzo-release&lt;br /&gt;
 Virtuozzo release 2.6.0&lt;br /&gt;
&lt;br /&gt;
Ok, let&#039;s say that the VE is 1212, and vital stats are:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1212&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/1212&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/1212&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;fedora-core-2/20040903&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.229.84&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;devel-fc2/20040903 php-fc2/20040813 mysql-fc2/20040812 postgresql-fc2/20040813 mod_perl-fc2/20040812 mod_ssl-fc2/20040811 jre-fc2/20040823 jdk-fc2/20040823 mailman-fc2/20040823 analog-fc2/20040824 proftpd-fc2/20040818 tomcat-fc2/20040823 usermin-fc2/20040909 webmin-fc2/20040909 uw-imap-fc2/20040830 phpBB-fc2/20040831 spamassassin-fc2/20040910 PostNuke-fc2/20040824 sl-webalizer-fc2/20040&lt;br /&gt;
818&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# vzctl exec 1212 df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  405M  3.7G  10% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From this you can see that he’s using (and will minimally need free on the dst server) ~400MB, and he’s running on a Fedora Core 2 template, version 20040903. He’s also got a bunch of other templates installed. It is &#039;&#039;&#039;vital&#039;&#039;&#039; that &#039;&#039;&#039;all&#039;&#039;&#039; of these templates exist on the dst system. To confirm that, on the dst system run:&lt;br /&gt;
&lt;br /&gt;
For &amp;lt; 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt14 private]# vzpkgls | grep fc2&lt;br /&gt;
devel-fc2 20040903&lt;br /&gt;
PostNuke-fc2 20040824&lt;br /&gt;
analog-fc2 20040824&lt;br /&gt;
awstats-fc2 20040824&lt;br /&gt;
bbClone-fc2 20040824&lt;br /&gt;
jdk-fc2 20040823&lt;br /&gt;
jre-fc2 20040823&lt;br /&gt;
mailman-fc2 20040823&lt;br /&gt;
mod_frontpage-fc2 20040816&lt;br /&gt;
mod_perl-fc2 20040812&lt;br /&gt;
mod_ssl-fc2 20040811&lt;br /&gt;
mysql-fc2 20040812&lt;br /&gt;
openwebmail-fc2 20040817&lt;br /&gt;
php-fc2 20040813&lt;br /&gt;
phpBB-fc2 20040831&lt;br /&gt;
postgresql-fc2 20040813&lt;br /&gt;
proftpd-fc2 20040818&lt;br /&gt;
sl-webalizer-fc2 20040818&lt;br /&gt;
spamassassin-fc2 20040910&lt;br /&gt;
tomcat-fc2 20040823&lt;br /&gt;
usermin-fc2 20040909&lt;br /&gt;
uw-imap-fc2 20040830&lt;br /&gt;
webmin-fc2 20040909&lt;br /&gt;
[root@virt14 private]# vzpkgls | grep fedora&lt;br /&gt;
fedora-core-1 20040121 20040818&lt;br /&gt;
fedora-core-devel-1 20040121 20040818&lt;br /&gt;
fedora-core-2 20040903&lt;br /&gt;
[root@virt14 private]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For these older systems, you can simply match up the date on the template. &lt;br /&gt;
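Matching up the dates by eye is error-prone when the lists are long; one hedged way to do it mechanically is to save the `vzpkgls` output from both HNs and diff them. The paths and the dst hostname in the usage lines are illustrative.

```shell
# Print template lines unique to either side (column 1: only on src,
# indented column 2: only on dst). comm requires sorted input.
compare_templates() {
  a=$(mktemp); b=$(mktemp)
  sort "$1" > "$a"; sort "$2" > "$b"
  comm -3 "$a" "$b"
  rm -f "$a" "$b"
}
# e.g. vzpkgls > /tmp/src.lst
#      ssh root@10.1.4.61 vzpkgls > /tmp/dst.lst
#      compare_templates /tmp/src.lst /tmp/dst.lst
```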
&lt;br /&gt;
For &amp;gt;= 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt19 /vz2/private]# vzpkg list&lt;br /&gt;
centos-5-x86                    2008-01-07 22:05:57&lt;br /&gt;
centos-5-x86    devel&lt;br /&gt;
centos-5-x86    jre&lt;br /&gt;
centos-5-x86    jsdk&lt;br /&gt;
centos-5-x86    mod_perl&lt;br /&gt;
centos-5-x86    mod_ssl&lt;br /&gt;
centos-5-x86    mysql&lt;br /&gt;
centos-5-x86    php&lt;br /&gt;
centos-5-x86    plesk9&lt;br /&gt;
centos-5-x86    plesk9-antivirus&lt;br /&gt;
centos-5-x86    plesk9-api&lt;br /&gt;
centos-5-x86    plesk9-atmail&lt;br /&gt;
centos-5-x86    plesk9-backup&lt;br /&gt;
centos-5-x86    plesk9-horde&lt;br /&gt;
centos-5-x86    plesk9-mailman&lt;br /&gt;
centos-5-x86    plesk9-mod-bw&lt;br /&gt;
centos-5-x86    plesk9-postfix&lt;br /&gt;
centos-5-x86    plesk9-ppwse&lt;br /&gt;
centos-5-x86    plesk9-psa-firewall&lt;br /&gt;
centos-5-x86    plesk9-psa-vpn&lt;br /&gt;
centos-5-x86    plesk9-psa-fileserver&lt;br /&gt;
centos-5-x86    plesk9-qmail&lt;br /&gt;
centos-5-x86    plesk9-sb-publish&lt;br /&gt;
centos-5-x86    plesk9-vault&lt;br /&gt;
centos-5-x86    plesk9-vault-most-popular&lt;br /&gt;
centos-5-x86    plesk9-watchdog&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On these newer systems, it&#039;s difficult to tell whether the template on the dst matches the src exactly. Just because centos-5-x86 is listed on both servers doesn&#039;t mean all the same packages are there on the dst. To know for sure, you must perform a sample rsync:&lt;br /&gt;
&lt;br /&gt;
 rsync -avn /vz/template/centos/5/x86/ root@10.1.4.61:/vz/template/centos/5/x86/&lt;br /&gt;
&lt;br /&gt;
If you see a ton of output from the dry-run command, then clearly there are some differences. You may opt to let the rsync complete (without dry-run mode); the only downside is that you&#039;ve now used up more space on the dst, and the centos template will be a mess of old and new data - difficult if not impossible to undo (if someday we wanted to reclaim the space).&lt;br /&gt;
&lt;br /&gt;
If you choose to merge templates, you should closely inspect the dry run output. You should also take care to exclude anything in the /config directory. For example:&lt;br /&gt;
&lt;br /&gt;
 rsync -av -e ssh --stats --exclude=x86/config  /vz/template/ubuntu/10.04/ root@10.1.4.62:/vz/template/ubuntu/10.04/&lt;br /&gt;
&lt;br /&gt;
Which will avoid this directory and contents:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 /vz2/private]# ls /vz/template/ubuntu/10.04/x86/config*&lt;br /&gt;
app  os&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is important to avoid since the config may differ on the destination, and we are really only interested in making sure the packages are there, not overwriting a newer config with an older one.&lt;br /&gt;
&lt;br /&gt;
If the dst system was missing a template, you have 2 choices: &lt;br /&gt;
# put the missing template on the dst system. 2 choices here: &lt;br /&gt;
## Install the template from rpm (found under backup2: /mnt/data4/vzrpms/distro/) or &lt;br /&gt;
## rsync over the template (found under /vz/template) - see above&lt;br /&gt;
# put the ve on a system which has all the proper templates&lt;br /&gt;
&lt;br /&gt;
=== pre-seeding a migration ===&lt;br /&gt;
&lt;br /&gt;
When migrating a customer (or many), the transfer can take some time depending on how much data there is, and it can be difficult to gauge when a migration will complete or how long it will take. To speed up the process and get a better idea of how long it will take, you can pre-transfer a customer&#039;s data to the destination server. If done correctly, vzmigrate will see the pre-transferred data and pick up where you left off, having much less to transfer (just changed/new files). &lt;br /&gt;
&lt;br /&gt;
We believe vzmigrate uses rsync for its transfer. Therefore, not only can you use rsync to do a pre-seed, you can also run rsync to see what is causing a repeatedly-failing vzmigrate to fail. &lt;br /&gt;
&lt;br /&gt;
There&#039;s no magic to a pre-seed, you just need to make sure it&#039;s named correctly.&lt;br /&gt;
&lt;br /&gt;
Given:&lt;br /&gt;
&lt;br /&gt;
source: /vz1/private/1234&lt;br /&gt;
&lt;br /&gt;
and you want to migrate to /vz2 on the target system, your rsync would look like:&lt;br /&gt;
&lt;br /&gt;
 rsync -av /vz1/private/1234/ root@x.x.x.x:/vz2/private/1234.migrated/&lt;br /&gt;
&lt;br /&gt;
After running that successful rsync, the ensuing migrateonline (or migrate) will take much less time to complete, depending on the number of files to be analyzed and the number of changed files. In any case, it&#039;ll be much faster than starting the migration from scratch.&lt;br /&gt;
&lt;br /&gt;
Further, as we discuss elsewhere in this topic, a failed migration can be moved from &amp;lt;tt&amp;gt;/vz/private/1234&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;/vz/private/1234.migrated&amp;lt;/tt&amp;gt; on the destination if you want to restart a failed migration. This should &#039;&#039;&#039;only&#039;&#039;&#039; be done if the migration failed and the CT is not running on the destination HN.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline instructions: src &amp;gt;=3.x -&amp;gt; dst&amp;gt;=3.x ===&lt;br /&gt;
&lt;br /&gt;
A script called [[#migrateonline|migrateonline]] was written to handle this kind of move. It is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; – a util to move a VE from one host to another seamlessly, with no reboot of the VE necessary. This wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the VE’s IP(s) on the src system were not properly removed from the arp/route tables, causing problems when the VE was started up on the dst system; [[#migrate|migrate]] mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrateonline emails VE owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the VE will be moved to the same private/root location as it was on the src virt. Note: &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; is equivalent to &amp;lt;tt&amp;gt;migrateonline&amp;lt;/tt&amp;gt;, but will migrate a VE AND restart it in the process.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrateonline&lt;br /&gt;
usage: /usr/local/sbin/migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrateonline 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine.&lt;br /&gt;
&lt;br /&gt;
If they had backups, use the mvbackups command to move their backups to the new server:&lt;br /&gt;
&lt;br /&gt;
 mvbackups 1212 virt14 vz&lt;br /&gt;
&lt;br /&gt;
Rename the VE&#039;s leftover files on the src:&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/migrated-1212&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/old-1212-migrated-20120404-noarchive&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems in mgmt to reflect the new path and server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If migrateonline does not work, you can try again using migrate - this will result in a brief reboot for the VE.&lt;br /&gt;
Before you try again, make sure of a few things:&lt;br /&gt;
&lt;br /&gt;
Depending on where the migration died, there may be partial data on the dst system in 1 of 2 places&lt;br /&gt;
(given the example above)&lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
before you run migrate again, you&#039;ll want to rename the directory so that all data is in 1212.migrated:&lt;br /&gt;
&lt;br /&gt;
 mv /vz/private/1212 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
this way, it will pick up where it left off and transfer only new files.&lt;br /&gt;
&lt;br /&gt;
Likewise, if you want to speed up a migration, you can pre-seed the dst as follows:&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# rsync -avSH /vz/private/1212/ root@10.1.4.64:/vz/private/1212.migrated/&lt;br /&gt;
&lt;br /&gt;
then when you run migrate or migrateonline, it will only need to move the changed files - the migration will complete quickly.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline/migrate failures (migrate manually) ===&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say for whatever reason the migration fails. If it fails with [[#migrateonline|migrateonline]], you should try [[#migrate|migrate]] (which will reboot the customer&#039;s VE, so notify them ahead of time).&lt;br /&gt;
&lt;br /&gt;
You may want to run a [[#pre-seeding_a_migration|pre-seed]] rsync to see if you can find the problem. On older virts, we&#039;ve seen this problem due to a large logfile (which you can find and encourage the customer to remove/compress):&lt;br /&gt;
 find / -size +1048576k -exec ls -lh {} \;&lt;br /&gt;
&lt;br /&gt;
You may also see migration failing due to quota issues.&lt;br /&gt;
&lt;br /&gt;
You can try to resolve this by copying an existing quota file over the one you need:&lt;br /&gt;
&lt;br /&gt;
 cp /var/vzquota/quota.1 /var/vzquota/quota.xxx&lt;br /&gt;
&lt;br /&gt;
If it complains about quota running, you should then be able to stop it:&lt;br /&gt;
&lt;br /&gt;
 vzquota off xxxx&lt;br /&gt;
&lt;br /&gt;
If all else fails, migrate to a new VEID, e.g. 1234 becomes 12341.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the rsync or [[#migrate|migrate]] fails, you can always move someone manually:&lt;br /&gt;
&lt;br /&gt;
1. stop ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v stop 1234&lt;br /&gt;
&lt;br /&gt;
2. copy over data&amp;lt;br&amp;gt;&lt;br /&gt;
 rsync -avSH /vz/private/1234/ root@1.1.1.1:/vzX/private/1234/&lt;br /&gt;
&lt;br /&gt;
NOTE: if you&#039;ve previously seeded the data (run rsync while the VE was up/running) and this is a subsequent rsync, make sure the last rsync you do (while the VE is not running) has the --delete option&lt;br /&gt;
&lt;br /&gt;
3. copy over conf&amp;lt;br&amp;gt;&lt;br /&gt;
 scp /vzconf/1234.conf root@1.1.1.1:/vzconf&lt;br /&gt;
&lt;br /&gt;
4. on dst, edit the conf to reflect the right vzX dir&amp;lt;br&amp;gt;&lt;br /&gt;
 vi /vzconf/1234.conf&lt;br /&gt;
&lt;br /&gt;
5. on src remove the IPs&amp;lt;br&amp;gt;&lt;br /&gt;
 ipdel 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
6. on dst add IPs &amp;lt;br&amp;gt;&lt;br /&gt;
 ipadd 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
7. on dst, start ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v start 1234&lt;br /&gt;
&lt;br /&gt;
8. cancel, then archive the VE on the src per the instructions above.&lt;br /&gt;
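The eight manual-move steps above can be collected into one hedged sketch. `v`, `ipdel`, and `ipadd` are the local wrapper scripts this wiki references; automating step 4 with sed is an assumption, and all hosts, paths, and IPs in the usage line are illustrative.

```shell
# Manually move a VE: args are VEID, dst IP, dst vzX dir, then the
# VE's public IPs.
manual_move() {
  veid="$1"; dst="$2"; dstdir="$3"; shift 3; ips="$*"
  v stop "$veid"                                                     # 1. stop ve
  rsync -avSH --delete "/vz/private/$veid/" \
        "root@$dst:/$dstdir/private/$veid/"                          # 2. final data pass
  scp "/vzconf/$veid.conf" "root@$dst:/vzconf/"                      # 3. copy conf
  ssh "root@$dst" "sed -i 's|/vz/|/$dstdir/|g' /vzconf/$veid.conf"   # 4. fix vzX dir
  ipdel "$veid" $ips                                                 # 5. drop IPs on src
  ssh "root@$dst" "ipadd $veid $ips && v start $veid"                # 6-7. add IPs, start
  # 8. cancel, then archive the VE on the src per the instructions above
}
# e.g. manual_move 1234 1.1.1.1 vz1 2.2.2.2 3.3.3.3
```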
&lt;br /&gt;
=== migrate src=2.6.0 -&amp;gt; dst&amp;gt;=2.6.0, or mass-migration with customer notify ===&lt;br /&gt;
&lt;br /&gt;
A script called &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; was written to handle this kind of move. It is basically a wrapper for vzmigrate – a util to seamlessly move a VE from one host to another. This wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the VE’s IP(s) on the src system were not properly removed from the arp/route tables, causing problems when the VE was started up on the dst system; migrate mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrate emails VE owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the VE will be moved to the same private/root location as it was on the src virt. Note: migrateonline is equivalent to migrate, but will migrate a VE from one 2.6 &#039;&#039;&#039;kernel&#039;&#039;&#039; machine to another 2.6 kernel machine without restarting the VE.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrate&lt;br /&gt;
usage: /usr/local/sbin/migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrate 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which migrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
&lt;br /&gt;
On 2.6.1 you’ll also have to move the private area:&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, [[#cancelve|cancelve]] would offer to remove them. Say &#039;&#039;&#039;no&#039;&#039;&#039; to this option – removing them would mean the backups would have to be recreated on the dst virt. Instead, copy the backup configs from backup.config on the src virt to backup.config on the dst virt (note the path changes in this example).&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems, as in the example above, will create actual files and consume much more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
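Why this matters can be demonstrated with plain files (names here are made up): within one filesystem, mv preserves hardlinks, so both names keep sharing one inode and no extra space is used.&lt;br /&gt;

```shell
# Demonstrate that mv within one filesystem preserves hardlinks.
tmp=$(mktemp -d)
mkdir "$tmp/src" "$tmp/dst"
echo data > "$tmp/src/file"
ln "$tmp/src/file" "$tmp/src/link"        # second name for the same inode
mv "$tmp/src/file" "$tmp/src/link" "$tmp/dst/"
ls -l "$tmp/dst/file" | awk '{print $2}'  # link count: prints 2
```

Across filesystems (as in the data1 to data3 move above) mv has to copy, which is why every hardlinked backup increment turns into a full copy on the target drive.&lt;br /&gt;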
To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, then update the system, the template (it should be pre-selected to the same one), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
=== vzmigrate: src=2.6.1 -&amp;gt; dst&amp;gt;=2.6.0 ===&lt;br /&gt;
&lt;br /&gt;
This version of vzmigrate handles ips properly. It will not notify ve owners of moves as in the above example; other than that it’s essentially the same.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]#  vzmigrate 10.1.4.64 -r no 1212:1212:/vz/private/1212:/vz/root/1212&lt;br /&gt;
migrating on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which vzmigrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
[root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, &amp;lt;tt&amp;gt;cancelve&amp;lt;/tt&amp;gt; would offer to remove them. Say no to this option – removing them would mean the backups would have to be recreated on the dst virt. Instead, copy the backup configs from backup.config on the src virt to backup.config on the dst virt (note the path changes in this example).&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems, as in the example above, will create actual files and consume much more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, then update the system, the template (it should be pre-selected to the same one), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
=== src=2.5.x ===&lt;br /&gt;
&lt;br /&gt;
First, go to the private dir:&lt;br /&gt;
&lt;br /&gt;
 cd /vz1/private/&lt;br /&gt;
&lt;br /&gt;
Stop the VE - make sure it stops totally cleanly.&lt;br /&gt;
 &lt;br /&gt;
 vzctl stop 1212&lt;br /&gt;
&lt;br /&gt;
Then you’d use vemove - a script written to copy over the config, create tarballs of the ve’s data on the destination virt, and cancel the ve on the source system (in this example we’re going to put a ve that was in /vz1/private on the src virt, in /vz/private on the dst virt):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vemove&lt;br /&gt;
ERROR: Usage: vemove veid target_ip target_path_dir&lt;br /&gt;
[root@virt12 sbin]# vemove 1212 10.1.4.64 /vz/private/1212&lt;br /&gt;
tar cfpP - 1212 --ignore-failed-read | (ssh -2 -c arcfour 10.1.4.64 &amp;quot;split - -b 1024m /vz/private/1212.tar&amp;quot; )&lt;br /&gt;
scp /vzconf/1212.conf 10.1.4.64:/vzconf&lt;br /&gt;
cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, cancelve would offer to remove them. Say no to this option – removing them would mean the backups would have to be recreated on the dst virt. Instead, copy the backup configs from backup.config on the src virt to backup.config on the dst virt (note the path changes in this example).&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems, as in the example above, will create actual files and consume much more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
When you are done, /vz/private on the dst virt will have files like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;1212.taraa&lt;br /&gt;
1212.tarab&lt;br /&gt;
1212.tarac&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each one 1024m (or less, for the last one) in size.&lt;br /&gt;
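The split/reassemble round trip that vemove relies on can be sanity-checked with a small throwaway archive (tiny sizes and made-up names; vemove itself uses &amp;lt;tt&amp;gt;split - -b 1024m&amp;lt;/tt&amp;gt; and the virtuozzo tar, per the note further down).&lt;br /&gt;

```shell
# Round-trip a small tar through split and cat, mirroring what vemove
# does with 1024m pieces. Plain tar is fine for this demo; real ve
# private areas need the virtuozzo tar.
tmp=$(mktemp -d); cd "$tmp"
mkdir 1212; echo hello > 1212/file
tar cf 1212.tar 1212
split -b 512 1212.tar 1212.tar            # 1212.taraa, 1212.tarab, ...
rm 1212.tar; rm -r 1212
cat 1212.tar?? | tar xf -                 # the shell glob sorts the pieces in order
cat 1212/file                             # prints: hello
```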
&lt;br /&gt;
Then, on the dst server, run:&lt;br /&gt;
&lt;br /&gt;
 cat 1212.tar?? | tar xpPBf -&lt;br /&gt;
&lt;br /&gt;
After 20 minutes or so it will be fully untarred.  Since the conf&lt;br /&gt;
file is already in place, you can go ahead and start the system:&lt;br /&gt;
&lt;br /&gt;
 vzctl start 1212&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, then update the system, the template (it should be pre-selected to the same one), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
NOTE: you MUST tar the system up using the virtuozzo version of tar that&lt;br /&gt;
is on all the virt systems, and further you MUST untar the tarball with&lt;br /&gt;
the virtuozzo tar, using these options:  `&amp;lt;tt&amp;gt;tar xpPBf -&amp;lt;/tt&amp;gt;`&lt;br /&gt;
&lt;br /&gt;
If you tar up an entire VE and move it to a non-virtuozzo machine, that is&lt;br /&gt;
ok, and you can untar it there with normal tar commands, but do not untar&lt;br /&gt;
it and then repack it with a normal tar and expect it to work - you need&lt;br /&gt;
to use virtuozzo tar commands on virtuozzo tarballs to make it work.&lt;br /&gt;
&lt;br /&gt;
The backups are sort of an exception, since we are just (usually)&lt;br /&gt;
restoring user data that was created after we gave them the system, and&lt;br /&gt;
therefore has nothing to do with magic symlinks or vz-rpms, etc.&lt;br /&gt;
&lt;br /&gt;
== Moving a VE on the same virt ==&lt;br /&gt;
&lt;br /&gt;
Easy way:&amp;lt;br&amp;gt;&lt;br /&gt;
Scenario 1: ve 123 is to be renamed 1231 and moved from vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:1231:/vz/private/1231:/vz/root/1231&lt;br /&gt;
&lt;br /&gt;
Scenario 2: ve 123 is to be moved from vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:123:/vz/private/123:/vz/root/123&lt;br /&gt;
&lt;br /&gt;
vzmlocal will reboot the ve at the end of the move.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; don&#039;t forget to fix /usr/local/sbin/backup.config for new disk location&lt;br /&gt;
&lt;br /&gt;
Manual/old way:&lt;br /&gt;
&lt;br /&gt;
1) &amp;lt;tt&amp;gt;vzctl stop 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2) &amp;lt;tt&amp;gt;mv /vz1/private/123 /vz/private/.&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(or cp -a if you want to copy)&lt;br /&gt;
3) in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts/123.conf&amp;lt;/tt&amp;gt; change value&amp;lt;br&amp;gt;&lt;br /&gt;
of &#039;&amp;lt;tt&amp;gt;VE_PRIVATE&amp;lt;/tt&amp;gt;&#039; variable to point to a new private area location&lt;br /&gt;
4) &amp;lt;tt&amp;gt;vzctl start 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
5) update backups if needed: &amp;lt;tt&amp;gt;mvbackups 123 virtX virt1 vz&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
6) update management screens&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notes: a) the absolute path to the private area is stored in the quota file &amp;lt;tt&amp;gt;/var/vzquota/quota.123&amp;lt;/tt&amp;gt; - so during the first startup the quota will be recalculated.&amp;lt;br&amp;gt;&lt;br /&gt;
b) if you&#039;re going to write a script to do the job, you MUST make sure that $VEID won&#039;t be expanded to &#039;&#039; in the ve config file - i.e. you need to escape &#039;$&#039;. Otherwise you might end up with:&lt;br /&gt;
&lt;br /&gt;
 VE_PRIVATE=&amp;quot;/vz/private/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in the config, and &#039;vzctl destroy&#039; for this VE ID &#039;&#039;&#039;will remove everything under the /vz/private/ directory&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Adding a veth device to a VE ==&lt;br /&gt;
&lt;br /&gt;
Not totally sure what this is, but a customer asked for it and here&#039;s what we did (as instructed by vz support):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;v set 99 --netif_add eth99  --save&lt;br /&gt;
ipdel 99 69.55.230.58&lt;br /&gt;
v set 99 --ifname eth99 --ipadd 69.55.230.58 --save&lt;br /&gt;
v set 99 --ifname eth99 --gateway 69.55.230.1 --save&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active      eth0              veth77.77,veth99.99&lt;br /&gt;
veth_net          active&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
br99             bridge     net99&lt;br /&gt;
veth99.99        veth       net99&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        net99            69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br0 eth0&lt;br /&gt;
&lt;br /&gt;
(will remove 99 from orig net and move to veth_net)&lt;br /&gt;
vznetcfg net addif veth_net veth99.99&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
(delete the old crap)&lt;br /&gt;
vznetcfg net del net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then, to add another device in&lt;br /&gt;
&lt;br /&gt;
v set 77 --netif_add eth77  --save&lt;br /&gt;
ipdel 77 69.55.230.78&lt;br /&gt;
v set 77 --ifname eth77 --ipadd 69.55.230.78 --save&lt;br /&gt;
v set 77 --ifname eth77 --gateway 69.55.230.1 --save&lt;br /&gt;
v set 77 --save --ifname eth77 --network veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif veth_net veth77.77&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth       veth_net&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
another example&lt;br /&gt;
&lt;br /&gt;
v set 1182 --netif_add eth1182  --save&lt;br /&gt;
ipdel 1182 69.55.236.217&lt;br /&gt;
v set 1182 --ifname eth1182 --ipadd 69.55.236.217 --save&lt;br /&gt;
v set 1182 --ifname eth1182 --gateway 69.55.236.1 --save&lt;br /&gt;
vznetcfg net addif veth_net veth1182.1182&lt;br /&gt;
v set 1182 --save --ifname eth1182 --network veth_net&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unused/not working commands:&lt;br /&gt;
ifconfig veth99.0 0&lt;br /&gt;
vznetcfg net list&lt;br /&gt;
vznetcfg br new br99 net99&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg br show&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
&lt;br /&gt;
vznetcfg br new br1182 net1182&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif eth0 net1182&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&lt;br /&gt;
--- 8&amp;lt; ---&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new net&lt;br /&gt;
# vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vzctl set 99 --save --netif_add eth0 (at this stage veth99.0 interface have to appear&lt;br /&gt;
on node)&lt;br /&gt;
# vzctl set 99 --save --ifname eth0 --ipadd 69.55.230.58 (and probably few more arguments&lt;br /&gt;
here - see &#039;man vzctl&#039;)&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Assigning/removing an ip for a VE ==&lt;br /&gt;
&lt;br /&gt;
1. Add or remove ips:&lt;br /&gt;
 ipdel 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
 ipadd 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
&lt;br /&gt;
2. update Mgmt screens&lt;br /&gt;
&lt;br /&gt;
3. offer to update any DNS we do for them&lt;br /&gt;
&lt;br /&gt;
4. check to see if we had rules for the old IP in the firewall&lt;br /&gt;
&lt;br /&gt;
== Enabling tun device for a ve ==&lt;br /&gt;
Note, there’s a command for this: [[#addtun|addtun]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
Make sure the tun.o module is already loaded before Virtuozzo is started: &lt;br /&gt;
 lsmod &lt;br /&gt;
Allow the VPS to use the TUN/TAP device: &lt;br /&gt;
 vzctl set 101 --devices c:10:200:rw --save &lt;br /&gt;
Create the corresponding device inside the VPS and set the proper permissions: &lt;br /&gt;
 vzctl exec 101 mkdir -p /dev/net &lt;br /&gt;
 vzctl exec 101 mknod /dev/net/tun c 10 200 &lt;br /&gt;
 vzctl exec 101 chmod 600 /dev/net/tun&lt;br /&gt;
&lt;br /&gt;
== Remaking a system (on same virt) ==&lt;br /&gt;
&lt;br /&gt;
1. [[#cancelve|cancelve]] (or v destroy x - ONLY if you&#039;re POSITIVE no data needs to be saved)&lt;br /&gt;
&lt;br /&gt;
2. [[#vemake|vemake]] using same veid&lt;br /&gt;
&lt;br /&gt;
3. [[#mvbackups|mvbackups]] or [[#vb|vb]] (if new mount point)&lt;br /&gt;
&lt;br /&gt;
4. update mgmt with new dir/ip &lt;br /&gt;
&lt;br /&gt;
5. update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Re-initialize quota for a VE ==&lt;br /&gt;
&lt;br /&gt;
There’s a command for this now: [[#clearquota|clearquota]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
&lt;br /&gt;
vzctl stop 1&lt;br /&gt;
vzquota drop 1&lt;br /&gt;
vzctl start 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Traffic accounting on linux ==&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - all tracking is done via bwdb now. This is how we used to track traffic.&lt;br /&gt;
&lt;br /&gt;
TODO: update for diff versions of vz&lt;br /&gt;
&lt;br /&gt;
Unlike FreeBSD, where we have to add firewall count rules to the system to count the traffic, on linux Virtuozzo counts the traffic for us.  You can see the current traffic stats by running `vznetstat`:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, each VEID is on a line with its output and input byte counts.  So, we simply run a cron job:&lt;br /&gt;
&lt;br /&gt;
 4,9,14,19,24,29,34,39,44,49,55,59 * * * * /root/vztrafdump.sh&lt;br /&gt;
&lt;br /&gt;
Just like we do on FreeBSD - this one goes through all the VEs in /vz/private and greps the line from vznetstat that matches them and dumps it in /jc_traffic_dump on their system.  Then it does it again for all the VEs in /vz1/private.  It is important to note that vznetstat runs only once, and the grepping is done from a temporary file that contains that output - we do this because running vznetstat once for each VE that we read out of /vz/private and /vz1/private would take way too long and be too intensive.&lt;br /&gt;
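The run-once-then-grep idea can be sketched like this (the file names and the exact-match test are illustrative; the real script is /root/vztrafdump.sh):&lt;br /&gt;

```shell
# Dump vznetstat-style output once to a temp file, then pull each ve's
# line from that snapshot instead of re-running vznetstat per ve.
# The printf data stands in for real vznetstat output.
snap=$(mktemp)
printf '24218 1 484M 39M\n24245 1 463M 143M\n2451 1 2224M 265M\n' > "$snap"

for veid in 24218 2451; do
    # exact first-field match, so veid 2451 does not also match 24510
    awk -v v="$veid" '$1 == v' "$snap"    # real script appends this to /jc_traffic_dump
done
```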
&lt;br /&gt;
You do not need to do anything to facilitate this other than make sure that that cron job is running - the vznetstat counters are always running, and any new VEs that are added to the system will be accounted for automatically.&lt;br /&gt;
&lt;br /&gt;
Traffic resetting no longer works with vz 2.6, so we disable the vztrafdump.sh on those virts.&lt;br /&gt;
&lt;br /&gt;
== Watchdog script ==&lt;br /&gt;
&lt;br /&gt;
On some of the older virts, we have a watchdog running that kills procs that are deemed bad per the following:&lt;br /&gt;
&lt;br /&gt;
/root/watchdog from quar1&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [5-9]...&lt;br /&gt;
  then&lt;br /&gt;
# 50-90%&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;...:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;....m&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 1000min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
  if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [1-9]...&lt;br /&gt;
  then&lt;br /&gt;
# running for 10-90 percent&lt;br /&gt;
    if echo $line | awk &#039;{print $NF}&#039; | egrep &#039;cfusion|counter|vchkpw&#039;&lt;br /&gt;
    then&lt;br /&gt;
&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[2-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# between 2-9min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      elif echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[0-9][0-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# up to 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
    fi&lt;br /&gt;
  fi&amp;lt;/pre&amp;gt;&lt;br /&gt;
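The patterns above key off positional fields of a ps-style line: $(NF-3) is the cpu column, $(NF-1) the accumulated time, and $NF the command. A made-up sample line run through the same tests:&lt;br /&gt;

```shell
# Exercise the watchdog's field extraction on a fabricated ps-style line:
# user pid %cpu %mem time command
line="root 12345 98.0 1.2 567:43 cfusion"
echo "$line" | awk '{print $(NF-3)}'                              # prints 98.0 (cpu)
echo "$line" | awk '{print $(NF-1)}' | grep "...:.."              # 567:43 matches: over 99 min
echo "$line" | awk '{print $NF}' | egrep 'cfusion|counter|vchkpw' # a watched command
```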
&lt;br /&gt;
&lt;br /&gt;
== Misc Linux Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space ... when you configure a linux system with a certain amount of disk space (the default is 4gigs) you do not actually use up 4gigs of space on the system.  The diskspace setting for a user is simply a cap, and they only use up as much space on the actual disk drive as they are actually using.&lt;br /&gt;
&lt;br /&gt;
When you create a new linux system, even though there are some 300 RPMs or so installed, if you run `df -k` you will see that the entire 4gig partition is empty - no space is being used.  This is because the files in their system are &amp;quot;magic symlinks&amp;quot; to the template for their OS that is in /vz/template - however, any changes to any of those files will &amp;quot;disconnect&amp;quot; them and they will immediately begin using space in their system.  Further, any new files uploaded (even if those new files overwrite existing files) will take up space on the partition.&lt;br /&gt;
&lt;br /&gt;
=== Can&#039;t change file /etc/sysconfig/network ===&lt;br /&gt;
if you see this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt8 root]# vzctl stop 160 ; vzctl start 160&lt;br /&gt;
VE is not running&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is unmounted&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
bash ERROR: Can&#039;t change file /etc/sysconfig/network&lt;br /&gt;
Deleting IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
VE is unmounted&lt;br /&gt;
[root@virt8 root]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
it probably means they no longer have /bin/bash - copy one in for them.&lt;br /&gt;
 &lt;br /&gt;
ALSO: another possibility is that they have removed the `ed` RPM from their system - it needs to be reinstalled. But since their system is down, this is tricky ...&lt;br /&gt;
&lt;br /&gt;
The VE startup scripts used by &#039;vzctl&#039; expect the &#039;ed&#039; package to be available inside the VE. So if the &#039;ed&#039; package is enabled in the OS template config and in the OS template the VE (in this case #827) is based on, this error should be fixed.&lt;br /&gt;
&lt;br /&gt;
It is possible to add an RPM to a VE while it is not running.&lt;br /&gt;
Do the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# cd /vz/template/&amp;lt;OS_template_with_ed_package&amp;gt;/&lt;br /&gt;
# vzctl mount 827&lt;br /&gt;
# rpm -Uvh --root /vz/root/827 --veid 827 ed-0.2-25.i386.vz.rpm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usually there’s an error, but it’s ok.&lt;br /&gt;
&lt;br /&gt;
Note: replace &#039;ed-0.2-25.i386.vz.rpm&#039; in the last command with the actual&lt;br /&gt;
version of the &#039;ed&#039; package you have.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== What template is user using ===&lt;br /&gt;
So how do I know what template the user has?  cat their conf file and it is listed in there.  For example, if the conf file has:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1103&lt;br /&gt;
…snip…&lt;br /&gt;
OSTEMPLATE=&amp;quot;debian-3.0/20030822&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;mod_perl-deb30/20030707 mod_ssl-deb30/20030703 mysql-deb30/20030707 proftpd-deb30/20030703 webmin-deb30/20030823 &amp;quot;&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then they are on debian 3.0, all of their system RPMs are in /vz/template/debian-3.0, and they are using version 20030822 of that debian 3.0 template. They’ve also got additional packages installed (mod_perl, mod_ssl, etc); those are also found under /vz/template.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Edits needed to run java ===&lt;br /&gt;
&lt;br /&gt;
Edits needed to run java:&lt;br /&gt;
&lt;br /&gt;
When we first created the VEs, the default setting for privvmpages was 93000:94000 ... which was high enough that most people never had problems ... however, you can’t run java or the jdk or tomcat or anything java-related with that setting.  We have found that with privvmpages set to 610000:615000, java runs just fine.  That is now the default setting. It is exceedingly rare that anyone needs it higher than that, although we have seen it once or twice.&lt;br /&gt;
&lt;br /&gt;
Any problems with java at all - the first thing you need to do is see if the failcnt has risen for privvmpages.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzctl start 160&lt;br /&gt;
Starting VE ...&lt;br /&gt;
vzquota : (error) Quota on syscall for 160: Device or resource busy&lt;br /&gt;
Running vzquota on failed for VE 160 [3]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This happens when your pwd is _in_ their private directory - you can&#039;t start the ve until you cd out of it.&lt;br /&gt;
&lt;br /&gt;
People seem to have trouble with php if they are clueless newbies.  Here are two common problems/solutions:&lt;br /&gt;
&lt;br /&gt;
no... but i figured it out myself. problem was the php.ini file that came&lt;br /&gt;
vanilla with the account was not configured to work with apache (the&lt;br /&gt;
ENGINE directive was set to off).&lt;br /&gt;
&lt;br /&gt;
everything else seems fine now.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
the problem was in the php.ini file.  I noticed that is wasnt showing&lt;br /&gt;
the code when it was in an html file so I looked at the php.ini file&lt;br /&gt;
and had to change it so it recognized &amp;lt;? tags aswell as &amp;lt;?php tags.&lt;br /&gt;
&lt;br /&gt;
Also, make sure this is added to httpd.conf:&lt;br /&gt;
    AddType application/x-httpd-php .php&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Set time zone ===&lt;br /&gt;
&lt;br /&gt;
You can change the timezone by doing this:&lt;br /&gt;
&lt;br /&gt;
 ln -sf /usr/share/zoneinfo/&amp;lt;zone&amp;gt; /etc/localtime&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;zone&amp;gt; is the zone you want in the /usr/share/zoneinfo/ directory.&lt;br /&gt;
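The same symlink dance can be rehearsed against a scratch directory first (a fake zoneinfo tree, so nothing touches the real /etc/localtime):&lt;br /&gt;

```shell
# Rehearse the timezone symlink against a throwaway tree.
tmp=$(mktemp -d)
mkdir -p "$tmp/zoneinfo/America"
touch "$tmp/zoneinfo/America/Los_Angeles"
ln -sf "$tmp/zoneinfo/America/Los_Angeles" "$tmp/localtime"   # -f replaces any existing link
readlink "$tmp/localtime"                 # prints the zone file path
```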
&lt;br /&gt;
----&lt;br /&gt;
=== Failing shm_open calls ===&lt;br /&gt;
&lt;br /&gt;
Failing shm_open calls:&lt;br /&gt;
&lt;br /&gt;
First, check whether /dev/shm is mounted inside the VE.&lt;br /&gt;
The &#039;cat /proc/mounts&#039; command should show something like this:&lt;br /&gt;
 tmpfs /dev/shm tmpfs rw 0 0&lt;br /&gt;
&lt;br /&gt;
If /dev/shm is not mounted, there are two ways to solve the issue:&lt;br /&gt;
1. execute the following command inside the VE (doesn&#039;t require a VE reboot):&lt;br /&gt;
 mount -t tmpfs none /dev/shm&lt;br /&gt;
2. add the following line to /etc/fstab inside the VE and reboot it:&lt;br /&gt;
 tmpfs         /dev/shm        tmpfs           defaults        0 0&lt;br /&gt;
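The /proc/mounts check can be scripted; here it runs against a sample mounts table instead of the live file:&lt;br /&gt;

```shell
# Decide whether /dev/shm is mounted by scanning a mounts table.
# The printf data stands in for `cat /proc/mounts` inside the VE.
mounts=$(printf '/dev/vzfs / vzfs rw 0 0\ntmpfs /dev/shm tmpfs rw 0 0\n')
if echo "$mounts" | grep -q '^tmpfs /dev/shm tmpfs'; then
    echo "/dev/shm is mounted"
else
    echo "need: mount -t tmpfs none /dev/shm"
fi
```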
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You can have a ve mounted but not running.&lt;br /&gt;
Just:&lt;br /&gt;
 vzctl mount &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Debian can&#039;t get on the network ===&lt;br /&gt;
When a debian sys can’t get on the network, and you try:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set 1046 --ipadd 69.55.227.117&lt;br /&gt;
Adding IP address(es): 69.55.227.117&lt;br /&gt;
Failed to bring up lo.&lt;br /&gt;
Failed to bring up venet0.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They probably removed the iproute package, which must be the one from swsoft. To restore it:&lt;br /&gt;
&amp;lt;pre&amp;gt;# dpkg -i --veid=1046 --admindir=/vz1/private/1046/root/var/lib/dpkg --instdir=/vz1/private/1046/root/ /vz/template/debian-3.0/iproute_20010824-8_i386.vz.deb&lt;br /&gt;
(Reading database ... 16007 files and directories currently installed.)&lt;br /&gt;
Preparing to replace iproute 20010824-8 (using .../iproute_20010824-8_i386.vz.deb) ...&lt;br /&gt;
Unpacking replacement iproute ...&lt;br /&gt;
Setting up iproute (20010824-8) ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then restart their VE.&lt;br /&gt;
&lt;br /&gt;
=== CentOS can&#039;t get on the network ===&lt;br /&gt;
&lt;br /&gt;
When a CentOS system was running but can&#039;t get on the network, they probably updated iproute via yum.&lt;br /&gt;
&lt;br /&gt;
Copy an older version of the iproute rpm to their VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cp -p /root/iproute/iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm /vz2/private/&amp;lt;1527&amp;gt;/fs/root&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enter their VE, install the rpm, restart the network, and prevent yum from updating iproute.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ve &amp;lt;1527&amp;gt;&lt;br /&gt;
rpm -i --force iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm&lt;br /&gt;
rpm -q iproute&lt;br /&gt;
rpm -e &amp;lt;new iproute rpm&amp;gt;&lt;br /&gt;
service network restart&lt;br /&gt;
echo &amp;quot;exclude=iproute&amp;quot; &amp;gt;&amp;gt;/etc/yum/ip-exclude&lt;br /&gt;
echo &amp;quot;include=/etc/yum/ip-exclude&amp;quot; &amp;gt;&amp;gt;/etc/yum.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
In a VE I do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /&lt;br /&gt;
du -h .&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and get: 483M    .&lt;br /&gt;
&lt;br /&gt;
I do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;bash-2.05a# df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  2.3G  1.7G  56% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
How can this be?&lt;br /&gt;
&lt;br /&gt;
The quota file may have been corrupted somehow. Try:&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl stop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota drop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota init &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzctl start &amp;lt;VEID&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
How to stop vz from starting after reboot:&lt;br /&gt;
&lt;br /&gt;
 VIRTUOZZO=no &lt;br /&gt;
in &lt;br /&gt;
 /etc/sysconfig/vz&lt;br /&gt;
&lt;br /&gt;
To start: &lt;br /&gt;
 service vz start&lt;br /&gt;
(after setting VIRTUOZZO=yes in /etc/sysconfig/vz)&lt;br /&gt;
&lt;br /&gt;
service vz restart will do a kind of &#039;soft reboot&#039; -- restarting all&lt;br /&gt;
VPSes and reloading modules without rebooting the node&lt;br /&gt;
&lt;br /&gt;
If you need to shut down all VPSes very quickly, run killall -9 init&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Postfix tip:&lt;br /&gt;
&lt;br /&gt;
You may want to tweak this setting: default_process_limit=10&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* The uptime/load that vz users see isn’t what we see- it&#039;s their own load&lt;br /&gt;
* Vz overwrites resolv.conf on every boot with what’s in the ve config file. It also overwrites the hostname&lt;br /&gt;
* Quotas in vz: set Gid_limit in the ve config file and restart the ve. The errors leading up to this will be &#039;out of files&#039;, with no inode or space issues. Set it to 2000 in the conf and restart the ve&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
== vm ==&lt;br /&gt;
&lt;br /&gt;
To create a new Linux VPS&lt;br /&gt;
&lt;br /&gt;
* get an IP address from mgmt screen&lt;br /&gt;
* run df -h to figure out which disk has enough space&lt;br /&gt;
* run vm with no arguments to get syntax and exact template name&lt;br /&gt;
* use col0xxxx for hostname if they don’t give you a hostname&lt;br /&gt;
* copy over dir, veid, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
 vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; )&lt;br /&gt;
&lt;br /&gt;
== disable ==&lt;br /&gt;
To disable a customer’s Virtuozzo VPS server:&lt;br /&gt;
&lt;br /&gt;
* stop the server&lt;br /&gt;
  vzctl stop &amp;lt;veid&amp;gt;&lt;br /&gt;
* prevent restart on server reboot and user restart via Control Panel&lt;br /&gt;
  vzctl set &amp;lt;veid&amp;gt; --disabled=yes --offline_management=no --save&lt;br /&gt;
&lt;br /&gt;
== cancelve ==&lt;br /&gt;
When a customer cancels all service.&lt;br /&gt;
&lt;br /&gt;
 cancelve &amp;lt;veid&amp;gt;&lt;br /&gt;
this will:&lt;br /&gt;
* stop the ve&lt;br /&gt;
* check for backups (offer to remove them from the backup server and the backup.config)&lt;br /&gt;
* rename the private dir&lt;br /&gt;
* check for PTR records, providing the commands to reset them to default&lt;br /&gt;
* rename the ve’s config&lt;br /&gt;
* remind you to remove firewall rules&lt;br /&gt;
* remind you to remove DNS entries&lt;br /&gt;
&lt;br /&gt;
== ipadd ==&lt;br /&gt;
 ipadd  &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
adds IP(s) to a ve&lt;br /&gt;
&lt;br /&gt;
== ipdel ==&lt;br /&gt;
 ipdel &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
removes IP(s) from a ve&lt;br /&gt;
&lt;br /&gt;
== vc ==&lt;br /&gt;
 vc &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
display the contents of the vz config file for the &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;cat /vzconf/&amp;lt;veid&amp;gt;.conf&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vl ==&lt;br /&gt;
 vl&lt;br /&gt;
&lt;br /&gt;
List all virtual environments running on this machine.&lt;br /&gt;
&lt;br /&gt;
Displays a list of VE numbers, one per line (ostensibly for use in a for loop).&lt;br /&gt;
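The one-VEID-per-line output is what makes vl loop-friendly. Here is a sketch with a mock vl standing in for the real command (which isn’t assumed to be available outside a virt node):&lt;br /&gt;

```shell
# Mock of vl: prints one VEID per line, like the real tool.
vl() {
    printf '134\n16027\n16055\n'
}

# Typical pattern: run a command against every VE on the machine.
for veid in $(vl); do
    echo "would run: vzctl status $veid"
done
```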
&lt;br /&gt;
== vp ==&lt;br /&gt;
 vp &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
show processes running under this &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzps auxww -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vpe ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vpe &amp;lt;veid&amp;gt; &lt;br /&gt;
this allowed you to do a vp when a ve was running out of control; deprecated since vp operates outside the VPS. The equivalent of:&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set &amp;lt;veid&amp;gt; --kmemsize 2100000:2200000&lt;br /&gt;
vzctl exec &amp;lt;veid&amp;gt; ps auxw&lt;br /&gt;
vzctl set &amp;lt;veid&amp;gt; --kmemsize (ve’s orig lvalue):(ve’s orig hvalue)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vt ==&lt;br /&gt;
 vt &amp;lt;veid&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vztop -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vr ==&lt;br /&gt;
 vr &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart a &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl stop &amp;lt;veid&amp;gt;; vzctl start &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
You can run this even if the ve is down - the stop command will just fail&lt;br /&gt;
&lt;br /&gt;
== vs ==&lt;br /&gt;
 vs [veid]&lt;br /&gt;
displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve configured on the system (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;)&lt;br /&gt;
If passed an argument, gives the status for just that ve. &lt;br /&gt;
A running system looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 16066 exist mounted running&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running (but does exist) looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 9990 exist unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running and doesn’t exist looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 421 deleted unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs2 ==&lt;br /&gt;
 vs2 [veid]&lt;br /&gt;
this is similar to vs in that it displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve,&lt;br /&gt;
but the difference is that its list comes from doing an ls on the data dirs. This is meant to catch&lt;br /&gt;
the rare case where a ve exists on disk but is not configured.&lt;br /&gt;
&lt;br /&gt;
== vw ==&lt;br /&gt;
 vw [veid]&lt;br /&gt;
displays the output of ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;’ (the equivalent of &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; w&amp;lt;/tt&amp;gt;) for each configured ve (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;). Useful for determining which ve is contributing to a heavily-loaded system.&lt;br /&gt;
If passed an argument, gives ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;‘ output for just that ve. &lt;br /&gt;
Ex:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt2 etc]# vw&lt;br /&gt;
134&lt;br /&gt;
 10:52pm  up 79 days,  6:14,  0 users,  load average: 0.02, 0.02, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16027&lt;br /&gt;
  2:52pm  up 7 days, 19:54,  0 users,  load average: 0.00, 0.00, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16055&lt;br /&gt;
  2:52pm  up 79 days,  6:38,  0 users,  load average: 0.00, 0.04, 0.07&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vwe ==&lt;br /&gt;
 vwe [constraint]&lt;br /&gt;
just like &amp;lt;tt&amp;gt;vw&amp;lt;/tt&amp;gt;, but takes a constraint as an argument and only shows ve’s with loads &amp;gt;= the constraint provided. If no constraint is provided, 1 is used by default&lt;br /&gt;
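A minimal sketch of the load filtering vwe is described as doing (the awk expression and sample line are illustrative, not vwe’s actual implementation):&lt;br /&gt;

```shell
# Keep only uptime lines whose 1-minute load average meets a threshold.
threshold=1
w_line=' 10:52pm  up 79 days,  6:14,  2 users,  load average: 1.52, 0.80, 0.40'

# Split on "load average: " and compare the first (1-minute) value.
over=$(echo "$w_line" | awk -v min="$threshold" -F'load average: ' \
    '{ split($2, l, ", "); if (l[1] + 0 >= min) print }')
echo "$over"
```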
&lt;br /&gt;
== vzs ==&lt;br /&gt;
 vzs [veid]&lt;br /&gt;
displays the beancounter status for all ve’s, or a particular ve if an argument is passed&lt;br /&gt;
&lt;br /&gt;
== ve ==&lt;br /&gt;
 ve &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
enter a &amp;lt;veid&amp;gt; as root.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl enter &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vx ==&lt;br /&gt;
 vx &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&lt;br /&gt;
&lt;br /&gt;
execute a single command in the &amp;lt;veid&amp;gt; environment.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;/usr/sbin/vzctl exec &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs [count]&lt;br /&gt;
a script which outputs a continuous report (or a set number of reports, if a count is passed) of processes stuck in the D state and which VPSs those procs belong to.&lt;br /&gt;
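The core check dprocs performs can be sketched with awk over ps-style output. The sample data below is made up for illustration; on a real node you would feed it live ps/vzps output instead:&lt;br /&gt;

```shell
# Sample ps-style output (STAT, PID, COMMAND); D means uninterruptible sleep.
ps_sample='STAT   PID COMMAND
S      612 sshd
D     4821 rsync
R     5010 httpd
D     5177 kjournald'

# Print the PID and command of every process stuck in the D state.
echo "$ps_sample" | awk 'NR > 1 && $1 ~ /^D/ { print $2, $3 }'
```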
&lt;br /&gt;
== setmem ==&lt;br /&gt;
 setmem VEID &amp;lt;256|512|768|1024|1280|1536|2048&amp;gt; [burst GB]&lt;br /&gt;
adjusts the memory resources for the VE. If this is on VZ ver &amp;gt;4, you will also be prompted to provide a burst ram amount. Sample usage:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# setmem 21671 1024 4&lt;br /&gt;
/usr/sbin/vzctl set 21671 --ram 1024m --swap 3072m --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
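The arithmetic implied by the sample output above appears to be swap = burst_gb*1024 - ram_mb. A sketch of that calculation (this is an inference from the one example, not confirmed from the setmem script itself):&lt;br /&gt;

```shell
# Inferred setmem arithmetic: burst is the total (ram + swap) in GB,
# so swap in MB is the burst minus the ram. Values from the sample above.
ram_mb=1024
burst_gb=4
swap_mb=$(( burst_gb * 1024 - ram_mb ))
echo "vzctl set 21671 --ram ${ram_mb}m --swap ${swap_mb}m --save"
```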
&lt;br /&gt;
== afacheck.sh ==&lt;br /&gt;
 afacheck.sh&lt;br /&gt;
displays the health/status of containers and mirrors on an Adaptec card (currently quar1, tempvirt1-2, virt9, virt10)- all others are LSI&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update virt scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be set up as a cronjob to watch processes on a virt when the load&lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== findbackuppigs.pl ==&lt;br /&gt;
 findbackuppigs.pl&lt;br /&gt;
looks for files larger than 50MB which customers have asked us to backup. Emails matches&lt;br /&gt;
to linux@johncompanies.com&lt;br /&gt;
&lt;br /&gt;
== gatherlinux.pl ==&lt;br /&gt;
 gatherlinux.pl&lt;br /&gt;
gathers up data about configured ve’s and writes it to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficgather.pl ==&lt;br /&gt;
 linuxtrafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by ve to support@johncompanies.com and payments@johncompanies.com.&lt;br /&gt;
Optional arguments are year and month (must be in the past). If not passed, assumes last month. Relies on &lt;br /&gt;
traffic logs created by netstatreset and netstatbackup&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch.pl ==&lt;br /&gt;
 linuxtrafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo versions &amp;lt;= 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch2.pl ==&lt;br /&gt;
 linuxtrafficwatch2.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo version 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg  - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== mb (linux) ==&lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== migrate ==&lt;br /&gt;
 migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; - a util to seamlessly move a ve from one host to another. This wrapper was written because Virtuozzo 2.6 had a bug where the ve’s ip(s) on the src system were not properly removed from arp/route tables; this script mitigates that. Since it makes multiple ssh connections to the target host, it’s a good idea to put the pub key for the src system in the authorized_keys file on the target host. In addition, it emails ve owners when their migration starts and stops (if they place email addresses in a file on their system: /migrate_notify). To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
== migrateonline ==&lt;br /&gt;
 migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is the same as migrate but will migrate a ve in &amp;lt;tt&amp;gt;--online&amp;lt;/tt&amp;gt; mode, which means it won’t be shut down at the end of the migration. This only works when migrating ve’s between 2 machines running a 2.6 kernel (currently tempvirt1-2, virt16-19, virt12). If you get an error that the machine you’re trying to migrate to has a different CPU or features, etc., then you have to edit the file and add the -f switch to the vzmigrate line- you can basically ignore this kind of warning (but never ignore a warning about missing templates on the destination node). NOTE: this edit (if made to migrateonline) will be overwritten by the base script during each night’s backup.&lt;br /&gt;
&lt;br /&gt;
== netstatbackup ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup &lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== netstatbackup2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== netstatreset ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset&lt;br /&gt;
writes traffic count data to a logfile and resets counters to 0. Works on virtuozzo versions 2.5.x &lt;br /&gt;
&lt;br /&gt;
== netstatreset2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatchlinux ==&lt;br /&gt;
 orphanedbackupwatchlinux &lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to &lt;br /&gt;
delete them&lt;br /&gt;
&lt;br /&gt;
== rsync.backup (linux) ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== startvirt.pl ==&lt;br /&gt;
 startvirt.pl&lt;br /&gt;
forks off start ve commands - keeps 6 running at a time. This is not to be used on systems where fastboot is enabled, as it circumvents the benefit of the fastboot. The script will occasionally not exit gracefully and will continue to use up CPU, so it should be watched. Also, don’t exit from the script till you’re sure all ve’s are started - if you do, you need to start them manually and may have to free up locks. On some systems startvirt doesn’t exit cleanly and you have to ^C out of it. Be careful though - doing so can leave some VE’s in an odd bootup state and you may need to ‘vr’ them manually. You should check which ve’s aren’t running and/or confirm all have started when ^C’ing out of startvirt.&lt;br /&gt;
&lt;br /&gt;
== taskdone (linux) ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was &lt;br /&gt;
executed as the subject&lt;br /&gt;
&lt;br /&gt;
== vb (linux) ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/sbin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vemakeXX ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemakerh9 &lt;br /&gt;
ve create script for RH9 (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian30 &lt;br /&gt;
ve create script for debian 3.0 (Woody) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian31 &lt;br /&gt;
ve create script for debian 3.1 (Sarge) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian40 &lt;br /&gt;
ve create script for debian 4.0 (Etch) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakefedora, vemakefedora2, vemakefedora4, vemakefedora5, vemakefedora6, vemakefedora7&lt;br /&gt;
ve create script for fedora core 1, 2, 4, 5, 6, 7 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakecentos3, vemakecentos4&lt;br /&gt;
ve create script for centos 3, 4 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakesuse, vemakesuse93, vemakesuse100&lt;br /&gt;
ve create script for suse 9.2, 9.3, 10.0 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakeubuntu5, vemakeubuntu606, vemakeubuntu610, vemakeubuntu704&lt;br /&gt;
ve create script for ubuntu 5.10, 6.06, 6.10, 7.04 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
== vemove ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemove &amp;lt;veid&amp;gt; &amp;lt;target_ip&amp;gt; &amp;lt;/vz/private/123&amp;gt;&lt;br /&gt;
this script simplifies the old way of moving ve’s from one system to another - in short moving a ve to or from a virt running virtuozzo &amp;lt; 2.6.x&lt;br /&gt;
It’s the equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;tar cfpP - &amp;lt;veid&amp;gt; --ignore-failed-read | (ssh -2 -c arcfour &amp;lt;target_ip&amp;gt; &amp;quot;split - -b 1024m &amp;lt;/vz/private/123&amp;gt;.tar&amp;quot; )&amp;lt;/tt&amp;gt;&lt;br /&gt;
This should only be used if migrate/vzmigrate can’t be used.&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog ==&lt;br /&gt;
 vim.watchdog &lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu. Works on virtuozzo versions 2.5.x&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog2 ==&lt;br /&gt;
 vim.watchdog2&lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu.&lt;br /&gt;
Works on virtuozzo versions 2.6.x.&lt;br /&gt;
&lt;br /&gt;
== vzmigrate ==&lt;br /&gt;
 vzmigrate &amp;lt;target_ip&amp;gt; -r no &amp;lt;veid&amp;gt;:[dst veid]:[dst /vzX/private/veid]:[dst /vzX/root/veid]&lt;br /&gt;
(this is the raw command “wrapped” by migrate/migrateonline) this will seamlessly move a ve from one host to another. The ve will run for the duration of the migration till the very end when it’s shut down, ip moved and started up on the target system. The filesystem on the src will remain. This should be watched – occasionally the move will timeout and leave the system shut down. If target private and root aren’t specified it just puts it in /vz. Only works when both systems are running virtuozzo 2.6.x&lt;br /&gt;
&lt;br /&gt;
== vztrafdump.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions &amp;lt;= 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== vztrafdump2.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump2.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== addtun ==&lt;br /&gt;
 addtun &amp;lt;veid&amp;gt;&lt;br /&gt;
Adds a tun device to a ve.&lt;br /&gt;
&lt;br /&gt;
== bwcap ==&lt;br /&gt;
 bwcap &amp;lt;veid&amp;gt; &amp;lt;kbps&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;bwcap 1234 512&amp;lt;/tt&amp;gt;&lt;br /&gt;
Caps a VE’s bandwidth to the amount given&lt;br /&gt;
&lt;br /&gt;
== setdisk ==&lt;br /&gt;
 setdisk &amp;lt;veid&amp;gt; &amp;lt;diskspace in GB&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;setdisk 1234 5&amp;lt;/tt&amp;gt;&lt;br /&gt;
Gives a VE a given amount of disk space&lt;br /&gt;
&lt;br /&gt;
== vdf ==&lt;br /&gt;
 vdf &amp;lt;veid&amp;gt; &lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; df -h&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vdff ==&lt;br /&gt;
 vdff&lt;br /&gt;
runs a (condensed) vdf for all ve’s in your pwd (must be run from /vz/private, /vz1/private, etc)&lt;br /&gt;
&lt;br /&gt;
== mvbackups ==&lt;br /&gt;
 mvbackups &amp;lt;veid&amp;gt; &amp;lt;target_machine&amp;gt; (virt1) &amp;lt;target_dir&amp;gt; (vz1)&lt;br /&gt;
moves backups from one location to another on the backup server, provides you with the option to remove entries from the current backup.config, and gives a simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== checkquota ==&lt;br /&gt;
 checkquota&lt;br /&gt;
for all the ve’s in the cwd (run from /vz/private, /vz1/private, etc) reports what vz quota says they’re using and what the actual usage is (as reported by du)&lt;br /&gt;
&lt;br /&gt;
== clearquota ==&lt;br /&gt;
 clearquota &amp;lt;veid&amp;gt;&lt;br /&gt;
Recalculates a ve’s quota, prints out the usage before and after. The equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;vdf &amp;lt;veid&amp;gt;; v stop &amp;lt;veid&amp;gt;; vzquota drop &amp;lt;veid&amp;gt;; v start &amp;lt;veid&amp;gt;; vdf &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs&lt;br /&gt;
Sometimes the servers have a large number of processes stuck in the D state- this script shows (every 3 secs) which VE’s have D procs, which procs&lt;br /&gt;
are stuck, and a running average of the top “offenders”&lt;br /&gt;
&lt;br /&gt;
== vzstat ==&lt;br /&gt;
 vzstat&lt;br /&gt;
sort of like top for VZ. Sort VEs by CPU usage by pressing the &#039;o&#039; and then &#039;c&#039; keys&lt;br /&gt;
&lt;br /&gt;
== stopvirt ==&lt;br /&gt;
 stopvirt&lt;br /&gt;
will stop VEs as fast as it can, 6 at a time. May not exit when complete so you should watch [[#vzstat|vzstat]] in another window.&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=New_Signups&amp;diff=2221</id>
		<title>New Signups</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=New_Signups&amp;diff=2221"/>
		<updated>2020-04-02T22:20:45Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Dedicated Server Install Checklist */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
TODO- review&lt;br /&gt;
&lt;br /&gt;
= Open source discount signups =&lt;br /&gt;
&lt;br /&gt;
URLS to give to customers:&lt;br /&gt;
https://secure.johncompanies.com/signup/step1.html?svc=bsd-oss&lt;br /&gt;
&lt;br /&gt;
https://secure.johncompanies.com/signup/step1.html?svc=linux-oss&lt;br /&gt;
&lt;br /&gt;
= New Signups = &lt;br /&gt;
New customers sign up for service on our web based forms at&lt;br /&gt;
www.johncompanies.com &lt;br /&gt;
&lt;br /&gt;
When a signup occurs an email is sent to support@johncompanies.com &lt;br /&gt;
and signups get written to two files:&lt;br /&gt;
&lt;br /&gt;
/usr/local/www/jc_pub/data/pending&lt;br /&gt;
&lt;br /&gt;
and&lt;br /&gt;
&lt;br /&gt;
/usr/local/www/jc_pub/data/log/log.pending&lt;br /&gt;
&lt;br /&gt;
The second file is simply a running log of signups, and should not be used for anything.  The only time it is handy is if you are editing the pending file, then save it, and are told that the file has changed since you started editing it.  This means that someone signed up while you were editing the pending file.  What you should do is force the save (thus losing their signup) and then cut and paste the lost signup from /htdocs/colocation/data/log/log.pending.&lt;br /&gt;
&lt;br /&gt;
Lines in the pending file look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;2005-11-08;newcastle;Frederick Wilson;;1405 South Adams;Fort Worth;Texas;76104;US;referred by nmrc.org;LM-1;2;Limit 10;1;207.13.31.48;106&lt;br /&gt;
vaifan@airmail.net;817-798-8637;on;on;&lt;br /&gt;
new;add2;;;replace;replace;preserve;elaine.commadev.com&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The fields are defined as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;signup date;hostname;last, first;company; address;city;state;zip;country;referred by;package id;bandwidth overage option;if gobut is selected, $ amount to limit is listed here;payment method;source ip;template id;&lt;br /&gt;
email;phone #;admin;billing;alt&lt;br /&gt;
new/replace system;create new account/add to existing;hostname of server being replaced;new IP pref;update/merge contacts;update/merge owner info;update/preserve traffic overage;&amp;lt;/pre&amp;gt;&lt;br /&gt;
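As a sketch, a pending record’s first line can be split on semicolons with cut. The field numbers follow the definitions above, using the sample line shown earlier:&lt;br /&gt;

```shell
# First line of the sample pending record shown above.
line='2005-11-08;newcastle;Frederick Wilson;;1405 South Adams;Fort Worth;Texas;76104;US;referred by nmrc.org;LM-1;2;Limit 10;1;207.13.31.48;106'

# Per the field list: 1 = signup date, 2 = hostname, 11 = package id.
signup_date=$(echo "$line" | cut -d';' -f1)
host=$(echo "$line" | cut -d';' -f2)
package_id=$(echo "$line" | cut -d';' -f11)
echo "date=$signup_date host=$host package=$package_id"
```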
 &lt;br /&gt;
When a customer signs up an email will be sent to support@johncompanies.com containing the log entry in the format above&lt;br /&gt;
&lt;br /&gt;
The process for setting up a new customer is as follows:&lt;br /&gt;
&lt;br /&gt;
1. Go to the pending customers screen (choose Mgmt site -&amp;gt; New Signups). New customers will be at the bottom of the &amp;quot;VPS Signups&amp;quot; and &amp;quot;Colo Signups&amp;quot; sections. If they’ve paid automatically (via PayPal) the word “PAID” will appear in the “pmt status” column. The other thing you’ll see in this column is “paid- not cleared”- this means they paid with eCheck (via PayPal) and the check has not cleared. This normally takes 3 biz. days and we wait till it clears and shows “PAID” before setting up the server. Customers who pay with a credit card will not appear as &amp;quot;PAID&amp;quot; here. In that case, an email will be sent to support@johncompanies.com indicating a payment has been made (or failed). Once the payment has been paid/cleared, click “process….” &lt;br /&gt;
&lt;br /&gt;
Before setting someone up, you need to decide whether the signup is fraudulent. Examples of fraud can be found in  /usr/local/www/jc_pub/data/fraud-examples &lt;br /&gt;
Generally, &lt;br /&gt;
* NOT FRAUD = non-anonymous email accounts that match who they are (dan@brockman.com, where the person is Dan  Brockman)&lt;br /&gt;
* NOT FRAUD = referred by a customer that actually exists or by kuro5hin&lt;br /&gt;
* MAYBE FRAUD = referred by ‘web search’ or ‘google’&lt;br /&gt;
* MAYBE FRAUD = generic sounding address&lt;br /&gt;
* MAYBE FRAUD = IP address is out of country and address is in US (use dnsstuff link on main signups screen to lookup where IP is)&lt;br /&gt;
* FRAUD = they don’t fill in &#039;referred by&#039; and/or a hostname, and/or leave the default answer (&#039;stop&#039;) for the bandwidth question&lt;br /&gt;
* FRAUD = multiple orders each from the same IP and/or using same email address&lt;br /&gt;
* FRAUD = no hostname or hacker-sounding hostname: 3v1l0n3 (evilone)&lt;br /&gt;
&lt;br /&gt;
If you’re ever in doubt, call the card owner using the phone number given at signup or call the bank number provided for the card. Never use the email as it’s likely the thief’s and not the cardholder’s.&lt;br /&gt;
&lt;br /&gt;
Almost all the info needed is pre-entered into this screen, but some fields will need attention:&lt;br /&gt;
&lt;br /&gt;
* System: The system is already selected based on the template/OS the customer selected. &lt;br /&gt;
* Directory: information provided for you to enter once the system is created. (leave blank for managedcolo)&lt;br /&gt;
* Disk: indicates how much disk space the server should have. (leave blank for managedcolo)&lt;br /&gt;
* Hostname: self-explanatory. &lt;br /&gt;
* veid: (linux systems only) should be filled in with the significant digits of the customer ID, ex: col01340 = veid 1340. &lt;br /&gt;
* os: indicates which OS the customer wants (feed to linux vm script) &lt;br /&gt;
* ip(s): for linux customers, you should click “ipmap” and scroll down to the first available (green) ip amongst others on the same system. Click on the ip to copy it back to the “ip(s)” field on the form. For FreeBSD customers, the ips available for use are already assigned to the system and should be copied back to this screen once the jail is made (see below). For colo customers, choose an IP appropriate to the data center. i.e. for i2b pick an IP from the 229 block.&lt;br /&gt;
* start date: should reflect the day the system was created. &lt;br /&gt;
* asset tag: colo only&lt;br /&gt;
* password: VPS only&lt;br /&gt;
* cabinet: colo only&lt;br /&gt;
* ats port: colo @ i2b only&lt;br /&gt;
* monitored: should only be checked if the system’s ip/services were supplied to castle to place on the monitor (probes) list. &lt;br /&gt;
&lt;br /&gt;
NOTE: you must use a JavaScript enabled browser to enter new customers otherwise you won’t be given correct options for os templates.&lt;br /&gt;
&lt;br /&gt;
3. create the new system using either [[VPS_Management#jailmake|jailmake]] or [[VPS_Management#vm|vm]] scripts. For posterity, on our older linux systems and older OS&#039;s (pre virt17) we used to use a custom script for each OS version: vemakecentos3  vemakedebian40 vemakedebian30 vemakedebian31 vemakefedora2   vemakefedora6  vemakefedora7  vemakesuse100   vemakesuse93   vemakecentos4   vemakedebian31  vemakefedora4   vemakerh9       vemakeubuntu5   vemakefedora    vemakefedora5   vemakesuse      vemakeubuntu606  vemakeubuntu610 vemakeubuntu704&lt;br /&gt;
&lt;br /&gt;
[[VPS_Management#jailmake|jailmake]] and [[VPS_Management#vm|vm]] both email the new customer their welcome email.&lt;br /&gt;
&lt;br /&gt;
When you are done adding a customer, for both systems: copy back the dir and password (supplied by the VPS make script) into the form. For FreeBSD, copy back the IP which the make script will give back to you.&lt;br /&gt;
&lt;br /&gt;
When you are done filling out all the fields in the pending customer form, click “Activate”. This will create the customer in our database and remove them from the pending list. Nothing is emailed to the customer as a result of this action. If the customer paid via credit card, their info needs to be added manually.&lt;br /&gt;
&lt;br /&gt;
Discussion about choosing IPs ...&lt;br /&gt;
&lt;br /&gt;
When you make a new system, you have to choose what IP to give it.  New FreeBSD servers are configured with a set amount of IPs, and as you add new systems to that freebsd server you can use one of the IPs assigned to the host, but not currently assigned to a VPS.  You can use the js program to see what IPs are available for assigning to new customers. &lt;br /&gt;
&lt;br /&gt;
However, it is not that simple with the linux systems. The linux servers do not get the IPs of their customer systems bound to the actual machine. That is, even on a fully loaded linux system, if you run `ifconfig -a` from the base machine, you only see one IP - the main IP of that system. Further complicating matters is that linux systems can bind multiple IPs - therefore it is not possible to know that the next new linux system should just have the next IP as the last new one that was created.&lt;br /&gt;
&lt;br /&gt;
So, what we do is: new linux systems are simply assigned a starting IP, and no new machine (freebsd or linux) is assigned a base IP anywhere within 92-128 IPs of that IP.  So you have 92-128 IPs to grow with for the linux systems that will live on that machine.&lt;br /&gt;
&lt;br /&gt;
So, you simply start creating systems from that starting point – and additional IPs that those systems need should also be added from that range, and even other linux systems on other virt machines should get new IPs from that block of IPs. To find the block of IPs in use, go through bash history to find the last IP used or assigned to a ve. Then, click the “ipmap” link (or pull up Mgmt. -&amp;gt; Reference -&amp;gt; IP Map).  IPs not already assigned to a machine will show up as green. In general, we want to use IPs which have never been assigned, or were relinquished years ago (or at least 30 days ago – IMPORTANT!!). &lt;br /&gt;
&lt;br /&gt;
If a customer has selected an IP package – i.e. is paying for extra IPs now – then we’re pretty much obliged to assign all those IPs now. Otherwise, we only ever assign 1 IP.&lt;br /&gt;
&lt;br /&gt;
= The Welcome Emails =&lt;br /&gt;
&lt;br /&gt;
[[VPS_Management#jailmake|jailmake]] and [[VPS_Management#vm|vm]] each take an email address as one of their arguments.  This email address is used to send a welcome email to the new user.  However, the welcome email text is not inside the jailmake/vemake scripts, and it is not stored on the jails themselves either.  Here are all the various welcome emails we have:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/local/www/jc_pub/data/welcome-freebsdp (dynamic freebsd welcome email, 7.x 8.x)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-linux (dynamic linux welcome email)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-debian (dynamic debian/ubuntu welcome email)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-fedora (dynamic fedora/centos welcome email)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-freebsd (original, generic freebsd welcome email)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-freebsd6 (for freebsd6)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-freebsd7z (for freebsd7 with zfs DEPRECATED)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
on the main johncompanies server.&lt;br /&gt;
&lt;br /&gt;
When the jailmake script is run, it issues the `fetch` (or wget) command to retrieve it, i.e.&lt;br /&gt;
http://www.johncompanies.com/colocation/data/welcome-freebsdp&lt;br /&gt;
&lt;br /&gt;
and saves it as a temp file, mails it off.&lt;br /&gt;
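The fetch-and-mail step can be sketched roughly as follows (a sketch only; the variable names and substitution details are illustrative, not the actual contents of jailmake/vm):&lt;br /&gt;

```shell
# Illustrative sketch of the welcome-email step; not the real script.
WELCOME=welcome-freebsdp                    # template name from the list above
EMAIL=customer@example.com                  # address passed to jailmake/vm
URL="http://www.johncompanies.com/colocation/data/${WELCOME}"
TMP=$(mktemp)
# On FreeBSD:  fetch -o "$TMP" "$URL"    On Linux:  wget -qO "$TMP" "$URL"
# Dynamic templates then get the IP and generated root password substituted in,
# and the result is mailed off, e.g.:
#   mail -s "Welcome to JohnCompanies" "$EMAIL" < "$TMP"
echo "$URL"
```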
&lt;br /&gt;
Welcome emails not noted as dynamic above are emailed to the specified address with the IP appended at the top. The password to these accounts was/is generic - not very good. Welcome emails noted as dynamic are parsed by the make script to include the IP and a generated, random root password. &lt;br /&gt;
&lt;br /&gt;
IMPORTANT NOTE: when creating an older-OS ve on a virt (this almost never happens - i.e. it’s created using the old-style vemakexxxx command), you must use the support address as the email address: the make script will pull down the dynamic welcome email when it expects the old, generic format, and it won’t look good. When you receive the welcome email in support, format it properly and resend it to the customer.&lt;br /&gt;
&lt;br /&gt;
This means three things:&lt;br /&gt;
&lt;br /&gt;
a) In order for jailmake to work, the johncompanies web server needs to be up and running, and that file needs to exist at that URL&lt;br /&gt;
&lt;br /&gt;
b) If you want to edit the welcome email, you only need to edit it on the web server, in one place&lt;br /&gt;
&lt;br /&gt;
c) If a customer for some reason does not receive the welcome email, then you need to go Mgmt -&amp;gt; Reference -&amp;gt; (file), copy and paste it into an email to them. For the old-generic emails, make sure to add this line to the very top:&lt;br /&gt;
&lt;br /&gt;
IP: (their IP)&lt;br /&gt;
&lt;br /&gt;
If the welcome email does not exist, jailmake will still work, in the sense that it will create the system, but no welcome email will be sent.&lt;br /&gt;
&lt;br /&gt;
= Dedicated Server Setup =&lt;br /&gt;
&lt;br /&gt;
Before starting the OS install you&#039;ll need to know the following pieces of info:&lt;br /&gt;
&lt;br /&gt;
*CID:&lt;br /&gt;
*Customer has multiple colos: Y/N&lt;br /&gt;
*Asset tag: JC-xxxx&lt;br /&gt;
*rack/location: &lt;br /&gt;
*Service/Package (and any deviations to B/W, nfs space, IPs, etc) and price:&lt;br /&gt;
*RAM in system:&lt;br /&gt;
*IPs included in plan:&lt;br /&gt;
*OS (32 vs 64bit variant):&lt;br /&gt;
*Hostname:&lt;br /&gt;
*Disk partitions, including swap space:&lt;br /&gt;
*Number of initial IPs to assign:&lt;br /&gt;
*Timezone:&lt;br /&gt;
&lt;br /&gt;
Some to most of that info will be provided via the new signup page, assuming the customer ordered the server via our order page. The rest comes from the sales/build department.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the server is built, it should be installed in the rack, booted to the BIOS screen and labeled with the asset tag and the customer&#039;s CID (if available/established).&lt;br /&gt;
&lt;br /&gt;
If an [[IPKVM]] is not already attached, you will need to ascertain which one is available (look in ~user/kvm, usually open for editing in the p4 screen of the mailbox window).&lt;br /&gt;
&lt;br /&gt;
Before loading the server, there are a couple of things to do in the BIOS screen:&lt;br /&gt;
# make sure the date and time are set to UTC&lt;br /&gt;
# make sure the server&#039;s power restore action is set to: last state (basically what we&#039;re trying to do here is: if the server has power pulled, when power is restored it should turn back on - assuming it was on when power was pulled. This is what allows our ATS power cycling to work. If this is not set, when the ATS port is turned off and turned back on, the server may not turn back on.)&lt;br /&gt;
# set the boot order to disable network booting or other things that may slow down bootup. &lt;br /&gt;
&lt;br /&gt;
After saving and exiting the BIOS, if the server has a raid card installed, you will see the raid BIOS screen. You should enter this screen and set up a raid mirror, or whatever the customer has requested (if a special request was made, sales will let you know). Usually we use a 3ware card; to enter its config screen, press ALT-3. Use spacebar to select the 2 drives, then tab to &#039;create unit&#039;. Use defaults. Create a raid1 mirror (unless the customer asks for something else). Do not enable write cache, assuming no battery exists. F8 to save and exit.&lt;br /&gt;
&lt;br /&gt;
Reboot and load the OS- follow the instructions for pulling the ISO into the IPKVM and booting to it [[IPKVM|here]].&lt;br /&gt;
&lt;br /&gt;
The install should be for the server version of whatever OS was requested. We typically do not install anything other than an sshd (and ports (tree) for FreeBSD). We don&#039;t install a GUI environment. We do not setup auto updates. We do not encrypt home directories. &lt;br /&gt;
&lt;br /&gt;
When selecting an IP, take into consideration the data center: an install at i2b should use an IP from the block routed to i2b, and likewise for castle. A customer&#039;s package may come with multiple IPs; however, unless they indicate they want more assigned initially (and show good reason for doing so), we only assign 1 IP. If they are to receive more IPs, you will/may need to configure those post-install (depending on the OS). Ideally all IPs they receive are on the same class C block.&lt;br /&gt;
&lt;br /&gt;
When picking a NIC (most of the installs we do are network versions and require the network to download OS components), it&#039;s somewhat of a crap shoot as to which NIC to choose and configure. Usually we try to begin pinging the IP we assign and watch for it to respond immediately after configuring the NIC. If it does not ping, you can:&lt;br /&gt;
# ask the NOC to swap the network cable to the alternate port&lt;br /&gt;
# go back and configure the alternate NIC (you may have to restart the OS install if it doesn&#039;t properly unconfigure the initial NIC, or you can&#039;t change its config to alleviate an IP conflict)&lt;br /&gt;
&lt;br /&gt;
We set DNS: 69.55.229.3, 69.55.225.225 (for a server @ i2b), 69.55.225.225, 69.55.230.3 (for a server @ castle)&lt;br /&gt;
&lt;br /&gt;
We set up a regular user account: &#039;user&#039;, and we set that password to &#039;newrootNNNN&#039; where NNNN is the significant digits of the customer&#039;s CID, i.e. for col01233 the password is &#039;newroot1233&#039;. In FreeBSD, make sure to add &#039;user&#039; to the wheel group (member of). &lt;br /&gt;
If a root password is allowed to be set, we use the same password: &#039;newrootNNNN&#039; &lt;br /&gt;
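The password scheme can be expressed as a one-liner (a sketch; the CID value shown is an example):&lt;br /&gt;

```shell
# Derive the default account password from a CID: col01233 -> newroot1233
cid=col01233
pass="newroot${cid#col0}"   # strip the 'col0' prefix, keeping the significant digits
echo "$pass"
```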
&lt;br /&gt;
Once the OS is installed and you&#039;ve rebooted (after disconnecting the ISO from the KVM) you should confirm all is working: swap, ram, disk partitions, network is as it should be. If you were only able to assign 1 DNS server in setup, please add another to /etc/resolv.conf:&lt;br /&gt;
&lt;br /&gt;
@i2b: 69.55.229.3, 69.55.225.225&lt;br /&gt;
@castle: 69.55.225.225, 69.55.230.3&lt;br /&gt;
&lt;br /&gt;
Assuming they have a 3ware raid card, you will need to transfer in the raid CLI tool and our handy script so the customer may check on the health of their raid array, as instructed/encouraged by the welcome email they will receive.&lt;br /&gt;
&lt;br /&gt;
This package is located on mail:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
fetch http://johncompanies.com/tools/3ware/L64.tgz&lt;br /&gt;
tar xvzf L64.tgz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The version you choose will of course depend on the OS/version you&#039;re installing. You should copy the tarball to the /usr/local/sbin dir (or somewhere in their path) and untar it there. Then run the script included in the tarball called checkraid.sh. If it gives an error, this is likely because on some servers the raid card is recognized as &#039;c1&#039; and on others &#039;c0&#039; or something different. Determine which it is for this server by running:&lt;br /&gt;
 tw_cli info&lt;br /&gt;
&lt;br /&gt;
which will give you output like:&lt;br /&gt;
&amp;lt;pre&amp;gt;Ctl   Model        (V)Ports  Drives   Units   NotOpt  RRate   VRate  BBU&lt;br /&gt;
------------------------------------------------------------------------&lt;br /&gt;
c1    9650SE-8LPML 8         6        1       0       5       1      -&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example it&#039;s &#039;c1&#039; so edit checkraid.sh and change all instances of &#039;c0&#039; to &#039;c1&#039;&lt;br /&gt;
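That edit can be done with sed rather than by hand (a sketch; the checkraid.sh contents below are made up for illustration, not the real script):&lt;br /&gt;

```shell
# Stand-in for the real checkraid.sh, which references the controller as c0:
cat > checkraid.sh <<'EOF'
tw_cli info c0 u0 | grep -q DEGRADED && echo "RAID DEGRADED" | mail -s raid root
EOF
# Rewrite every c0 reference to c1 (keeping a .bak copy of the original):
sed -i.bak 's/c0/c1/g' checkraid.sh
cat checkraid.sh
```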
&lt;br /&gt;
If we&#039;re adding additional IPs (and were not able to do so during initial install), configure as follows:&lt;br /&gt;
&lt;br /&gt;
Centos/Fedora:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 cd /etc/sysconfig/network-scripts/&lt;br /&gt;
 cp -p ifcfg-eth* ~&lt;br /&gt;
 fetch http://johncompanies.com/tools/network/centos/ifcfg-eth.tgz&lt;br /&gt;
 tar xvzf ifcfg-eth.tgz&lt;br /&gt;
&lt;br /&gt;
 vi ifcfg-eth0:&lt;br /&gt;
&lt;br /&gt;
DEVICE=eth0&lt;br /&gt;
TYPE=Ethernet&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
NM_CONTROLLED=yes&lt;br /&gt;
BOOTPROTO=none&lt;br /&gt;
IPADDR=69.55.231.XXX  # change this to main IP address&lt;br /&gt;
PREFIX=24&lt;br /&gt;
GATEWAY=69.55.231.1   # change this to 229.1 if IP addr in other block&lt;br /&gt;
DNS1=69.55.229.3&lt;br /&gt;
DNS2=69.55.225.225&lt;br /&gt;
DNS3=69.55.230.3&lt;br /&gt;
DOMAIN=&amp;quot;johncompanies.com&amp;quot;&lt;br /&gt;
DEFROUTE=yes&lt;br /&gt;
IPV4_FAILURE_FATAL=yes&lt;br /&gt;
IPV6INIT=no&lt;br /&gt;
NAME=&amp;quot;System eth0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  vi ifcfg-eth0:*&lt;br /&gt;
&lt;br /&gt;
DEVICE=eth0:0&lt;br /&gt;
TYPE=Ethernet&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
NM_CONTROLLED=yes&lt;br /&gt;
BOOTPROTO=none&lt;br /&gt;
IPADDR=69.55.231.XXX  # change this to IP address&lt;br /&gt;
PREFIX=24&lt;br /&gt;
IPV4_FAILURE_FATAL=yes&lt;br /&gt;
IPV6INIT=no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Repeat above as needed for each IP address&lt;br /&gt;
&lt;br /&gt;
Remove the un-needed files.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rm ifcfg-eth0:[3-8]  # change numbers to fit how many IP addresses are needed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
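Generating those alias files in a loop keeps them consistent (a sketch mirroring the template above; intended for /etc/sysconfig/network-scripts/, and the addresses here are examples only):&lt;br /&gt;

```shell
# Generate ifcfg-eth0:N alias files, one per additional IP (example addresses).
n=0
for ip in 69.55.231.50 69.55.231.51; do
    cat > "ifcfg-eth0:$n" <<EOF
DEVICE=eth0:$n
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPADDR=$ip
PREFIX=24
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
EOF
    n=$((n + 1))
done
ls ifcfg-eth0:*
```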
&lt;br /&gt;
Debian/Ubuntu:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /etc/network&lt;br /&gt;
vi interfaces&lt;br /&gt;
&lt;br /&gt;
Old file was:&lt;br /&gt;
# The loopback network interface&lt;br /&gt;
auto lo&lt;br /&gt;
iface lo inet loopback&lt;br /&gt;
&lt;br /&gt;
# The primary network interface&lt;br /&gt;
auto eth0&lt;br /&gt;
iface eth0 inet static&lt;br /&gt;
        address 69.55.227.4&lt;br /&gt;
        netmask 255.255.255.0&lt;br /&gt;
        network 69.55.227.0&lt;br /&gt;
        broadcast 69.55.227.255&lt;br /&gt;
        gateway 69.55.227.1&lt;br /&gt;
        # dns-* options are implemented by the resolvconf package, if installed&lt;br /&gt;
        dns-nameservers 69.55.225.225&lt;br /&gt;
        dns-search yeppernet.com&lt;br /&gt;
&lt;br /&gt;
new file is:&lt;br /&gt;
# The loopback network interface&lt;br /&gt;
auto lo&lt;br /&gt;
iface lo inet loopback&lt;br /&gt;
&lt;br /&gt;
# The primary network interface&lt;br /&gt;
auto eth0&lt;br /&gt;
iface eth0 inet static&lt;br /&gt;
        address 69.55.227.44&lt;br /&gt;
        netmask 255.255.255.0&lt;br /&gt;
        network 69.55.227.0&lt;br /&gt;
        broadcast 69.55.227.255&lt;br /&gt;
        gateway 69.55.227.1&lt;br /&gt;
        # dns-* options are implemented by the resolvconf package, if installed&lt;br /&gt;
        dns-nameservers 69.55.225.225&lt;br /&gt;
        dns-search yeppernet.com&lt;br /&gt;
&lt;br /&gt;
auto eth0:0&lt;br /&gt;
iface eth0:0 inet static&lt;br /&gt;
        address 69.55.227.55&lt;br /&gt;
        netmask 255.255.255.0&lt;br /&gt;
        network 69.55.227.0&lt;br /&gt;
        broadcast 69.55.227.255&lt;br /&gt;
        gateway 69.55.227.1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
  /etc/init.d/networking restart&lt;br /&gt;
(make sure you have console in case you screw that up)&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;pre&amp;gt;ifconfig&lt;br /&gt;
eth0      Link encap:Ethernet  HWaddr 00:30:48:28:9d:50&lt;br /&gt;
          inet addr:69.55.227.44  Bcast:69.55.227.255  Mask:255.255.255.0&lt;br /&gt;
          inet6 addr: fe80::230:48ff:fe28:9d50/64 Scope:Link&lt;br /&gt;
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1&lt;br /&gt;
          RX packets:2850149 errors:0 dropped:0 overruns:0 frame:0&lt;br /&gt;
          TX packets:2595566 errors:0 dropped:0 overruns:0 carrier:0&lt;br /&gt;
          collisions:0 txqueuelen:100&lt;br /&gt;
          RX bytes:1193068127 (1.1 GB)  TX bytes:2046984105 (1.9 GB)&lt;br /&gt;
          Base address:0x3000 Memory:fc400000-fc420000&lt;br /&gt;
&lt;br /&gt;
eth0:0    Link encap:Ethernet  HWaddr 00:30:48:28:9d:50&lt;br /&gt;
          inet addr:69.55.227.55  Bcast:69.55.227.255  Mask:255.255.255.0&lt;br /&gt;
          inet6 addr: fe80::230:48ff:fe28:9d50/64 Scope:Link&lt;br /&gt;
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1&lt;br /&gt;
          RX packets:2850149 errors:0 dropped:0 overruns:0 frame:0&lt;br /&gt;
          TX packets:2595566 errors:0 dropped:0 overruns:0 carrier:0&lt;br /&gt;
          collisions:0 txqueuelen:100&lt;br /&gt;
          RX bytes:1193068127 (1.1 GB)  TX bytes:2046984105 (1.9 GB)&lt;br /&gt;
          Base address:0x3000 Memory:fc400000-fc420000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ubuntu 18.04&lt;br /&gt;
&lt;br /&gt;
Ubuntu 18.04 uses netplan instead of /etc/network/interfaces.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vi /etc/netplan/50-cloud-init.yaml&lt;br /&gt;
&lt;br /&gt;
# This file is generated from information provided by&lt;br /&gt;
# the datasource.  Changes to it will not persist across an instance.&lt;br /&gt;
# To disable cloud-init&#039;s network configuration capabilities, write a file&lt;br /&gt;
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:&lt;br /&gt;
# network: {config: disabled}&lt;br /&gt;
network:&lt;br /&gt;
  version: 2&lt;br /&gt;
  renderer: networkd&lt;br /&gt;
  ethernets:&lt;br /&gt;
    enp0s25:&lt;br /&gt;
     dhcp4: no&lt;br /&gt;
     addresses: [69.55.229.26/24]&lt;br /&gt;
     gateway4: 69.55.229.1  &lt;br /&gt;
     nameservers:&lt;br /&gt;
       addresses: [69.55.229.3,8.8.8.8,8.8.4.4]&lt;br /&gt;
&lt;br /&gt;
netplan apply&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The very last thing you should do before logging off the server is to run:&lt;br /&gt;
 history -c&lt;br /&gt;
in every shell you were in so they don&#039;t see what we were doing :)&lt;br /&gt;
&lt;br /&gt;
Assuming this server is connected to an ATS, you&#039;ll want to confirm the server is connected to the port you think it is, and test the functionality - that you can power cycle the port and the server will come back up when power is restored. You should do a test power cycle (via the mgmt or AM-based ATS control) while the server is in POST or BIOS. Basically, just NOT while the OS is running and disks mounted.&lt;br /&gt;
&lt;br /&gt;
You should confirm the switch ports to which the server is connected (it may have a 2nd port for the RMM). While the server is rebooting during your ATS test, you can observe output on the switch console that looks like:&lt;br /&gt;
&amp;lt;pre&amp;gt;.Mar  7 09:47:45 PST: %LINK-3-UPDOWN: Interface FastEthernet0/14, changed state to down&lt;br /&gt;
.Mar  7 09:47:46 PST: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/14, changed state to down&lt;br /&gt;
.Mar  7 09:47:47 PST: %LINK-3-UPDOWN: Interface FastEthernet0/14, changed state to up&lt;br /&gt;
.Mar  7 09:47:48 PST: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/14, changed state to up&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This confirms the server is on port 14.&lt;br /&gt;
&lt;br /&gt;
Take this info (ATS, switch ports plus the asset tag) and:&lt;br /&gt;
# activate (or add) the server. Update the cabinet, ATS and port, asset tag. (Switch port not kept in mgmt)&lt;br /&gt;
# update the [[cabinetmap|cabinet map]] to add the server to the correct location, along with the ATS port, and switch port(s)&lt;br /&gt;
# login to the switch serial console and add the customer&#039;s CID (and hostname if they have mult. servers) to the port(s) description tag. Repeat for the RMM port if they have one:&lt;br /&gt;
&amp;lt;pre&amp;gt;switch-p25#conf t&lt;br /&gt;
Enter configuration commands, one per line.  End with CNTL/Z.&lt;br /&gt;
switch-p25(config)#int fa0/10&lt;br /&gt;
switch-p25(config-if)#des col01233 myhostname&lt;br /&gt;
switch-p25(config)#int fa0/11&lt;br /&gt;
switch-p25(config-if)#des col01233 myhostname (RMM)&lt;br /&gt;
switch-p25(config-if)#end&lt;br /&gt;
switch-p25#&lt;br /&gt;
.Mar  7 11:37:17 PST: %SYS-5-CONFIG_I: Configured from console by console&lt;br /&gt;
switch-p25#wr me&lt;br /&gt;
Building configuration...&lt;br /&gt;
[OK]&lt;br /&gt;
switch-p25#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point it&#039;s safe to hand the server over to the customer. Use the &#039;new colo welcome&#039; paste. Edit for:&lt;br /&gt;
# IP(s)&lt;br /&gt;
# if it&#039;s FreeBSD use the &#039;root&#039; &amp;amp; &#039;user&#039; password line; if it&#039;s Ubuntu use/edit the &#039;root&#039; password line, depending on whether you set the root password, or what you set up for the normal user account&lt;br /&gt;
# the correct DNS depending on which data center they&#039;re at (remove ns3c if the server is at castle)&lt;br /&gt;
# if they have no raid card, remove that section&lt;br /&gt;
# if they have an RMM, include the RMM paste somewhere in the email&lt;br /&gt;
&lt;br /&gt;
If they&#039;re already a customer, you can exclude the info about their AM login info (since they have it already). If they&#039;re not, you&#039;ll need to reset their AM pass to see it.&lt;br /&gt;
&lt;br /&gt;
Lastly, assuming this server was installed for a new customer and the customer was not yet activated, the CID was probably not established and the server has no CID label. You&#039;ll need to make a note to label it (and add the hostname too if they have multiple servers) next time you&#039;re at the data center, or ask the noc staff to do it for you.&lt;br /&gt;
&lt;br /&gt;
== Dedicated Server Install Checklist ==&lt;br /&gt;
&lt;br /&gt;
o Install OS&lt;br /&gt;
&lt;br /&gt;
o Add JCI - adduser, wheel, visudo&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  adduser -u 955 jci&lt;br /&gt;
  passwd jci&lt;br /&gt;
    &#039;&#039;l4mph0st&#039;&#039;&lt;br /&gt;
  adduser -u 1000 col0XXXX&lt;br /&gt;
  passwd col0XXXX           &lt;br /&gt;
    &#039;&#039;col0XXXX&#039;&#039;          (XXXX is their col0XXXX number)&lt;br /&gt;
  vi /etc/group&lt;br /&gt;
     ( add &amp;quot;jci,user&amp;quot; to group wheel and/or sudo )&lt;br /&gt;
  visudo&lt;br /&gt;
     ( enable group wheel to do anything with password )&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
o Install openssh-clients (if not already installed)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  yum -y install openssh-clients&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
o Update OS&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  yum -y update   (for CentOS)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
o checkraid.sh&lt;br /&gt;
  As noted above install our RAID disk checks&lt;br /&gt;
&lt;br /&gt;
o config IP addresses&lt;br /&gt;
&lt;br /&gt;
o clear last logins&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  cp /dev/null /var/log/wtmp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
o clear history&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  cp /dev/null ~/.bash_history&lt;br /&gt;
  history -c&lt;br /&gt;
  shutdown -h now&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
o For a suspected spammer, add rules to firewall2 to limit output on the mail ports, i.e.&lt;br /&gt;
before rule 50&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ipfw add 37 pipe 8 ip from 69.55.231.104/30 to any dst-port 25,465,587&lt;br /&gt;
ipfw add 37 pipe 8 ip from 69.55.231.108    to any dst-port 25,465,587&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
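These rules only throttle traffic if pipe 8 itself has been configured; a hypothetical example is below (the bandwidth figure is an assumption - check what firewall2 actually uses):&lt;br /&gt;

```shell
# Hypothetical dummynet pipe config for the rules above (bandwidth is a guess):
ipfw pipe 8 config bw 64Kbit/s
# Confirm the throttle rules landed before rule 50:
ipfw list | head -20
```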
&lt;br /&gt;
=== After installed in rack ===&lt;br /&gt;
&lt;br /&gt;
  check ssh&lt;br /&gt;
  label switch port&lt;br /&gt;
  update cabinet map&lt;br /&gt;
  update mgmt cabinet number&lt;br /&gt;
&lt;br /&gt;
== RMM Setup ==&lt;br /&gt;
&lt;br /&gt;
The RMM must be configured via the server&#039;s BIOS screen:&lt;br /&gt;
&lt;br /&gt;
Server management &amp;gt; BMC LAN config:&amp;lt;BR&amp;gt;&lt;br /&gt;
down to Intel RMM3 LAN config: &amp;lt;BR&amp;gt;&lt;br /&gt;
IP (static) , mask, g/w&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
user config: &amp;lt;BR&amp;gt;&lt;br /&gt;
disable anonymous. &amp;lt;BR&amp;gt;&lt;br /&gt;
root: set pass to newrootNNNN&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=New_Signups&amp;diff=2220</id>
		<title>New Signups</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=New_Signups&amp;diff=2220"/>
		<updated>2020-03-29T19:01:32Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Dedicated Server Install Checklist */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
TODO- review&lt;br /&gt;
&lt;br /&gt;
= Open source discount signups =&lt;br /&gt;
&lt;br /&gt;
URLS to give to customers:&lt;br /&gt;
https://secure.johncompanies.com/signup/step1.html?svc=bsd-oss&lt;br /&gt;
&lt;br /&gt;
https://secure.johncompanies.com/signup/step1.html?svc=linux-oss&lt;br /&gt;
&lt;br /&gt;
= New Signups = &lt;br /&gt;
New customers sign up for service on our web based forms at&lt;br /&gt;
www.johncompanies.com &lt;br /&gt;
&lt;br /&gt;
When a signup occurs an email is sent to support@johncompanies.com &lt;br /&gt;
and signups get written to two files:&lt;br /&gt;
&lt;br /&gt;
/usr/local/www/jc_pub/data/pending&lt;br /&gt;
&lt;br /&gt;
and&lt;br /&gt;
&lt;br /&gt;
/usr/local/www/jc_pub/data/log/log.pending&lt;br /&gt;
&lt;br /&gt;
The second file is simply a running log of signups, and should not normally be used for anything.  The only time it is handy is if you are editing the pending file, then save it, and are told that the file has changed since you started editing it.  This means that someone signed up while you were editing the pending file.  What you should do is force the save (thus losing their signup) and then cut and paste the lost signup from /usr/local/www/jc_pub/data/log/log.pending.&lt;br /&gt;
&lt;br /&gt;
Lines in the pending file look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;2005-11-08;newcastle;Frederick Wilson;;1405 South Adams;Fort Worth;Texas;76104;US;referred by nmrc.org;LM-1;2;Limit 10;1;207.13.31.48;106&lt;br /&gt;
vaifan@airmail.net;817-798-8637;on;on;&lt;br /&gt;
new;add2;;;replace;replace;preserve;elaine.commadev.com&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The fields are defined as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;signup date;hostname;last, first;company; address;city;state;zip;country;referred by;package id;bandwidth overage option;if gobut is selected, $ amount to limit is listed here;payment method;source ip;template id;&lt;br /&gt;
email;phone #;admin;billing;alt&lt;br /&gt;
new/replace system;create new account/add to existing;hostname of server being replaced;new IP pref;update/merge contacts;update/merge owner info;update/preserve traffic overage;&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
When a customer signs up an email will be sent to support@johncompanies.com containing the log entry in the format above&lt;br /&gt;
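Splitting a record on semicolons per the field definitions above looks like this (a sketch using the sample record; note that a full record spans three lines, and this parses only the first):&lt;br /&gt;

```shell
# Parse the first line of the sample pending record shown above.
line='2005-11-08;newcastle;Frederick Wilson;;1405 South Adams;Fort Worth;Texas;76104;US;referred by nmrc.org;LM-1;2;Limit 10;1;207.13.31.48;106'
IFS=';' read -r sdate host name company addr city state zip country ref pkg bw limit pay srcip tmpl <<EOF
$line
EOF
echo "hostname=$host package=$pkg template=$tmpl"
```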
&lt;br /&gt;
The process for setting up a new customer is as follows:&lt;br /&gt;
&lt;br /&gt;
1. Go to the pending customers screen (choose Mgmt site -&amp;gt; New Signups). New customers will be at the bottom of the &amp;quot;VPS Signups&amp;quot; and &amp;quot;Colo Signups&amp;quot; sections. If they’ve paid automatically (via PayPal) the word “PAID” will appear in the “pmt status” column. The other thing you’ll see in this column is “paid- not cleared”- this means they paid with eCheck (via PayPal) and the check has not cleared. This normally takes 3 business days, and we wait till it clears and shows “PAID” before setting up the server. Customers who pay with a credit card will not appear as &amp;quot;PAID&amp;quot; here; in that case, an email will be sent to support@johncompanies.com indicating a payment has been made (or failed). Once the payment has cleared, click “process….” &lt;br /&gt;
&lt;br /&gt;
Before setting someone up, you need to decide whether the signup is fraudulent. Examples of fraud can be found in  /usr/local/www/jc_pub/data/fraud-examples &lt;br /&gt;
Generally, &lt;br /&gt;
* NOT FRAUD = non-anonymous email accounts that match who they are (dan@brockman.com, where the person is Dan  Brockman)&lt;br /&gt;
* NOT FRAUD = referred by a customer that actually exists or by kuro5hin&lt;br /&gt;
* MAYBE FRAUD = referred by ‘web search’ or ‘google’&lt;br /&gt;
* MAYBE FRAUD = generic sounding address&lt;br /&gt;
* MAYBE FRAUD = IP address is out of country and address is in US (use dnsstuff link on main signups screen to lookup where IP is)&lt;br /&gt;
* FRAUD = they don’t fill in referred-by and/or hostname, and/or leave the bandwidth answer at the default (stop)&lt;br /&gt;
* FRAUD = multiple orders each from the same IP and/or using same email address&lt;br /&gt;
* FRAUD = no hostname or hacker-sounding hostname: 3v1l0n3 (evilone)&lt;br /&gt;
&lt;br /&gt;
If you’re ever in doubt, call the card owner using the phone number given at signup or call the bank number provided for the card. Never use the email as it’s likely the thief’s and not the cardholder’s.&lt;br /&gt;
&lt;br /&gt;
Almost all the info needed is pre-entered into this screen, but some fields will need attention:&lt;br /&gt;
&lt;br /&gt;
* System: The system is already selected based on the template/OS the customer selected. &lt;br /&gt;
* Directory: information provided for you to enter once the system is created. (leave blank for managedcolo)&lt;br /&gt;
* Disk: indicates how much disk space the server should have. (leave blank for managedcolo)&lt;br /&gt;
* Hostname: self-explanatory. &lt;br /&gt;
* veid: (linux systems only) should be filled in with the significant digits of the customer ID, ex: col01340 = veid 1340. &lt;br /&gt;
* os: indicates which OS the customer wants (feed to linux vm script) &lt;br /&gt;
* ip(s): for linux customers, you should click “ipmap” and scroll down to the first available (green) ip amongst others on the same system. Click on the ip to copy it back to the “ip(s)” field on the form. For FreeBSD customers, the ips available for use are already assigned to the system and should be copied back to this screen once the jail is made (see below). For colo customers, choose an IP appropriate to the data center. i.e. for i2b pick an IP from the 229 block.&lt;br /&gt;
* start date: should reflect the day the system was created. &lt;br /&gt;
* asset tag: colo only&lt;br /&gt;
* password: VPS only&lt;br /&gt;
* cabinet: colo only&lt;br /&gt;
* ats port: colo @ i2b only&lt;br /&gt;
* monitored: should only be checked if the system’s ip/services were supplied to castle to place on the monitor (probes) list. &lt;br /&gt;
&lt;br /&gt;
NOTE: you must use a JavaScript enabled browser to enter new customers otherwise you won’t be given correct options for os templates.&lt;br /&gt;
&lt;br /&gt;
3. create the new system using either [[VPS_Management#jailmake|jailmake]] or [[VPS_Management#vm|vm]] scripts. For posterity, on our older linux systems and older OS&#039;s (pre virt17) we used to use a custom script for each OS version: vemakecentos3 vemakecentos4 vemakedebian30 vemakedebian31 vemakedebian40 vemakefedora vemakefedora2 vemakefedora4 vemakefedora5 vemakefedora6 vemakefedora7 vemakerh9 vemakesuse vemakesuse93 vemakesuse100 vemakeubuntu5 vemakeubuntu606 vemakeubuntu610 vemakeubuntu704&lt;br /&gt;
&lt;br /&gt;
[[VPS_Management#jailmake|jailmake]] and [[VPS_Management#vm|vm]] both email the new customer their welcome email.&lt;br /&gt;
&lt;br /&gt;
When you are done adding a customer, for both systems: copy back the dir and password (supplied by the VPS make script) into the form. For FreeBSD, copy back the IP which the make script will give back to you.&lt;br /&gt;
&lt;br /&gt;
When you are done filling out all the fields in the pending customer form, click “Activate”. This will create the customer in our database and remove them from the pending list. Nothing is emailed to the customer as a result of this action. If the customer paid via credit card, their info needs to be added manually.&lt;br /&gt;
&lt;br /&gt;
Discussion about choosing IPs ...&lt;br /&gt;
&lt;br /&gt;
When you make a new system, you have to choose what IP to give it.  New FreeBSD servers are configured with a set number of IPs, and as you add new systems to that freebsd server you can use one of the IPs assigned to the host but not currently assigned to a VPS.  You can use the js program to see what IPs are available for assigning to new customers. &lt;br /&gt;
&lt;br /&gt;
However, it is not that simple with the linux systems. The linux servers do not get the IPs of their customer systems bound to the actual machine. That is, even on a fully loaded linux system, if you run `ifconfig -a` from the base machine, you only see one IP - the main IP of that system. Further complicating matters is that linux systems can bind multiple IPs - therefore it is not possible to know that the next new linux system should just have the next IP as the last new one that was created.&lt;br /&gt;
&lt;br /&gt;
So, what we do is: new linux systems are simply assigned a starting IP, and no new machine (freebsd or linux) is assigned a base IP anywhere within 92-128 IPs of that IP.  So you have 92-128 IPs to grow with for the linux systems that will live on that machine.&lt;br /&gt;
&lt;br /&gt;
So, you simply start creating systems from that starting point, and additional IPs that those systems need should also be added from that range; even other linux systems on other virt machines should get new IPs from that block. To find the block of IPs in use, go thru bash history to find the last IP used or assigned to a ve. Then, click the “ipmap” link (or pull up Mgmt. -&amp;gt; Reference -&amp;gt; IP Map).  IPs not already assigned to a machine will show up as green. In general, we want to use IPs which have never been assigned, or were relinquished years ago (or at least 30 days ago – IMPORTANT!!). &lt;br /&gt;
&lt;br /&gt;
If a customer has selected an IP package (is paying for extra IPs now), then we’re pretty much obliged to assign all those IPs now. Otherwise, we only ever assign 1 IP.&lt;br /&gt;
&lt;br /&gt;
= The Welcome Emails =&lt;br /&gt;
&lt;br /&gt;
[[VPS_Management#jailmake|jailmake]] and [[VPS_Management#vm|vm]] both take an email address as one of their arguments.  This address is used to send a welcome email to the new user.  However, the welcome emails are not inside the jailmake/vemake scripts, and they are not on the jails themselves either.  Here are all the various welcome emails we have:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/local/www/jc_pub/data/welcome-freebsdp (dynamic freebsd welcome email, 7.x 8.x)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-linux (dynamic linux welcome email)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-debian (dynamic debian/ubuntu welcome email)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-fedora (dynamic fedora/centos welcome email)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-freebsd (original, generic freebsd welcome email)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-freebsd6 (for freebsd6)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-freebsd7z (for freebsd7 with zfs DEPRECATED)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
on the main johncompanies server.&lt;br /&gt;
&lt;br /&gt;
When the jailmake script is run, it issues a `fetch` (or wget) command to retrieve it, e.g.&lt;br /&gt;
http://www.johncompanies.com/colocation/data/welcome-freebsdp&lt;br /&gt;
&lt;br /&gt;
and saves it to a temp file, then mails it off.&lt;br /&gt;
&lt;br /&gt;
Welcome emails not noted as dynamic above are emailed, with the IP appended at the top, to the email address specified. The password to these accounts was/is generic (not very good). Welcome emails noted as dynamic are parsed by the make script to include the IP and a generated, random root password. &lt;br /&gt;
&lt;br /&gt;
IMPORTANT NOTE: when creating an older-OS ve on a virt (this almost never happens; i.e. it’s created using the old-style vemakexxxx command), you must use the support address as the email address, because the make script will pull down the dynamic welcome email when it expects to see the old, generic format, and it won’t look good. When you receive the welcome email in support, format it properly and resend it to the customer.&lt;br /&gt;
&lt;br /&gt;
This means three things:&lt;br /&gt;
&lt;br /&gt;
a) In order for jailmake to work, the johncompanies web server needs to be up and running, and that file needs to exist at that URL&lt;br /&gt;
&lt;br /&gt;
b) If you want to edit the welcome email, you only need to edit it on the web server, in one place&lt;br /&gt;
&lt;br /&gt;
c) If a customer for some reason does not receive the welcome email, then you need to go to Mgmt -&amp;gt; Reference -&amp;gt; (file), and copy and paste it into an email to them. For the old-generic emails, make sure to add this line to the very top:&lt;br /&gt;
&lt;br /&gt;
IP: (their IP)&lt;br /&gt;
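&lt;br /&gt;
That manual resend can be scripted; a minimal sketch, where the IP is illustrative and the welcome file is a local stand-in for the copy you grab from Mgmt -&amp;gt; Reference:&lt;br /&gt;

```shell
# prepend the customer's IP to a generic welcome email before resending
IP="69.55.227.44"                                     # illustrative IP
echo "Welcome to JohnCompanies..." > welcome-freebsd  # stand-in for the real file
printf 'IP: %s\n\n' "$IP" | cat - welcome-freebsd > welcome-to-send
head -1 welcome-to-send   # -> IP: 69.55.227.44
```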
&lt;br /&gt;
If the welcome email does not exist, jailmake will still work, in the sense that it will create the system, but no welcome email will be sent.&lt;br /&gt;
&lt;br /&gt;
= Dedicated Server Setup =&lt;br /&gt;
&lt;br /&gt;
Before starting the OS install you&#039;ll need to know the following pieces of info:&lt;br /&gt;
&lt;br /&gt;
*CID:&lt;br /&gt;
*Customer has multiple colos: Y/N&lt;br /&gt;
*Asset tag: JC-xxxx&lt;br /&gt;
*rack/location: &lt;br /&gt;
*Service/Package (and any deviations to B/W, nfs space, IPs, etc) and price:&lt;br /&gt;
*RAM in system:&lt;br /&gt;
*IPs included in plan:&lt;br /&gt;
*OS (32 vs 64bit variant):&lt;br /&gt;
*Hostname:&lt;br /&gt;
*Disk partitions, including swap space:&lt;br /&gt;
*Number of initial IPs to assign:&lt;br /&gt;
*Timezone:&lt;br /&gt;
&lt;br /&gt;
Some to most of that info will be provided via the new signup page, assuming the customer ordered the server via our order page. The rest comes from the sales/build department.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the server is built, it should be installed in the rack, booted to the BIOS screen and labeled with the asset tag and the customer&#039;s CID (if available/established).&lt;br /&gt;
&lt;br /&gt;
If an [[IPKVM]] is not already attached, you will need to ascertain which one is available (look in ~user/kvm, usually open for editing in the p4 screen of the mailbox window).&lt;br /&gt;
&lt;br /&gt;
Before loading the server, there are a couple of things to do in the BIOS screen:&lt;br /&gt;
# make sure the date and time are set to UTC&lt;br /&gt;
# make sure the server&#039;s power restore action is set to: last state (basically, if the server has power pulled, when power is restored it should turn back on, assuming it was on when power was pulled; this is what allows our ATS power cycling to work). If this is not set, when the ATS port is turned off and turned back on, the server may not turn back on.&lt;br /&gt;
# set the boot order to disable network booting or other things that may slow down bootup. &lt;br /&gt;
&lt;br /&gt;
After saving and exiting the BIOS, if the server has a raid card installed, you will see the raid BIOS screen. You should enter this screen and set up a raid mirror, or whatever the customer has requested (if a special request was made, sales will let you know). Usually we use a 3ware card; to enter the config screen, press ALT-3. Use spacebar to select the 2 drives, tab to &#039;create unit&#039;. Use defaults. Create a raid1 mirror (unless the customer asks for something else). Do not enable write cache, assuming no battery exists. F8 to save and exit.&lt;br /&gt;
&lt;br /&gt;
Reboot and load the OS- follow the instructions for pulling the ISO into the IPKVM and booting to it [[IPKVM|here]].&lt;br /&gt;
&lt;br /&gt;
The install should be for the server version of whatever OS was requested. We typically do not install anything other than an sshd (and ports (tree) for FreeBSD). We don&#039;t install a GUI environment. We do not setup auto updates. We do not encrypt home directories. &lt;br /&gt;
&lt;br /&gt;
When selecting an IP, take the data center into consideration: an install at i2b should use an IP from the IPs routed to i2b, and likewise for castle. A customer&#039;s package may come with multiple IPs; however, unless they indicate they want more assigned initially (and show good reason for doing so) we only assign 1 IP. If they are to receive more IPs, you may need to configure those post-install (depending on the OS). Ideally all IPs they receive are on the same class C block.&lt;br /&gt;
&lt;br /&gt;
When picking a NIC (most of the installs we do are network versions and require the network to download OS components), it&#039;s somewhat of a crap shoot as to which NIC to choose and configure. Usually we try to begin pinging the IP we assign and watch for it to respond immediately after configuring the NIC. If it does not ping, you can:&lt;br /&gt;
# ask the NOC to swap the network cable to the alternate port&lt;br /&gt;
# go back and configure the alternate NIC (you may have to restart the OS install if it doesn&#039;t properly unconfigure the initial NIC, or you can&#039;t change its config to alleviate an IP conflict)&lt;br /&gt;
&lt;br /&gt;
We set DNS: 69.55.229.3, 69.55.225.225 (for a server @ i2b), 69.55.225.225, 69.55.230.3 (for a server @ castle)&lt;br /&gt;
&lt;br /&gt;
We set up a regular user account: &#039;user&#039; and we set that password to &#039;newrootNNNN&#039; where NNNN is the significant digits of the customer&#039;s CID, i.e. for col01233 the password is &#039;newroot1233&#039;. In FreeBSD, make sure to add &#039;user&#039; to the wheel group (member of). &lt;br /&gt;
If a root password is allowed to be set, we use the same password: &#039;newrootNNNN&#039; &lt;br /&gt;
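&lt;br /&gt;
Deriving the NNNN digits from the CID can be done in the shell; a sketch (the CID value is illustrative):&lt;br /&gt;

```shell
# derive the initial password from the customer's CID, per the newrootNNNN scheme
CID="col01233"
PASS="newroot${CID#col0}"   # ${CID#col0} strips the 'col0' prefix, leaving 1233
echo "$PASS"                # -> newroot1233
```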
&lt;br /&gt;
Once the OS is installed and you&#039;ve rebooted (after disconnecting the ISO from the KVM) you should confirm all is working: swap, ram, disk partitions, network is as it should be. If you were only able to assign 1 DNS server in setup, please add another to /etc/resolv.conf:&lt;br /&gt;
&lt;br /&gt;
@i2b: 69.55.229.3, 69.55.225.225&lt;br /&gt;
@castle: 69.55.225.225, 69.55.230.3&lt;br /&gt;
&lt;br /&gt;
Assuming they have a 3ware raid card, you will need to transfer in the raid CLI tool and our handy script so the customer may check on the health of their raid array, as instructed/encouraged by the welcome email they will receive.&lt;br /&gt;
&lt;br /&gt;
This package is located on mail:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
fetch http://johncompanies.com/tools/3ware/L64.tgz&lt;br /&gt;
tar xvzf L64.tgz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The version you choose will of course depend on the OS/version you&#039;re installing. You should copy the tarball to the /usr/local/sbin dir (or somewhere in their path) and untar it there. Then run the script included in the tarball called checkraid.sh. If it gives an error, this is likely because on some servers the raid card is recognized as &#039;c1&#039; and on others &#039;c0&#039; (or something different). Determine which it is for this server by running:&lt;br /&gt;
 tw_cli info&lt;br /&gt;
&lt;br /&gt;
which will give you output like:&lt;br /&gt;
&amp;lt;pre&amp;gt;Ctl   Model        (V)Ports  Drives   Units   NotOpt  RRate   VRate  BBU&lt;br /&gt;
------------------------------------------------------------------------&lt;br /&gt;
c1    9650SE-8LPML 8         6        1       0       5       1      -&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example it&#039;s &#039;c1&#039;, so edit checkraid.sh and change all instances of &#039;c0&#039; to &#039;c1&#039;.&lt;br /&gt;
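&lt;br /&gt;
That substitution can be scripted rather than done by hand in vi; a sketch, assuming tw_cli output in the format shown above (parsed here from a captured file, since tw_cli only exists on the server itself, and the checkraid.sh contents are a stand-in):&lt;br /&gt;

```shell
# detect the controller id from captured tw_cli output and patch checkraid.sh
printf '%s\n' \
  'Ctl   Model        (V)Ports  Drives   Units   NotOpt  RRate   VRate  BBU' \
  '------------------------------------------------------------------------' \
  'c1    9650SE-8LPML 8         6        1       0       5       1      -' \
  > tw_cli.out
CTL=$(awk '/^c[0-9]/ {print $1; exit}' tw_cli.out)   # picks out "c1"
printf 'tw_cli /c0 show\n' > checkraid.sh            # stand-in for the real script
sed -i "s/c0/${CTL}/g" checkraid.sh                  # every c0 becomes c1
```

On the real server you would pipe `tw_cli info` straight into awk instead of using a captured file.&lt;br /&gt;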
&lt;br /&gt;
If we&#039;re adding additional IPs (and were not able to do so during initial install), configure as follows:&lt;br /&gt;
&lt;br /&gt;
Centos/Fedora:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 cd /etc/sysconfig/network-scripts/&lt;br /&gt;
 cp -p ifcfg-eth* ~&lt;br /&gt;
 fetch http://johncompanies.com/tools/network/centos/ifcfg-eth.tgz&lt;br /&gt;
 tar xvzf ifcfg-eth.tgz&lt;br /&gt;
&lt;br /&gt;
 vi ifcfg-eth0:&lt;br /&gt;
&lt;br /&gt;
DEVICE=eth0&lt;br /&gt;
TYPE=Ethernet&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
NM_CONTROLLED=yes&lt;br /&gt;
BOOTPROTO=none&lt;br /&gt;
IPADDR=69.55.231.XXX  # change this to main IP address&lt;br /&gt;
PREFIX=24&lt;br /&gt;
GATEWAY=69.55.231.1   # change this to 229.1 if IP addr in other block&lt;br /&gt;
DNS1=69.55.229.3&lt;br /&gt;
DNS2=69.55.225.225&lt;br /&gt;
DNS3=69.55.230.3&lt;br /&gt;
DOMAIN=&amp;quot;johncompanies.com&amp;quot;&lt;br /&gt;
DEFROUTE=yes&lt;br /&gt;
IPV4_FAILURE_FATAL=yes&lt;br /&gt;
IPV6INIT=no&lt;br /&gt;
NAME=&amp;quot;System eth0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  vi ifcfg-eth0:*&lt;br /&gt;
&lt;br /&gt;
DEVICE=eth0:0&lt;br /&gt;
TYPE=Ethernet&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
NM_CONTROLLED=yes&lt;br /&gt;
BOOTPROTO=none&lt;br /&gt;
IPADDR=69.55.231.XXX  # change this to IP address&lt;br /&gt;
PREFIX=24&lt;br /&gt;
IPV4_FAILURE_FATAL=yes&lt;br /&gt;
IPV6INIT=no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Repeat above as needed for each IP address&lt;br /&gt;
&lt;br /&gt;
Remove the un-needed files.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rm ifcfg-eth0:[3-8]  # change numbers to fit how many IP addresses are needed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
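&lt;br /&gt;
Stamping out the alias files can be looped rather than copied by hand; a hypothetical sketch (the IP list is illustrative, and on a real install the files go in /etc/sysconfig/network-scripts/):&lt;br /&gt;

```shell
# write ifcfg-eth0:0, ifcfg-eth0:1, ... one file per additional IP
IPS="69.55.231.50 69.55.231.51"
i=0
for ip in $IPS; do
  printf '%s\n' \
    "DEVICE=eth0:$i" \
    "TYPE=Ethernet" \
    "ONBOOT=yes" \
    "NM_CONTROLLED=yes" \
    "BOOTPROTO=none" \
    "IPADDR=$ip" \
    "PREFIX=24" \
    "IPV4_FAILURE_FATAL=yes" \
    "IPV6INIT=no" \
    > "ifcfg-eth0:$i"
  i=$((i+1))
done
```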
&lt;br /&gt;
Debian/Ubuntu:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /etc/network&lt;br /&gt;
vi interfaces&lt;br /&gt;
&lt;br /&gt;
Old file was:&lt;br /&gt;
# The loopback network interface&lt;br /&gt;
auto lo&lt;br /&gt;
iface lo inet loopback&lt;br /&gt;
&lt;br /&gt;
# The primary network interface&lt;br /&gt;
auto eth0&lt;br /&gt;
iface eth0 inet static&lt;br /&gt;
        address 69.55.227.4&lt;br /&gt;
        netmask 255.255.255.0&lt;br /&gt;
        network 69.55.227.0&lt;br /&gt;
        broadcast 69.55.227.255&lt;br /&gt;
        gateway 69.55.227.1&lt;br /&gt;
        # dns-* options are implemented by the resolvconf package, if installed&lt;br /&gt;
        dns-nameservers 69.55.225.225&lt;br /&gt;
        dns-search yeppernet.com&lt;br /&gt;
&lt;br /&gt;
new file is:&lt;br /&gt;
# The loopback network interface&lt;br /&gt;
auto lo&lt;br /&gt;
iface lo inet loopback&lt;br /&gt;
&lt;br /&gt;
# The primary network interface&lt;br /&gt;
auto eth0&lt;br /&gt;
iface eth0 inet static&lt;br /&gt;
        address 69.55.227.44&lt;br /&gt;
        netmask 255.255.255.0&lt;br /&gt;
        network 69.55.227.0&lt;br /&gt;
        broadcast 69.55.227.255&lt;br /&gt;
        gateway 69.55.227.1&lt;br /&gt;
        # dns-* options are implemented by the resolvconf package, if installed&lt;br /&gt;
        dns-nameservers 69.55.225.225&lt;br /&gt;
        dns-search yeppernet.com&lt;br /&gt;
&lt;br /&gt;
auto eth0:0&lt;br /&gt;
iface eth0:0 inet static&lt;br /&gt;
        address 69.55.227.55&lt;br /&gt;
        netmask 255.255.255.0&lt;br /&gt;
        network 69.55.227.0&lt;br /&gt;
        broadcast 69.55.227.255&lt;br /&gt;
        gateway 69.55.227.1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
  /etc/init.d/networking restart&lt;br /&gt;
(make sure you have console in case you screw that up)&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;pre&amp;gt;ifconfig&lt;br /&gt;
eth0      Link encap:Ethernet  HWaddr 00:30:48:28:9d:50&lt;br /&gt;
          inet addr:69.55.227.44  Bcast:69.55.227.255  Mask:255.255.255.0&lt;br /&gt;
          inet6 addr: fe80::230:48ff:fe28:9d50/64 Scope:Link&lt;br /&gt;
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1&lt;br /&gt;
          RX packets:2850149 errors:0 dropped:0 overruns:0 frame:0&lt;br /&gt;
          TX packets:2595566 errors:0 dropped:0 overruns:0 carrier:0&lt;br /&gt;
          collisions:0 txqueuelen:100&lt;br /&gt;
          RX bytes:1193068127 (1.1 GB)  TX bytes:2046984105 (1.9 GB)&lt;br /&gt;
          Base address:0x3000 Memory:fc400000-fc420000&lt;br /&gt;
&lt;br /&gt;
eth0:0    Link encap:Ethernet  HWaddr 00:30:48:28:9d:50&lt;br /&gt;
          inet addr:69.55.227.55  Bcast:69.55.227.255  Mask:255.255.255.0&lt;br /&gt;
          inet6 addr: fe80::230:48ff:fe28:9d50/64 Scope:Link&lt;br /&gt;
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1&lt;br /&gt;
          RX packets:2850149 errors:0 dropped:0 overruns:0 frame:0&lt;br /&gt;
          TX packets:2595566 errors:0 dropped:0 overruns:0 carrier:0&lt;br /&gt;
          collisions:0 txqueuelen:100&lt;br /&gt;
          RX bytes:1193068127 (1.1 GB)  TX bytes:2046984105 (1.9 GB)&lt;br /&gt;
          Base address:0x3000 Memory:fc400000-fc420000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ubuntu 18.04&lt;br /&gt;
&lt;br /&gt;
Ubuntu 18.04 uses netplan instead of /etc/network/interfaces.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vi /etc/netplan/50-cloud-init.yaml&lt;br /&gt;
&lt;br /&gt;
# This file is generated from information provided by&lt;br /&gt;
# the datasource.  Changes to it will not persist across an instance.&lt;br /&gt;
# To disable cloud-init&#039;s network configuration capabilities, write a file&lt;br /&gt;
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:&lt;br /&gt;
# network: {config: disabled}&lt;br /&gt;
network:&lt;br /&gt;
  version: 2&lt;br /&gt;
  renderer: networkd&lt;br /&gt;
  ethernets:&lt;br /&gt;
    enp0s25:&lt;br /&gt;
     dhcp4: no&lt;br /&gt;
     addresses: [69.55.229.26/24]&lt;br /&gt;
     gateway4: 69.55.229.1  &lt;br /&gt;
     nameservers:&lt;br /&gt;
       addresses: [69.55.229.3,8.8.8.8,8.8.4.4]&lt;br /&gt;
&lt;br /&gt;
netplan apply&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The very last thing you should do before logging off the server is to run:&lt;br /&gt;
 history -c&lt;br /&gt;
in every shell you were in so they don&#039;t see what we were doing :)&lt;br /&gt;
&lt;br /&gt;
Assuming this server is connected to an ATS, you&#039;ll want to confirm the server is connected to the port you think it is, and test the functionality: that you can power cycle the port and the server will come back up when power is restored. You should do a test power cycle (via the mgmt or AM-based ATS control) while the server is in POST or the BIOS. Basically, just NOT while the OS is running and disks are mounted.&lt;br /&gt;
&lt;br /&gt;
You should confirm the switch ports to which the server is connected (it may have a 2nd port for the RMM). While the server is rebooting during your ATS test, you can observe output on the switch console that looks like:&lt;br /&gt;
&amp;lt;pre&amp;gt;.Mar  7 09:47:45 PST: %LINK-3-UPDOWN: Interface FastEthernet0/14, changed state to down&lt;br /&gt;
.Mar  7 09:47:46 PST: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/14, changed state to down&lt;br /&gt;
.Mar  7 09:47:47 PST: %LINK-3-UPDOWN: Interface FastEthernet0/14, changed state to up&lt;br /&gt;
.Mar  7 09:47:48 PST: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/14, changed state to up&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This confirms the server is on port 14.&lt;br /&gt;
&lt;br /&gt;
Take this info (ATS, switch ports plus the asset tag) and:&lt;br /&gt;
# activate (or add) the server. Update the cabinet, ATS and port, asset tag. (Switch port not kept in mgmt)&lt;br /&gt;
# update the [[cabinetmap|cabinet map]] to add the server to the correct location, along with the ATS port, and switch port(s)&lt;br /&gt;
# login to the switch serial console and add the customer&#039;s CID (and hostname if they have mult. servers) to the port(s) description tag. Repeat for the RMM port if they have one:&lt;br /&gt;
&amp;lt;pre&amp;gt;switch-p25#conf t&lt;br /&gt;
Enter configuration commands, one per line.  End with CNTL/Z.&lt;br /&gt;
switch-p25(config)#int fa0/10&lt;br /&gt;
switch-p25(config-if)#des col01233 myhostname&lt;br /&gt;
switch-p25(config)#int fa0/11&lt;br /&gt;
switch-p25(config-if)#des col01233 myhostname (RMM)&lt;br /&gt;
switch-p25(config-if)#end&lt;br /&gt;
switch-p25#&lt;br /&gt;
.Mar  7 11:37:17 PST: %SYS-5-CONFIG_I: Configured from console by console&lt;br /&gt;
switch-p25#wr me&lt;br /&gt;
Building configuration...&lt;br /&gt;
[OK]&lt;br /&gt;
switch-p25#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point it&#039;s safe to hand the server over to the customer. Use the &#039;new colo welcome&#039; paste. Edit for:&lt;br /&gt;
# IP(s)&lt;br /&gt;
# if it&#039;s FreeBSD use the &#039;root&#039; &amp;amp; &#039;user&#039; password line; if it&#039;s Ubuntu use/edit the &#039;root&#039; password line, depending on whether you set the root password, and on what you set up for the normal user account&lt;br /&gt;
# the correct DNS depending on which data center they&#039;re at (remove ns3c if the server is at castle)&lt;br /&gt;
# if they have no raid card, remove that section&lt;br /&gt;
# if they have an RMM, include the RMM paste somewhere in the email&lt;br /&gt;
&lt;br /&gt;
If they&#039;re already a customer, you can exclude the info about their AM login info (since they have it already). If they&#039;re not, you&#039;ll need to reset their AM pass to see it.&lt;br /&gt;
&lt;br /&gt;
Lastly, assuming this server was installed for a new customer and the customer was not yet activated, the CID was probably not established and the server has no CID label. You&#039;ll need to make a note to label it (and add the hostname too if they have multiple servers) next time you&#039;re at the data center, or ask the noc staff to do it for you.&lt;br /&gt;
&lt;br /&gt;
== Dedicated Server Install Checklist ==&lt;br /&gt;
&lt;br /&gt;
o Install OS&lt;br /&gt;
&lt;br /&gt;
o Add JCI - adduser, wheel, visudo&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  adduser -u 955 jci&lt;br /&gt;
  passwd jci&lt;br /&gt;
    &#039;&#039;l4mph0st&#039;&#039;&lt;br /&gt;
  adduser -u 1000 col0XXXX&lt;br /&gt;
  passwd col0XXXX           &lt;br /&gt;
    &#039;&#039;col0XXXX&#039;&#039;          (XXXX is their col0XXXX number)&lt;br /&gt;
  vi /etc/group&lt;br /&gt;
     ( add &amp;quot;jci,user&amp;quot; to group wheel and/or sudo )&lt;br /&gt;
  visudo&lt;br /&gt;
     ( enable group wheel to do anything with password )&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
o Install openssh-clients (yum install openssh-clients)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  yum -y install openssh-clients&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
o Update OS&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  yum -y update   (for CentOS)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
o checkraid.sh&lt;br /&gt;
  As noted above, install our RAID disk check script&lt;br /&gt;
&lt;br /&gt;
o config IP addresses&lt;br /&gt;
&lt;br /&gt;
o clear last logins&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  cp /dev/null /var/log/wtmp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
o clear history&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  cp /dev/null ~/.bash_history&lt;br /&gt;
  history -c&lt;br /&gt;
  shutdown -h now&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
o For a suspected spammer, add rules to firewall2 to limit output on the mail ports, e.g.&lt;br /&gt;
before rule 50:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ipfw add 37 pipe 8 ip from 69.55.231.104/30 to any dst-port 25,465,587&lt;br /&gt;
ipfw add 37 pipe 8 ip from 69.55.231.108    to any dst-port 25,465,587&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== After installed in rack ===&lt;br /&gt;
&lt;br /&gt;
  check ssh&lt;br /&gt;
  label switch port&lt;br /&gt;
  update cabinet map&lt;br /&gt;
  update mgmt cabinet number&lt;br /&gt;
&lt;br /&gt;
== RMM Setup ==&lt;br /&gt;
&lt;br /&gt;
The RMM must be configured via the server&#039;s BIOS screen:&lt;br /&gt;
&lt;br /&gt;
Server management &amp;gt; BMC LAN config:&amp;lt;BR&amp;gt;&lt;br /&gt;
down to Intel RMM3 LAN config: &amp;lt;BR&amp;gt;&lt;br /&gt;
IP (static) , mask, g/w&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
user config: &amp;lt;BR&amp;gt;&lt;br /&gt;
disable anonymous. &amp;lt;BR&amp;gt;&lt;br /&gt;
root: set pass to newrootNNNN&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=New_Signups&amp;diff=2219</id>
		<title>New Signups</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=New_Signups&amp;diff=2219"/>
		<updated>2020-03-29T18:55:44Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Dedicated Server Setup */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
TODO- review&lt;br /&gt;
&lt;br /&gt;
= Open source discount signups =&lt;br /&gt;
&lt;br /&gt;
URLS to give to customers:&lt;br /&gt;
https://secure.johncompanies.com/signup/step1.html?svc=bsd-oss&lt;br /&gt;
&lt;br /&gt;
https://secure.johncompanies.com/signup/step1.html?svc=linux-oss&lt;br /&gt;
&lt;br /&gt;
= New Signups = &lt;br /&gt;
New customers sign up for service on our web based forms at&lt;br /&gt;
www.johncompanies.com &lt;br /&gt;
&lt;br /&gt;
When a signup occurs an email is sent to support@johncompanies.com &lt;br /&gt;
and signups get written to two files:&lt;br /&gt;
&lt;br /&gt;
/usr/local/www/jc_pub/data/pending&lt;br /&gt;
&lt;br /&gt;
and&lt;br /&gt;
&lt;br /&gt;
/usr/local/www/jc_pub/data/log/log.pending&lt;br /&gt;
&lt;br /&gt;
The second file is simply a running log of signups, and should not be used for anything.  The only time it is handy is if you are editing the pending file, then save it, and are told that the file has changed since you started editing it.  This means that someone signed up while you were editing the pending file.  What you should do is force the save (thus losing their signup) and then cut and paste the lost signup from /usr/local/www/jc_pub/data/log/log.pending.&lt;br /&gt;
&lt;br /&gt;
Lines in the pending file look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;2005-11-08;newcastle;Frederick Wilson;;1405 South Adams;Fort Worth;Texas;76104;US;referred by nmrc.org;LM-1;2;Limit 10;1;207.13.31.48;106&lt;br /&gt;
vaifan@airmail.net;817-798-8637;on;on;&lt;br /&gt;
new;add2;;;replace;replace;preserve;elaine.commadev.com&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The fields are defined as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;signup date;hostname;last, first;company; address;city;state;zip;country;referred by;package id;bandwidth overage option;if gobut is selected, $ amount to limit is listed here;payment method;source ip;template id;&lt;br /&gt;
email;phone #;admin;billing;alt&lt;br /&gt;
new/replace system;create new account/add to existing;hostname of server being replaced;new IP pref;update/merge contacts;update/merge owner info;update/preserve traffic overage;&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
When a customer signs up, an email will be sent to support@johncompanies.com containing the log entry in the format above.&lt;br /&gt;
&lt;br /&gt;
The process for setting up a new customer is as follows:&lt;br /&gt;
&lt;br /&gt;
1. Go to the pending customers screen (choose Mgmt site -&amp;gt; New Signups). New customers will be at the bottom of the &amp;quot;VPS Signups&amp;quot; and &amp;quot;Colo Signups&amp;quot; sections. If they’ve paid automatically (via PayPal) the word “PAID” will appear in the “pmt status” column. The other thing you’ll see in this column is “paid- not cleared”- this means they paid with eCheck (via PayPal) and the check has not cleared. This normally takes 3 biz. days and we wait till it clears and shows “PAID” before setting up the server. Customers who pay with a credit card will not appear as &amp;quot;PAID&amp;quot; here. In that case, an email will be sent to support@johncompanies.com indicating a payment has been made (or failed). Once the payment has been paid/cleared, click “process….” &lt;br /&gt;
&lt;br /&gt;
Before setting someone up, you need to decide whether the signup is fraudulent. Examples of fraud can be found in  /usr/local/www/jc_pub/data/fraud-examples &lt;br /&gt;
Generally, &lt;br /&gt;
* NOT FRAUD = non-anonymous email accounts that match who they are (dan@brockman.com, where the person is Dan  Brockman)&lt;br /&gt;
* NOT FRAUD = referred by a customer that actually exists or by kuro5hin&lt;br /&gt;
* MAYBE FRAUD = referred by ‘web search’ or ‘google’&lt;br /&gt;
* MAYBE FRAUD = generic sounding address&lt;br /&gt;
* MAYBE FRAUD = IP address is out of country and address is in US (use dnsstuff link on main signups screen to lookup where IP is)&lt;br /&gt;
* FRAUD = they don’t fill in “referred by” and/or hostname, and/or they leave the bandwidth answer at the default (stop)&lt;br /&gt;
* FRAUD = multiple orders each from the same IP and/or using same email address&lt;br /&gt;
* FRAUD = no hostname or hacker-sounding hostname: 3v1l0n3 (evilone)&lt;br /&gt;
&lt;br /&gt;
If you’re ever in doubt, call the card owner using the phone number given at signup or call the bank number provided for the card. Never use the email as it’s likely the thief’s and not the cardholder’s.&lt;br /&gt;
&lt;br /&gt;
Almost all the info needed is pre-entered into this screen, but some fields will need attention:&lt;br /&gt;
&lt;br /&gt;
* System: The system is already selected based on the template/OS the customer selected. &lt;br /&gt;
* Directory: information provided for you to enter once the system is created. (leave blank for managedcolo)&lt;br /&gt;
* Disk: indicates how much disk space the server should have. (leave blank for managedcolo)&lt;br /&gt;
* Hostname: self-explanatory. &lt;br /&gt;
* veid: (linux systems only) should be filled in with the significant digits of the customer ID, ex: col01340 = veid 1340. &lt;br /&gt;
* os: indicates which OS the customer wants (fed to the linux vm script) &lt;br /&gt;
* ip(s): for linux customers, you should click “ipmap” and scroll down to the first available (green) ip amongst others on the same system. Click on the ip to copy it back to the “ip(s)” field on the form. For FreeBSD customers, the ips available for use are already assigned to the system and should be copied back to this screen once the jail is made (see below). For colo customers, choose an IP appropriate to the data center. i.e. for i2b pick an IP from the 229 block.&lt;br /&gt;
* start date: should reflect the day the system was created. &lt;br /&gt;
* asset tag: colo only&lt;br /&gt;
* password: VPS only&lt;br /&gt;
* cabinet: colo only&lt;br /&gt;
* ats port: colo @ i2b only&lt;br /&gt;
* monitored: should only be checked if the system’s ip/services were supplied to castle to place on the monitor (probes) list. &lt;br /&gt;
&lt;br /&gt;
NOTE: you must use a JavaScript-enabled browser to enter new customers, otherwise you won’t be given the correct options for os templates.&lt;br /&gt;
&lt;br /&gt;
3. create the new system using either [[VPS_Management#jailmake|jailmake]] or [[VPS_Management#vm|vm]] scripts. For posterity, on our older linux systems and older OS&#039;s (pre virt17) we used to use a custom script for each OS version: vemakecentos3  vemakedebian40 vemakedebian30 vemakedebian31 vemakefedora2   vemakefedora6  vemakefedora7  vemakesuse100   vemakesuse93   vemakecentos4   vemakedebian31  vemakefedora4   vemakerh9       vemakeubuntu5   vemakefedora    vemakefedora5   vemakesuse      vemakeubuntu606  vemakeubuntu610 vemakeubuntu704&lt;br /&gt;
&lt;br /&gt;
[[VPS_Management#jailmake|jailmake]] and [[VPS_Management#vm|vm]] both email the new customer their welcome email.&lt;br /&gt;
&lt;br /&gt;
When you are done adding a customer, for both systems: copy back the dir and password (supplied by the VPS make script) into the form. For FreeBSD, copy back the IP which the make script will give back to you.&lt;br /&gt;
&lt;br /&gt;
When you are done filling out all the fields in the pending customer form, click “Activate”. This will create the customer in our database and remove them from the pending list. Nothing is emailed to the customer as a result of this action. If the customer paid via credit card, their info needs to be added manually.&lt;br /&gt;
&lt;br /&gt;
Discussion about choosing IPs ...&lt;br /&gt;
&lt;br /&gt;
When you make a new system, you have to choose what IP to give it.  New FreeBSD servers are configured with a set number of IPs, and as you add new systems to that freebsd server you can use any IP assigned to the host but not currently assigned to a VPS.  You can use the js program to see what IPs are available for assigning to new customers. &lt;br /&gt;
&lt;br /&gt;
However, it is not that simple with the linux systems. The linux servers do not get the IPs of their customer systems bound to the actual machine. That is, even on a fully loaded linux system, if you run `ifconfig -a` from the base machine, you only see one IP - the main IP of that system. Further complicating matters is that linux systems can bind multiple IPs - therefore it is not possible to know that the next new linux system should just have the next IP as the last new one that was created.&lt;br /&gt;
&lt;br /&gt;
So, what we do is, new linux systems are simply assigned a starting IP, and no new machine (freebsd or linux) is assigned a base IP anywhere within 92-128 IPs of that IP.  So you have 92-128 IPs to grow with for the linux systems that will live on that machine.&lt;br /&gt;
&lt;br /&gt;
So, you simply start creating systems from that starting point – additional IPs that those systems need should also come from that range, and other linux systems on other virt machines should get new IPs from that same block. To find the block of IPs in use, go through bash history to find the last IP used or assigned to a ve. Then, click the “ipmap” link (or pull up Mgmt. -&amp;gt; Reference -&amp;gt; IP Map).  IPs not already assigned to a machine will show up as green. In general, we want to use IPs which have never been assigned, or were relinquished years ago (or at least 30 days ago – IMPORTANT!!). &lt;br /&gt;
&lt;br /&gt;
If a customer has selected an IP package – i.e. is paying for extra IPs now – then we’re pretty much obliged to assign all those IPs now. Otherwise, we only ever assign 1 IP.&lt;br /&gt;
&lt;br /&gt;
= The Welcome Emails =&lt;br /&gt;
&lt;br /&gt;
[[VPS_Management#jailmake|jailmake]] and [[VPS_Management#vm|vm]] each take an email address as one of their arguments.  This address is used to send a welcome email to the new user.  However, the welcome email text is not inside the jailmake/vemake scripts, and it is not on the jails themselves either.  Here are all the various welcome emails we have:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/local/www/jc_pub/data/welcome-freebsdp (dynamic freebsd welcome email, 7.x 8.x)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-linux (dynamic linux welcome email)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-debian (dynamic debian/ubuntu welcome email)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-fedora (dynamic fedora/centos welcome email)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-freebsd (original, generic freebsd welcome email)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-freebsd6 (for freebsd6)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-freebsd7z (for freebsd7 with zfs DEPRECATED)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
on the main johncompanies server.&lt;br /&gt;
&lt;br /&gt;
When the jailmake script is run, it issues the `fetch` (or wget) command to retrieve it, i.e.&lt;br /&gt;
http://www.johncompanies.com/colocation/data/welcome-freebsdp&lt;br /&gt;
&lt;br /&gt;
and saves it as a temp file, mails it off.&lt;br /&gt;
&lt;br /&gt;
Welcome emails not noted as dynamic above are emailed to the specified address with the IP added at the top. The password for these accounts was/is a generic one – not very good. Welcome emails noted as dynamic are parsed by the make script to include the IP and a generated, random root password. &lt;br /&gt;
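A minimal sketch of that substitution step (the placeholder markers, variable names, and sample IP here are assumptions for illustration, not the make scripts&#039; actual ones):&lt;br /&gt;

```shell
# Generate a random 12-character alphanumeric root password.
PASS=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 12)

# Hypothetical template with placeholder tokens; the real dynamic
# welcome emails may use different markers.
TEMPLATE='Your new system is ready.
IP: __IP__
root password: __PASS__'

# Substitute the assigned IP and the generated password.
OUT=$(printf '%s\n' "$TEMPLATE" | sed -e "s/__IP__/69.55.227.44/" -e "s/__PASS__/$PASS/")
printf '%s\n' "$OUT"
```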
&lt;br /&gt;
IMPORTANT NOTE: when creating an older-OS ve on a virt (i.e. one created using the old-style vemakexxxx command – this almost never happens), you must use the support address as the email address, because the make script will pull down the dynamic welcome email when it expects the old, generic format, and the result won’t look good. When you receive the welcome email in support, format it properly and resend it to the customer.&lt;br /&gt;
&lt;br /&gt;
This means three things:&lt;br /&gt;
&lt;br /&gt;
a) In order for jailmake to work, the johncompanies web server needs to be up and running, and that file needs to exist at that URL&lt;br /&gt;
&lt;br /&gt;
b) If you want to edit the welcome email, you only need to edit it on the web server, in one place&lt;br /&gt;
&lt;br /&gt;
c) If a customer for some reason does not receive the welcome email, then you need to go to Mgmt -&amp;gt; Reference -&amp;gt; (file), and copy and paste it into an email to them. For the old generic emails, make sure to add this line to the very top:&lt;br /&gt;
&lt;br /&gt;
IP: (their IP)&lt;br /&gt;
&lt;br /&gt;
If the welcome email does not exist, jailmake will still work, in the sense that it will create the system, but no welcome email will be sent.&lt;br /&gt;
&lt;br /&gt;
= Dedicated Server Setup =&lt;br /&gt;
&lt;br /&gt;
Before starting the OS install you&#039;ll need to know the following pieces of info:&lt;br /&gt;
&lt;br /&gt;
*CID:&lt;br /&gt;
*Customer has multiple colos: Y/N&lt;br /&gt;
*Asset tag: JC-xxxx&lt;br /&gt;
*rack/location: &lt;br /&gt;
*Service/Package (and any deviations to B/W, nfs space, IPs, etc) and price:&lt;br /&gt;
*RAM in system:&lt;br /&gt;
*IPs included in plan:&lt;br /&gt;
*OS (32 vs 64bit variant):&lt;br /&gt;
*Hostname:&lt;br /&gt;
*Disk partitions, including swap space:&lt;br /&gt;
*Number of initial IPs to assign:&lt;br /&gt;
*Timezone:&lt;br /&gt;
&lt;br /&gt;
Some to most of that info will be provided via the new signup page, assuming the customer ordered the server via our order page. The rest will come from the sales/build department.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the server is built, it should be installed in the rack, booted to the BIOS screen and labeled with the asset tag and the customer&#039;s CID (if available/established).&lt;br /&gt;
&lt;br /&gt;
If an [[IPKVM]] is not already attached, you will need to ascertain which one is available (look in ~user/kvm, usually open for editing in the p4 screen of the mailbox window).&lt;br /&gt;
&lt;br /&gt;
Before loading the server, there are a couple of things to do in the BIOS screen:&lt;br /&gt;
# make sure the date and time are set to UTC&lt;br /&gt;
# make sure the server&#039;s power restore action is set to: last state (basically, what we&#039;re trying to do here is ensure that if the server has power pulled, it turns back on when power is restored – assuming it was on when power was pulled. This is what allows our ATS power cycling to work. If this is not set, when the ATS port is turned off and back on, the server may not turn back on.)&lt;br /&gt;
# set the boot order to disable network booting or other things that may slow down bootup. &lt;br /&gt;
&lt;br /&gt;
After saving and exiting the BIOS, if the server has a raid card installed, you will see the raid BIOS screen. You should enter this screen and set up a raid mirror, or whatever the customer has requested (if a special request was made, sales will let you know). Usually we use a 3ware card; to enter its config screen, press ALT-3. Use spacebar to select the 2 drives, then tab to &#039;create unit&#039;. Use the defaults. Create a raid1 mirror (unless the customer asks for something else). Do not enable write cache, assuming no battery exists. F8 to save and exit.&lt;br /&gt;
&lt;br /&gt;
Reboot and load the OS- follow the instructions for pulling the ISO into the IPKVM and booting to it [[IPKVM|here]].&lt;br /&gt;
&lt;br /&gt;
The install should be for the server version of whatever OS was requested. We typically do not install anything other than an sshd (and ports (tree) for FreeBSD). We don&#039;t install a GUI environment. We do not setup auto updates. We do not encrypt home directories. &lt;br /&gt;
&lt;br /&gt;
When selecting an IP, take the data center into consideration: an install at i2b should use an IP from the ranges routed to i2b, and likewise for castle. A customer&#039;s package may come with multiple IPs; however, unless they indicate they want more assigned initially (and show good reason for doing so), we only assign 1 IP. If they are to receive more IPs, you will/may need to configure those post-install (depending on the OS). Ideally all the IPs they receive are on the same class C block.&lt;br /&gt;
&lt;br /&gt;
When picking a NIC (most of the installs we do are network versions and require the network to download OS components), it&#039;s somewhat of a crap shoot as to which NIC to choose and configure. Usually we try to begin pinging the IP we assign and watch for it to respond immediately after configuring the NIC. If it does not ping, you can:&lt;br /&gt;
# ask the NOC to swap the network cable to the alternate port&lt;br /&gt;
# go back and configure the alternate NIC (you may have to restart the OS install if it doesn&#039;t properly unconfigure the initial NIC, or you can&#039;t change its config to alleviate an IP conflict)&lt;br /&gt;
&lt;br /&gt;
We set DNS: 69.55.229.3, 69.55.225.225 (for a server @ i2b), 69.55.225.225, 69.55.230.3 (for a server @ castle)&lt;br /&gt;
&lt;br /&gt;
We set up a regular user account, &#039;user&#039;, and set its password to &#039;newrootNNNN&#039; where NNNN is the significant digits of the customer&#039;s CID, i.e. for col01233 the password is &#039;newroot1233&#039;. In FreeBSD, make sure to add &#039;user&#039; to the wheel group (member of). &lt;br /&gt;
If a root password is allowed to be set, we use the same password: &#039;newrootNNNN&#039; &lt;br /&gt;
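The scheme can be sketched in shell (assuming the standard col0NNNN CID format):&lt;br /&gt;

```shell
# Derive the initial password from a customer ID of the form col0NNNN.
cid="col01233"            # example CID
digits="${cid#col0}"      # strip the 'col0' prefix -> 1233
password="newroot${digits}"
echo "$password"          # newroot1233
```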
&lt;br /&gt;
Once the OS is installed and you&#039;ve rebooted (after disconnecting the ISO from the KVM) you should confirm all is working: swap, ram, disk partitions, network is as it should be. If you were only able to assign 1 DNS server in setup, please add another to /etc/resolv.conf:&lt;br /&gt;
&lt;br /&gt;
@i2b: 69.55.229.3, 69.55.225.225&lt;br /&gt;
@castle: 69.55.225.225, 69.55.230.3&lt;br /&gt;
&lt;br /&gt;
Assuming they have a 3ware raid card, you will need to transfer in the raid CLI tool and our handy script so the customer may check on the health of their raid array, as instructed/encouraged by the welcome email they will receive.&lt;br /&gt;
&lt;br /&gt;
This package is located on mail:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
fetch http://johncompanies.com/tools/3ware/L64.tgz&lt;br /&gt;
tar xvzf L64.tgz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The version you choose will of course depend on the OS/version you&#039;re installing. Copy the tarball to the /usr/local/sbin dir (or somewhere else in their path) and untar it there. Then run the script included in the tarball, checkraid.sh. If it gives an error, this is usually because on some servers the raid card is recognized as &#039;c1&#039; and on others &#039;c0&#039; or something different. Determine which it is for this server by running:&lt;br /&gt;
 tw_cli info&lt;br /&gt;
&lt;br /&gt;
which will give you output like:&lt;br /&gt;
&amp;lt;pre&amp;gt;Ctl   Model        (V)Ports  Drives   Units   NotOpt  RRate   VRate  BBU&lt;br /&gt;
------------------------------------------------------------------------&lt;br /&gt;
c1    9650SE-8LPML 8         6        1       0       5       1      -&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example it&#039;s &#039;c1&#039;, so edit checkraid.sh and change all instances of &#039;c0&#039; to &#039;c1&#039;.&lt;br /&gt;
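That edit can be scripted. A sketch that parses the controller ID out of the sample output above (on a live server you would pipe tw_cli info directly; the sed line is shown commented out since it rewrites checkraid.sh in place):&lt;br /&gt;

```shell
# Parse the controller ID (c0, c1, ...) out of `tw_cli info` output.
# Sample output copied from above; on a real server, replace the
# variable with:  tw_cli info
tw_output='Ctl   Model        (V)Ports  Drives   Units   NotOpt  RRate   VRate  BBU
------------------------------------------------------------------------
c1    9650SE-8LPML 8         6        1       0       5       1      -'

# The controller line is the one starting with cN; take its first field.
ctl=$(printf '%s\n' "$tw_output" | awk '/^c[0-9]/ {print $1; exit}')
echo "$ctl"    # c1

# Then rewrite every controller reference in checkraid.sh to match,
# assuming the script ships hard-coded for c0:
#   sed -i "s/\bc0\b/$ctl/g" checkraid.sh
```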
&lt;br /&gt;
If we&#039;re adding additional IPs (and were not able to do so during initial install), configure as follows:&lt;br /&gt;
&lt;br /&gt;
Centos/Fedora:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 cd /etc/sysconfig/network-scripts/&lt;br /&gt;
 cp -p ifcfg-eth* ~&lt;br /&gt;
 fetch http://johncompanies.com/tools/network/centos/ifcfg-eth.tgz&lt;br /&gt;
 tar xvzf ifcfg-eth.tgz&lt;br /&gt;
&lt;br /&gt;
 vi ifcfg-eth0:&lt;br /&gt;
&lt;br /&gt;
DEVICE=eth0&lt;br /&gt;
TYPE=Ethernet&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
NM_CONTROLLED=yes&lt;br /&gt;
BOOTPROTO=none&lt;br /&gt;
IPADDR=69.55.231.XXX  # change this to main IP address&lt;br /&gt;
PREFIX=24&lt;br /&gt;
GATEWAY=69.55.231.1   # change this to 229.1 if IP addr in other block&lt;br /&gt;
DNS1=69.55.229.3&lt;br /&gt;
DNS2=69.55.225.225&lt;br /&gt;
DNS3=69.55.230.3&lt;br /&gt;
DOMAIN=&amp;quot;johncompanies.com&amp;quot;&lt;br /&gt;
DEFROUTE=yes&lt;br /&gt;
IPV4_FAILURE_FATAL=yes&lt;br /&gt;
IPV6INIT=no&lt;br /&gt;
NAME=&amp;quot;System eth0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  vi ifcfg-eth0:*&lt;br /&gt;
&lt;br /&gt;
DEVICE=eth0:0&lt;br /&gt;
TYPE=Ethernet&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
NM_CONTROLLED=yes&lt;br /&gt;
BOOTPROTO=none&lt;br /&gt;
IPADDR=69.55.231.XXX  # change this to IP address&lt;br /&gt;
PREFIX=24&lt;br /&gt;
IPV4_FAILURE_FATAL=yes&lt;br /&gt;
IPV6INIT=no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Repeat above as needed for each IP address&lt;br /&gt;
&lt;br /&gt;
Remove the unneeded files.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rm ifcfg-eth0:[3-8]  # change numbers to fit how many IP addresses are needed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
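A sketch of stamping out the alias files in a loop instead of editing each one by hand (it writes to a scratch directory here, not /etc/sysconfig/network-scripts, and the two IPs are made up for illustration):&lt;br /&gt;

```shell
# Generate ifcfg-eth0:N alias files for a list of additional IPs.
# Using a temp dir for illustration; on a real server this would be
# /etc/sysconfig/network-scripts/.
dir=$(mktemp -d)
n=0
for ip in 69.55.231.105 69.55.231.106; do
    cat > "$dir/ifcfg-eth0:$n" <<EOF
DEVICE=eth0:$n
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPADDR=$ip
PREFIX=24
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
EOF
    n=$((n + 1))
done
ls "$dir"
```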
&lt;br /&gt;
Debian/Ubuntu:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /etc/network&lt;br /&gt;
vi interfaces&lt;br /&gt;
&lt;br /&gt;
Old file was:&lt;br /&gt;
# The loopback network interface&lt;br /&gt;
auto lo&lt;br /&gt;
iface lo inet loopback&lt;br /&gt;
&lt;br /&gt;
# The primary network interface&lt;br /&gt;
auto eth0&lt;br /&gt;
iface eth0 inet static&lt;br /&gt;
        address 69.55.227.4&lt;br /&gt;
        netmask 255.255.255.0&lt;br /&gt;
        network 69.55.227.0&lt;br /&gt;
        broadcast 69.55.227.255&lt;br /&gt;
        gateway 69.55.227.1&lt;br /&gt;
        # dns-* options are implemented by the resolvconf package, if installed&lt;br /&gt;
        dns-nameservers 69.55.225.225&lt;br /&gt;
        dns-search yeppernet.com&lt;br /&gt;
&lt;br /&gt;
new file is:&lt;br /&gt;
# The loopback network interface&lt;br /&gt;
auto lo&lt;br /&gt;
iface lo inet loopback&lt;br /&gt;
&lt;br /&gt;
# The primary network interface&lt;br /&gt;
auto eth0&lt;br /&gt;
iface eth0 inet static&lt;br /&gt;
        address 69.55.227.44&lt;br /&gt;
        netmask 255.255.255.0&lt;br /&gt;
        network 69.55.227.0&lt;br /&gt;
        broadcast 69.55.227.255&lt;br /&gt;
        gateway 69.55.227.1&lt;br /&gt;
        # dns-* options are implemented by the resolvconf package, if installed&lt;br /&gt;
        dns-nameservers 69.55.225.225&lt;br /&gt;
        dns-search yeppernet.com&lt;br /&gt;
&lt;br /&gt;
auto eth0:0&lt;br /&gt;
iface eth0:0 inet static&lt;br /&gt;
        address 69.55.227.55&lt;br /&gt;
        netmask 255.255.255.0&lt;br /&gt;
        network 69.55.227.0&lt;br /&gt;
        broadcast 69.55.227.255&lt;br /&gt;
        gateway 69.55.227.1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
  /etc/init.d/networking restart&lt;br /&gt;
(make sure you have console in case you screw that up)&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;pre&amp;gt;ifconfig&lt;br /&gt;
eth0      Link encap:Ethernet  HWaddr 00:30:48:28:9d:50&lt;br /&gt;
          inet addr:69.55.227.44  Bcast:69.55.227.255  Mask:255.255.255.0&lt;br /&gt;
          inet6 addr: fe80::230:48ff:fe28:9d50/64 Scope:Link&lt;br /&gt;
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1&lt;br /&gt;
          RX packets:2850149 errors:0 dropped:0 overruns:0 frame:0&lt;br /&gt;
          TX packets:2595566 errors:0 dropped:0 overruns:0 carrier:0&lt;br /&gt;
          collisions:0 txqueuelen:100&lt;br /&gt;
          RX bytes:1193068127 (1.1 GB)  TX bytes:2046984105 (1.9 GB)&lt;br /&gt;
          Base address:0x3000 Memory:fc400000-fc420000&lt;br /&gt;
&lt;br /&gt;
eth0:0    Link encap:Ethernet  HWaddr 00:30:48:28:9d:50&lt;br /&gt;
          inet addr:69.55.227.55  Bcast:69.55.227.255  Mask:255.255.255.0&lt;br /&gt;
          inet6 addr: fe80::230:48ff:fe28:9d50/64 Scope:Link&lt;br /&gt;
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1&lt;br /&gt;
          RX packets:2850149 errors:0 dropped:0 overruns:0 frame:0&lt;br /&gt;
          TX packets:2595566 errors:0 dropped:0 overruns:0 carrier:0&lt;br /&gt;
          collisions:0 txqueuelen:100&lt;br /&gt;
          RX bytes:1193068127 (1.1 GB)  TX bytes:2046984105 (1.9 GB)&lt;br /&gt;
          Base address:0x3000 Memory:fc400000-fc420000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ubuntu 18.04&lt;br /&gt;
&lt;br /&gt;
Ubuntu 18.04 uses netplan instead of /etc/network/interfaces.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vi /etc/netplan/50-cloud-init.yaml&lt;br /&gt;
&lt;br /&gt;
# This file is generated from information provided by&lt;br /&gt;
# the datasource.  Changes to it will not persist across an instance.&lt;br /&gt;
# To disable cloud-init&#039;s network configuration capabilities, write a file&lt;br /&gt;
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:&lt;br /&gt;
# network: {config: disabled}&lt;br /&gt;
network:&lt;br /&gt;
  version: 2&lt;br /&gt;
  renderer: networkd&lt;br /&gt;
  ethernets:&lt;br /&gt;
    enp0s25:&lt;br /&gt;
     dhcp4: no&lt;br /&gt;
     addresses: [69.55.229.26/24]&lt;br /&gt;
     gateway4: 69.55.229.1  &lt;br /&gt;
     nameservers:&lt;br /&gt;
       addresses: [69.55.229.3,8.8.8.8,8.8.4.4]&lt;br /&gt;
&lt;br /&gt;
netplan apply&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The very last thing you should do before logging off the server is to run:&lt;br /&gt;
 history -c&lt;br /&gt;
in every shell you were in so they don&#039;t see what we were doing :)&lt;br /&gt;
&lt;br /&gt;
Assuming this server is connected to an ATS, you&#039;ll want to confirm the server is connected to the port you think it is, and test the functionality – that you can power cycle the port and the server will come back up when power is restored. You should do a test power cycle (via the mgmt or AM-based ATS control) while the server is in POST or the BIOS screen – basically, just NOT while the OS is running and disks are mounted.&lt;br /&gt;
&lt;br /&gt;
You should confirm the switch ports to which the server is connected (it may have a 2nd port for the RMM). While the server is rebooting during your ATS test, you can observe output on the switch console that looks like:&lt;br /&gt;
&amp;lt;pre&amp;gt;.Mar  7 09:47:45 PST: %LINK-3-UPDOWN: Interface FastEthernet0/14, changed state to down&lt;br /&gt;
.Mar  7 09:47:46 PST: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/14, changed state to down&lt;br /&gt;
.Mar  7 09:47:47 PST: %LINK-3-UPDOWN: Interface FastEthernet0/14, changed state to up&lt;br /&gt;
.Mar  7 09:47:48 PST: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/14, changed state to up&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This confirms the server is on port 14.&lt;br /&gt;
&lt;br /&gt;
Take this info (ATS, switch ports plus the asset tag) and:&lt;br /&gt;
# activate (or add) the server. Update the cabinet, ATS and port, asset tag. (Switch port not kept in mgmt)&lt;br /&gt;
# update the [[cabinetmap|cabinet map]] to add the server to the correct location, along with the ATS port, and switch port(s)&lt;br /&gt;
# login to the switch serial console and add the customer&#039;s CID (and hostname if they have mult. servers) to the port(s) description tag. Repeat for the RMM port if they have one:&lt;br /&gt;
&amp;lt;pre&amp;gt;switch-p25#conf t&lt;br /&gt;
Enter configuration commands, one per line.  End with CNTL/Z.&lt;br /&gt;
switch-p25(config)#int fa0/10&lt;br /&gt;
switch-p25(config-if)#des col01233 myhostname&lt;br /&gt;
switch-p25(config)#int fa0/11&lt;br /&gt;
switch-p25(config-if)#des col01233 myhostname (RMM)&lt;br /&gt;
switch-p25(config-if)#end&lt;br /&gt;
switch-p25#&lt;br /&gt;
.Mar  7 11:37:17 PST: %SYS-5-CONFIG_I: Configured from console by console&lt;br /&gt;
switch-p25#wr me&lt;br /&gt;
Building configuration...&lt;br /&gt;
[OK]&lt;br /&gt;
switch-p25#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point it&#039;s safe to hand the server over to the customer. Use the &#039;new colo welcome&#039; paste. Edit for:&lt;br /&gt;
# IP(s)&lt;br /&gt;
# if it&#039;s FreeBSD, use the &#039;root&#039; &amp;amp; &#039;user&#039; password line; if it&#039;s Ubuntu, use/edit the &#039;root&#039; password line depending on whether you set the root password, and on what you set up for the normal user account&lt;br /&gt;
# the correct DNS depending on which data center they&#039;re at (remove ns3c if the server is at castle)&lt;br /&gt;
# if they have no raid card, remove that section&lt;br /&gt;
# if they have an RMM, include the RMM paste somewhere in the email&lt;br /&gt;
&lt;br /&gt;
If they&#039;re already a customer, you can exclude the info about their AM login info (since they have it already). If they&#039;re not, you&#039;ll need to reset their AM pass to see it.&lt;br /&gt;
&lt;br /&gt;
Lastly, assuming this server was installed for a new customer and the customer was not yet activated, the CID was probably not established and the server has no CID label. You&#039;ll need to make a note to label it (and add the hostname too if they have multiple servers) next time you&#039;re at the data center, or ask the noc staff to do it for you.&lt;br /&gt;
&lt;br /&gt;
== Dedicated Server Install Checklist ==&lt;br /&gt;
&lt;br /&gt;
o Install OS&lt;br /&gt;
&lt;br /&gt;
o Add JCI - adduser, wheel, visudo&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  adduser -u 955 jci&lt;br /&gt;
  passwd jci&lt;br /&gt;
    &#039;&#039;l4mph0st&#039;&#039;&lt;br /&gt;
  adduser -u 1000 user&lt;br /&gt;
  passwd user           &lt;br /&gt;
    &#039;&#039;newrootXXXX&#039;&#039;          (XXXX is their col0XXXX number)&lt;br /&gt;
  vi /etc/group&lt;br /&gt;
     ( add &amp;quot;jci,user&amp;quot; to group wheel and/or sudo )&lt;br /&gt;
  visudo&lt;br /&gt;
     ( enable group wheel to do anything with password )&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
o Install openssh-clients (yum install openssh-clients)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  yum -y install openssh-clients&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
o Update OS&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  yum -y update   (for CentOS)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
o checkraid.sh&lt;br /&gt;
  As noted above install our RAID disk checks&lt;br /&gt;
&lt;br /&gt;
o config IP addresses&lt;br /&gt;
&lt;br /&gt;
o clear last logins&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  cp /dev/null /var/log/wtmp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
o clear history&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  cp /dev/null ~/.bash_history&lt;br /&gt;
  history -c&lt;br /&gt;
  shutdown -h now&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
o For a suspected spammer, add a rule to firewall2 to limit outbound traffic on the mail ports, e.g.&lt;br /&gt;
before rule 50&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ipfw add 37 pipe 8 ip from 69.55.231.104/30 to any dst-port 25,465,587&lt;br /&gt;
ipfw add 37 pipe 8 ip from 69.55.231.108    to any dst-port 25,465,587&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== After installed in rack ===&lt;br /&gt;
&lt;br /&gt;
  check ssh&lt;br /&gt;
  label switch port&lt;br /&gt;
  update cabinet map&lt;br /&gt;
  update mgmt cabinet number&lt;br /&gt;
&lt;br /&gt;
== RMM Setup ==&lt;br /&gt;
&lt;br /&gt;
The RMM must be configured via the server&#039;s BIOS screen:&lt;br /&gt;
&lt;br /&gt;
Server management &amp;gt; BMC LAN config:&amp;lt;BR&amp;gt;&lt;br /&gt;
down to Intel RMM3 LAN config: &amp;lt;BR&amp;gt;&lt;br /&gt;
IP (static) , mask, g/w&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
user config: &amp;lt;BR&amp;gt;&lt;br /&gt;
disable anonymous. &amp;lt;BR&amp;gt;&lt;br /&gt;
root: set pass to newrootNNNN&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=New_Signups&amp;diff=2218</id>
		<title>New Signups</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=New_Signups&amp;diff=2218"/>
		<updated>2020-03-29T18:52:50Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Dedicated Server Setup */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
TODO- review&lt;br /&gt;
&lt;br /&gt;
= Open source discount signups =&lt;br /&gt;
&lt;br /&gt;
URLS to give to customers:&lt;br /&gt;
https://secure.johncompanies.com/signup/step1.html?svc=bsd-oss&lt;br /&gt;
&lt;br /&gt;
https://secure.johncompanies.com/signup/step1.html?svc=linux-oss&lt;br /&gt;
&lt;br /&gt;
= New Signups = &lt;br /&gt;
New customers sign up for service on our web based forms at&lt;br /&gt;
www.johncompanies.com &lt;br /&gt;
&lt;br /&gt;
When a signup occurs an email is sent to support@johncompanies.com &lt;br /&gt;
and signups get written to two files:&lt;br /&gt;
&lt;br /&gt;
/usr/local/www/jc_pub/data/pending&lt;br /&gt;
&lt;br /&gt;
and&lt;br /&gt;
&lt;br /&gt;
/usr/local/www/jc_pub/data/log/log.pending&lt;br /&gt;
&lt;br /&gt;
The second file is simply a running log of signups, and should not be used for anything.  The only time it is handy is if you are editing the pending file, then save it, and are told that the file has changed since you started editing it.  This means that someone signed up while you were editing the pending file.  What you should do is force the save (thus losing their signup) and then cut and paste the lost signup from /htdocs/colocation/data/log/log.pending.&lt;br /&gt;
&lt;br /&gt;
Lines in the pending file look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;2005-11-08;newcastle;Frederick Wilson;;1405 South Adams;Fort Worth;Texas;76104;US;referred by nmrc.org;LM-1;2;Limit 10;1;207.13.31.48;106&lt;br /&gt;
vaifan@airmail.net;817-798-8637;on;on;&lt;br /&gt;
new;add2;;;replace;replace;preserve;elaine.commadev.com&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The fields are defined as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;signup date;hostname;last, first;company; address;city;state;zip;country;referred by;package id;bandwidth overage option;if gobut is selected, $ amount to limit is listed here;payment method;source ip;template id;&lt;br /&gt;
email;phone #;admin;billing;alt&lt;br /&gt;
new/replace system;create new account/add to existing;hostname of server being replaced;new IP pref;update/merge contacts;update/merge owner info;update/preserve traffic overage;&amp;lt;/pre&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
When a customer signs up, an email will be sent to support@johncompanies.com containing the log entry in the format above.&lt;br /&gt;
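A sketch of pulling individual fields out of one of these records (the example record is the first line of the sample above; field positions follow the definitions given):&lt;br /&gt;

```shell
# First line of a sample pending-file record; fields are ;-separated.
record='2005-11-08;newcastle;Frederick Wilson;;1405 South Adams;Fort Worth;Texas;76104;US;referred by nmrc.org;LM-1;2;Limit 10;1;207.13.31.48;106'

# Pull out a few fields by position:
#   1 = signup date, 2 = hostname, 15 = source IP.
signup_date=$(printf '%s' "$record" | awk -F';' '{print $1}')
hostname=$(printf '%s' "$record" | awk -F';' '{print $2}')
source_ip=$(printf '%s' "$record" | awk -F';' '{print $15}')
echo "$signup_date $hostname $source_ip"
```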
&lt;br /&gt;
The process for setting up a new customer is as follows:&lt;br /&gt;
&lt;br /&gt;
1. Go to the pending customers screen (choose Mgmt site -&amp;gt; New Signups). New customers will be at the bottom of the &amp;quot;VPS Signups&amp;quot; and &amp;quot;Colo Signups&amp;quot; sections. If they’ve paid automatically (via PayPal) the word “PAID” will appear in the “pmt status” column. The other thing you’ll see in this column is “paid- not cleared”- this means they paid with eCheck (via PayPal) and the check has not cleared. This normally takes 3 biz. days and we wait till it clears and shows “PAID” before setting up the server. Customers who pay with a credit card will not appear as &amp;quot;PAID&amp;quot; here. In that case, an email will be sent to support@johncompanies.com indicating a payment has been made (or failed). Once the payment has been paid/cleared, click “process….” &lt;br /&gt;
&lt;br /&gt;
Before setting someone up, you need to decide whether the signup is fraudulent. Examples of fraud can be found in  /usr/local/www/jc_pub/data/fraud-examples &lt;br /&gt;
Generally, &lt;br /&gt;
* NOT FRAUD = non-anonymous email accounts that match who they are (dan@brockman.com, where the person is Dan  Brockman)&lt;br /&gt;
* NOT FRAUD = referred by a customer that actually exists or by kuro5hin&lt;br /&gt;
* MAYBE FRAUD = referred by ‘web search’ or ‘google’&lt;br /&gt;
* MAYBE FRAUD = generic sounding address&lt;br /&gt;
* MAYBE FRAUD = IP address is out of country and address is in US (use dnsstuff link on main signups screen to lookup where IP is)&lt;br /&gt;
* FRAUD = they don’t fill in “referred by” and/or a hostname, and/or leave the bandwidth answer at the default (stop)&lt;br /&gt;
* FRAUD = multiple orders each from the same IP and/or using same email address&lt;br /&gt;
* FRAUD = no hostname or hacker-sounding hostname: 3v1l0n3 (evilone)&lt;br /&gt;
&lt;br /&gt;
If you’re ever in doubt, call the card owner using the phone number given at signup or call the bank number provided for the card. Never use the email as it’s likely the thief’s and not the cardholder’s.&lt;br /&gt;
&lt;br /&gt;
Almost all the info needed is pre-entered into this screen, but some fields will need attention:&lt;br /&gt;
&lt;br /&gt;
* System: The system is already selected based on the template/OS the customer selected. &lt;br /&gt;
* Directory: information provided for you to enter once the system is created (leave blank for managedcolo).&lt;br /&gt;
* Disk: indicates how much disk space the server should have (leave blank for managedcolo).&lt;br /&gt;
* Hostname: self-explanatory. &lt;br /&gt;
* veid: (linux systems only) should be filled in with the significant digits of the customer ID, ex: col01340 = veid 1340. &lt;br /&gt;
* os: indicates which OS the customer wants (feed to linux vm script) &lt;br /&gt;
* ip(s): for linux customers, you should click “ipmap” and scroll down to the first available (green) ip amongst others on the same system. Click on the ip to copy it back to the “ip(s)” field on the form. For FreeBSD customers, the ips available for use are already assigned to the system and should be copied back to this screen once the jail is made (see below). For colo customers, choose an IP appropriate to the data center. i.e. for i2b pick an IP from the 229 block.&lt;br /&gt;
* start date: should reflect the day the system was created. &lt;br /&gt;
* asset tag: colo only&lt;br /&gt;
* password: VPS only&lt;br /&gt;
* cabinet: colo only&lt;br /&gt;
* ats port: colo @ i2b only&lt;br /&gt;
* monitored: should only be checked if the system’s ip/services were supplied to castle to place on the monitor (probes) list. &lt;br /&gt;
&lt;br /&gt;
NOTE: you must use a JavaScript enabled browser to enter new customers otherwise you won’t be given correct options for os templates.&lt;br /&gt;
&lt;br /&gt;
3. create the new system using either [[VPS_Management#jailmake|jailmake]] or [[VPS_Management#vm|vm]] scripts. For posterity, on our older linux systems and older OS&#039;s (pre virt17) we used to use a custom script for each OS version: vemakecentos3  vemakedebian40 vemakedebian30 vemakedebian31 vemakefedora2   vemakefedora6  vemakefedora7  vemakesuse100   vemakesuse93   vemakecentos4   vemakedebian31  vemakefedora4   vemakerh9       vemakeubuntu5   vemakefedora    vemakefedora5   vemakesuse      vemakeubuntu606  vemakeubuntu610 vemakeubuntu704&lt;br /&gt;
&lt;br /&gt;
[[VPS_Management#jailmake|jailmake]] and [[VPS_Management#vm|vm]] both email the new customer their welcome email.&lt;br /&gt;
&lt;br /&gt;
When you are done adding a customer, for both systems: copy back the dir and password (supplied by the VPS make script) into the form. For FreeBSD, copy back the IP which the make script will give back to you.&lt;br /&gt;
&lt;br /&gt;
When you are done filling out all the fields in the pending customer form, click “Activate”. This will create the customer in our database and remove them from the pending list. Nothing is emailed to the customer as a result of this action. If the customer paid via credit card, their info needs to be added manually.&lt;br /&gt;
&lt;br /&gt;
Discussion about choosing IPs ...&lt;br /&gt;
&lt;br /&gt;
When you make a new system, you have to choose what IP to give it.  New FreeBSD servers are configured with a set number of IPs, and as you add new systems to that freebsd server you can use one of the IPs assigned to the host but not currently assigned to a VPS.  You can use the js program to see what IPs are available for assigning to new customers. &lt;br /&gt;
&lt;br /&gt;
However, it is not that simple with the Linux systems. The Linux servers do not get the IPs of their customer systems bound to the actual machine: even on a fully loaded Linux system, running `ifconfig -a` on the base machine shows only one IP - the main IP of that system. Further complicating matters, a Linux system can bind multiple IPs, so it is not possible to assume that the next new Linux system should simply take the IP after the one last assigned.&lt;br /&gt;
&lt;br /&gt;
So, what we do is this: new Linux systems are simply assigned a starting IP, and no new machine (FreeBSD or Linux) is assigned a base IP anywhere within 92-128 IPs of that IP. That gives you 92-128 IPs of room to grow for the Linux systems that will live on that machine.&lt;br /&gt;
&lt;br /&gt;
So, you simply start creating systems from that starting point – and additional IPs that those systems need should also be added from that range, and even other Linux systems on other virt machines should get new IPs from that block of IPs. To find the block of IPs in use, go through bash history to find the last IP used or assigned to a ve. Then, click the “ipmap” link (or pull up Mgmt. -&amp;gt; Reference -&amp;gt; IP Map). IPs not already assigned to a machine will show up as green. In general, we want to use IPs which have never been assigned, or were relinquished years ago (or at least 30 days ago – IMPORTANT!!).&lt;br /&gt;
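The bash-history scan can be sketched as a one-liner. This is a hedged sketch: the sample history text and the command lines in it are illustrative stand-ins for the real /root/.bash_history.&lt;br /&gt;

```shell
# Sketch: find the most recent 69.55.x.x address handed out by scanning
# shell history. The sample text below stands in for /root/.bash_history;
# the command lines in it are illustrative only.
history_sample='vemakecentos4 col01200 69.55.236.10
jailmake col01201 69.55.236.11'
printf '%s\n' "$history_sample" | grep -Eo '69\.55\.[0-9]+\.[0-9]+' | tail -1
# prints 69.55.236.11 - the last IP assigned in this sample
```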
&lt;br /&gt;
If a customer has selected an IP package - i.e. is paying for extra IPs now - then we are obliged to assign all those IPs now. Otherwise, we only ever assign 1 IP.&lt;br /&gt;
&lt;br /&gt;
= The Welcome Emails =&lt;br /&gt;
&lt;br /&gt;
[[VPS_Management#jailmake|jailmake]] and [[VPS_Management#vm|vm]] each ask for an email address as one of their arguments. This email address is used to send a welcome email to the new user. However, the welcome email text is not inside the jailmake/vemake scripts, and it is not on the jails themselves either. Here are all the various welcome emails we have:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/local/www/jc_pub/data/welcome-freebsdp (dynamic freebsd welcome email, 7.x 8.x)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-linux (dynamic linux welcome email)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-debian (dynamic debian/ubuntu welcome email)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-fedora (dynamic fedora/centos welcome email)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-freebsd (original, generic freebsd welcome email)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-freebsd6 (for freebsd6)&lt;br /&gt;
/usr/local/www/jc_pub/data/welcome-freebsd7z (for freebsd7 with zfs DEPRECATED)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
on the main johncompanies server.&lt;br /&gt;
&lt;br /&gt;
When the jailmake script is run, it issues the `fetch` (or wget) command to retrieve it, i.e.&lt;br /&gt;
http://www.johncompanies.com/colocation/data/welcome-freebsdp&lt;br /&gt;
&lt;br /&gt;
and saves it as a temp file, mails it off.&lt;br /&gt;
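Roughly, the dynamic flow amounts to the sketch below. The template text, the __IP__/__PASS__ placeholder names, and the mail invocation are assumptions for illustration, not the script&#039;s actual internals.&lt;br /&gt;

```shell
# Hedged sketch of the dynamic welcome-email flow: fetch the template,
# substitute the IP and a generated root password, then mail it off.
# Placeholder names and template text are illustrative, not the real ones.
template='Welcome! Your server IP is __IP__ and the root password is __PASS__.'
ip='69.55.227.44'
pass=$(head -c 6 /dev/urandom | od -An -tx1 | tr -d ' \n')   # 12 hex chars
body=$(printf '%s\n' "$template" | sed -e "s/__IP__/$ip/" -e "s/__PASS__/$pass/")
printf '%s\n' "$body"
# the real script then saves this to a temp file and mails it, roughly:
#   printf '%s\n' "$body" | mail -s "Welcome" "$email"
```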
&lt;br /&gt;
Welcome emails not noted as dynamic above are emailed to the specified address with the IP added at the top. The password to these accounts was/is generic - not very good. Welcome emails noted as dynamic are parsed by the make script to include the IP and a generated, random root password.&lt;br /&gt;
&lt;br /&gt;
IMPORTANT NOTE: when creating an older-OS ve on a virt (this almost never happens - i.e. one created using the old-style vemakexxxx command), you must use the support address as the email address, because the make script will pull down the dynamic welcome email when it expects to see the old, generic format, and it won’t look good. When you receive the welcome email in support, format it properly and resend it to the customer.&lt;br /&gt;
&lt;br /&gt;
This means three things:&lt;br /&gt;
&lt;br /&gt;
a) In order for jailmake to work, the johncompanies web server needs to be up and running, and that file needs to exist at that URL&lt;br /&gt;
&lt;br /&gt;
b) If you want to edit the welcome email, you only need to edit it on the web server, in one place&lt;br /&gt;
&lt;br /&gt;
c) If a customer for some reason does not receive the welcome email, then you need to go to Mgmt -&amp;gt; Reference -&amp;gt; (file), and copy and paste it into an email to them. For the old-generic emails, make sure to add this line to the very top:&lt;br /&gt;
&lt;br /&gt;
IP: (their IP)&lt;br /&gt;
&lt;br /&gt;
If the welcome email does not exist, jailmake will still work, in the sense that it will create the system, but no welcome email will be sent.&lt;br /&gt;
&lt;br /&gt;
= Dedicated Server Setup =&lt;br /&gt;
&lt;br /&gt;
Before starting the OS install you&#039;ll need to know the following pieces of info:&lt;br /&gt;
&lt;br /&gt;
*CID:&lt;br /&gt;
*Customer has multiple colos: Y/N&lt;br /&gt;
*Asset tag: JC-xxxx&lt;br /&gt;
*rack/location: &lt;br /&gt;
*Service/Package (and any deviations to B/W, nfs space, IPs, etc) and price:&lt;br /&gt;
*RAM in system:&lt;br /&gt;
*IPs included in plan:&lt;br /&gt;
*OS (32 vs 64bit variant):&lt;br /&gt;
*Hostname:&lt;br /&gt;
*Disk partitions, including swap space:&lt;br /&gt;
*Number of initial IPs to assign:&lt;br /&gt;
*Timezone:&lt;br /&gt;
&lt;br /&gt;
Some to most of that info will be provided via the new signup page, assuming the customer ordered the server via our order page. The rest comes from the sales/build department.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the server is built, it should be installed in the rack, booted to the BIOS screen and labeled with the asset tag and the customer&#039;s CID (if available/established).&lt;br /&gt;
&lt;br /&gt;
If an [[IPKVM]] is not already attached, you will need to ascertain which one is available (look in ~user/kvm, usually open for editing in the p4 screen of the mailbox window).&lt;br /&gt;
&lt;br /&gt;
Before loading the server, there are a couple of things to do in the BIOS screen:&lt;br /&gt;
# make sure the date and time are set to UTC&lt;br /&gt;
# make sure the server&#039;s power restore action is set to: last state (basically what we&#039;re trying to do here is: if the server has power pulled, when power is restored it should turn back on - assuming it was on when power was pulled. This is what allows our ATS power cycling to work. If this is not set, when the ATS port is turned off and turned back on, the server may not turn back on).&lt;br /&gt;
# set the boot order to disable network booting or other things that may slow down bootup. &lt;br /&gt;
&lt;br /&gt;
After saving and exiting the BIOS, if the server has a raid card installed, you will see the raid BIOS screen. You should enter this screen and set up a raid mirror, or whatever the customer has requested (if a special request was made, sales will let you know). Usually we use a 3ware card; to enter its config screen, press ALT-3. Use the spacebar to select the 2 drives, then tab to &#039;create unit&#039;. Use the defaults. Create a raid1 mirror (unless the customer asks for something else). Do not enable write cache (assuming no battery exists). F8 to save and exit.&lt;br /&gt;
&lt;br /&gt;
Reboot and load the OS- follow the instructions for pulling the ISO into the IPKVM and booting to it [[IPKVM|here]].&lt;br /&gt;
&lt;br /&gt;
The install should be for the server version of whatever OS was requested. We typically do not install anything other than an sshd (and the ports tree, for FreeBSD). We don&#039;t install a GUI environment. We do not set up auto updates. We do not encrypt home directories.&lt;br /&gt;
&lt;br /&gt;
When selecting an IP, take into consideration the data center: an install at i2b should use an IP from the IPs routed to i2b, and likewise for castle. A customer&#039;s package may come with multiple IPs; however, unless they indicate they want more assigned initially (and show good reason for doing so) we only assign 1 IP. If they are to receive more IPs, you will/may need to configure those post-install (depending on the OS). Ideally all IPs they receive are on the same class C block.&lt;br /&gt;
&lt;br /&gt;
When picking a NIC (most of the installs we do are network versions and require the network to download OS components), it&#039;s somewhat of a crap shoot as to which NIC to choose and configure. Usually we try to begin pinging the IP we assign and watch for it to respond immediately after configuring the NIC. If it does not ping, you can:&lt;br /&gt;
# ask the NOC to swap the network cable to the alternate port&lt;br /&gt;
# go back and configure the alternate NIC (you may have to restart the OS install if it doesn&#039;t properly unconfigure the initial NIC, or you can&#039;t change its config to alleviate an IP conflict)&lt;br /&gt;
&lt;br /&gt;
We set DNS: 69.55.229.3, 69.55.225.225 (for a server @ i2b), 69.55.225.225, 69.55.230.3 (for a server @ castle)&lt;br /&gt;
&lt;br /&gt;
We set up a regular user account: &#039;user&#039;, and we set that password to &#039;newrootNNNN&#039; where NNNN is the significant digits of the customer&#039;s CID, i.e. for col01233 the password is &#039;newroot1233&#039;. In FreeBSD, make sure to add &#039;user&#039; to the wheel group (member of).&lt;br /&gt;
If a root password is allowed to be set, we use the same password: &#039;newrootNNNN&#039; &lt;br /&gt;
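The convention can be sketched in shell (the CID here is the example one from above):&lt;br /&gt;

```shell
# Sketch of the password convention: 'newrootNNNN', where NNNN is the
# significant digits of the customer's col0XXXX CID.
cid=col01233
pass="newroot${cid#col0}"   # strip the leading 'col0'
echo "$pass"                # prints newroot1233
```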
&lt;br /&gt;
Once the OS is installed and you&#039;ve rebooted (after disconnecting the ISO from the KVM) you should confirm all is working: swap, RAM, disk partitions, and network are as they should be. If you were only able to assign 1 DNS server in setup, please add another to /etc/resolv.conf:&lt;br /&gt;
&lt;br /&gt;
@i2b: 69.55.229.3, 69.55.225.225&lt;br /&gt;
@castle: 69.55.225.225, 69.55.230.3&lt;br /&gt;
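Appending the second nameserver by hand amounts to one line in /etc/resolv.conf. A sketch, run here against a scratch copy so nothing real is touched:&lt;br /&gt;

```shell
# Sketch: add a second nameserver (castle example) if it isn't already
# present. A scratch file stands in for /etc/resolv.conf.
resolv=$(mktemp)
echo 'nameserver 69.55.225.225' > "$resolv"
grep -q '69.55.230.3' "$resolv" || echo 'nameserver 69.55.230.3' >> "$resolv"
cat "$resolv"
```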
&lt;br /&gt;
Assuming they have a 3ware raid card, you will need to transfer in the raid CLI tool and our handy script so the customer may check on the health of their raid array, as instructed/encouraged by the welcome email they will receive.&lt;br /&gt;
&lt;br /&gt;
This package is located on mail:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
fetch http://johncompanies.com/tools/3ware/L64.tgz&lt;br /&gt;
tar xvzf L64.tgz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The version you choose will of course depend on the OS/version you&#039;re installing. You should copy the tarball to the /usr/local/sbin dir (or somewhere in their path) and untar it there. Then run the script included in the tarball called checkraid.sh. If it gives an error, that is because on some servers the raid card is recognized as &#039;c1&#039; and on others &#039;c0&#039; or something different. Determine which it is for this server by running:&lt;br /&gt;
 tw_cli info&lt;br /&gt;
&lt;br /&gt;
which will give you output like:&lt;br /&gt;
&amp;lt;pre&amp;gt;Ctl   Model        (V)Ports  Drives   Units   NotOpt  RRate   VRate  BBU&lt;br /&gt;
------------------------------------------------------------------------&lt;br /&gt;
c1    9650SE-8LPML 8         6        1       0       5       1      -&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example it&#039;s &#039;c1&#039;, so edit checkraid.sh and change all instances of &#039;c0&#039; to &#039;c1&#039;.&lt;br /&gt;
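Rather than eyeballing the table, the controller ID can be pulled out of the `tw_cli info` output and substituted into checkraid.sh. A sketch, using the sample output above in place of a live tw_cli run:&lt;br /&gt;

```shell
# Sketch: detect the 3ware controller ID (c0, c1, ...) from `tw_cli info`
# output and rewrite checkraid.sh to match. The sample text stands in for
# running tw_cli on a live server.
tw_out='Ctl   Model        (V)Ports  Drives   Units   NotOpt  RRate   VRate  BBU
------------------------------------------------------------------------
c1    9650SE-8LPML 8         6        1       0       5       1      -'
ctl=$(printf '%s\n' "$tw_out" | awk '$1 ~ /^c[0-9]+$/ {print $1; exit}')
echo "$ctl"   # prints c1
# on the real server, then:  sed -i "s/c0/$ctl/g" checkraid.sh
```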
&lt;br /&gt;
If we&#039;re adding additional IPs (and were not able to do so during initial install), configure as follows:&lt;br /&gt;
&lt;br /&gt;
Centos/Fedora:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 cd /etc/sysconfig/network-scripts/&lt;br /&gt;
 cp -p ifcfg-eth* ~&lt;br /&gt;
 fetch http://johncompanies.com/tools/network/centos/ifcfg-eth.tgz&lt;br /&gt;
 tar xvzf ifcfg-eth.tgz&lt;br /&gt;
&lt;br /&gt;
 vi ifcfg-eth0:&lt;br /&gt;
&lt;br /&gt;
DEVICE=eth0&lt;br /&gt;
TYPE=Ethernet&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
NM_CONTROLLED=yes&lt;br /&gt;
BOOTPROTO=none&lt;br /&gt;
IPADDR=69.55.231.XXX  # change this to main IP address&lt;br /&gt;
PREFIX=24&lt;br /&gt;
GATEWAY=69.55.231.1   # change this to 229.1 if IP addr in other block&lt;br /&gt;
DNS1=69.55.229.3&lt;br /&gt;
DNS2=69.55.225.225&lt;br /&gt;
DNS3=69.55.230.3&lt;br /&gt;
DOMAIN=&amp;quot;johncompanies.com&amp;quot;&lt;br /&gt;
DEFROUTE=yes&lt;br /&gt;
IPV4_FAILURE_FATAL=yes&lt;br /&gt;
IPV6INIT=no&lt;br /&gt;
NAME=&amp;quot;System eth0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  vi ifcfg-eth0:*&lt;br /&gt;
&lt;br /&gt;
DEVICE=eth0:0&lt;br /&gt;
TYPE=Ethernet&lt;br /&gt;
ONBOOT=yes&lt;br /&gt;
NM_CONTROLLED=yes&lt;br /&gt;
BOOTPROTO=none&lt;br /&gt;
IPADDR=69.55.231.XXX  # change this to IP address&lt;br /&gt;
PREFIX=24&lt;br /&gt;
IPV4_FAILURE_FATAL=yes&lt;br /&gt;
IPV6INIT=no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Repeat above as needed for each IP address&lt;br /&gt;
&lt;br /&gt;
Remove the unneeded files.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rm ifcfg-eth0:[3-8]  # change numbers to fit how many IP addresses are needed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
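Writing the alias files can also be scripted for a run of IPs. A sketch that generates ifcfg-eth0:N files into a scratch directory (on the real server the target is /etc/sysconfig/network-scripts/, and the IPs are whatever block was assigned - the ones here are examples):&lt;br /&gt;

```shell
# Sketch: generate ifcfg-eth0:N alias files for additional IPs. Writes to
# a scratch dir; on a real server the target is /etc/sysconfig/network-scripts/.
dir=$(mktemp -d)
i=0
for ip in 69.55.231.105 69.55.231.106; do    # example IPs
    cat > "$dir/ifcfg-eth0:$i" <<EOF
DEVICE=eth0:$i
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=$ip
PREFIX=24
EOF
    i=$((i + 1))
done
ls "$dir"
```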
&lt;br /&gt;
Debian/Ubuntu:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /etc/network&lt;br /&gt;
vi interfaces&lt;br /&gt;
&lt;br /&gt;
Old file was:&lt;br /&gt;
# The loopback network interface&lt;br /&gt;
auto lo&lt;br /&gt;
iface lo inet loopback&lt;br /&gt;
&lt;br /&gt;
# The primary network interface&lt;br /&gt;
auto eth0&lt;br /&gt;
iface eth0 inet static&lt;br /&gt;
        address 69.55.227.4&lt;br /&gt;
        netmask 255.255.255.0&lt;br /&gt;
        network 69.55.227.0&lt;br /&gt;
        broadcast 69.55.227.255&lt;br /&gt;
        gateway 69.55.227.1&lt;br /&gt;
        # dns-* options are implemented by the resolvconf package, if installed&lt;br /&gt;
        dns-nameservers 69.55.225.225&lt;br /&gt;
        dns-search yeppernet.com&lt;br /&gt;
&lt;br /&gt;
new file is:&lt;br /&gt;
# The loopback network interface&lt;br /&gt;
auto lo&lt;br /&gt;
iface lo inet loopback&lt;br /&gt;
&lt;br /&gt;
# The primary network interface&lt;br /&gt;
auto eth0&lt;br /&gt;
iface eth0 inet static&lt;br /&gt;
        address 69.55.227.44&lt;br /&gt;
        netmask 255.255.255.0&lt;br /&gt;
        network 69.55.227.0&lt;br /&gt;
        broadcast 69.55.227.255&lt;br /&gt;
        gateway 69.55.227.1&lt;br /&gt;
        # dns-* options are implemented by the resolvconf package, if installed&lt;br /&gt;
        dns-nameservers 69.55.225.225&lt;br /&gt;
        dns-search yeppernet.com&lt;br /&gt;
&lt;br /&gt;
auto eth0:0&lt;br /&gt;
iface eth0:0 inet static&lt;br /&gt;
        address 69.55.227.55&lt;br /&gt;
        netmask 255.255.255.0&lt;br /&gt;
        network 69.55.227.0&lt;br /&gt;
        broadcast 69.55.227.255&lt;br /&gt;
        gateway 69.55.227.1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
  /etc/init.d/networking restart&lt;br /&gt;
(make sure you have console access in case you screw that up)&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;pre&amp;gt;ifconfig&lt;br /&gt;
eth0      Link encap:Ethernet  HWaddr 00:30:48:28:9d:50&lt;br /&gt;
          inet addr:69.55.227.44  Bcast:69.55.227.255  Mask:255.255.255.0&lt;br /&gt;
          inet6 addr: fe80::230:48ff:fe28:9d50/64 Scope:Link&lt;br /&gt;
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1&lt;br /&gt;
          RX packets:2850149 errors:0 dropped:0 overruns:0 frame:0&lt;br /&gt;
          TX packets:2595566 errors:0 dropped:0 overruns:0 carrier:0&lt;br /&gt;
          collisions:0 txqueuelen:100&lt;br /&gt;
          RX bytes:1193068127 (1.1 GB)  TX bytes:2046984105 (1.9 GB)&lt;br /&gt;
          Base address:0x3000 Memory:fc400000-fc420000&lt;br /&gt;
&lt;br /&gt;
eth0:0    Link encap:Ethernet  HWaddr 00:30:48:28:9d:50&lt;br /&gt;
          inet addr:69.55.227.55  Bcast:69.55.227.255  Mask:255.255.255.0&lt;br /&gt;
          inet6 addr: fe80::230:48ff:fe28:9d50/64 Scope:Link&lt;br /&gt;
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1&lt;br /&gt;
          RX packets:2850149 errors:0 dropped:0 overruns:0 frame:0&lt;br /&gt;
          TX packets:2595566 errors:0 dropped:0 overruns:0 carrier:0&lt;br /&gt;
          collisions:0 txqueuelen:100&lt;br /&gt;
          RX bytes:1193068127 (1.1 GB)  TX bytes:2046984105 (1.9 GB)&lt;br /&gt;
          Base address:0x3000 Memory:fc400000-fc420000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ubuntu 18.04&lt;br /&gt;
&lt;br /&gt;
Ubuntu 18.04 uses netplan instead of /etc/network/interfaces; after editing the YAML, apply the change with `netplan apply`.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vi /etc/netplan/50-cloud-init.yaml&lt;br /&gt;
# This file is generated from information provided by&lt;br /&gt;
# the datasource.  Changes to it will not persist across an instance.&lt;br /&gt;
# To disable cloud-init&#039;s network configuration capabilities, write a file&lt;br /&gt;
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:&lt;br /&gt;
# network: {config: disabled}&lt;br /&gt;
network:&lt;br /&gt;
  version: 2&lt;br /&gt;
  renderer: networkd&lt;br /&gt;
  ethernets:&lt;br /&gt;
    enp0s25:&lt;br /&gt;
     dhcp4: no&lt;br /&gt;
     addresses: [69.55.229.26/24]&lt;br /&gt;
     gateway4: 69.55.229.1  &lt;br /&gt;
     nameservers:&lt;br /&gt;
       addresses: [69.55.229.3,8.8.8.8,8.8.4.4]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The very last thing you should do before logging off the server is to run:&lt;br /&gt;
 history -c&lt;br /&gt;
in every shell you were in so they don&#039;t see what we were doing :)&lt;br /&gt;
&lt;br /&gt;
Assuming this server is connected to an ATS, you&#039;ll want to confirm the server is connected to the port you think it is, and test the functionality - that you can power cycle the port and the server will come back up when power is restored. You should do a test power cycle (via the mgmt or AM-based ATS control) while the server is in POST or the BIOS - basically, just NOT while the OS is running and disks are mounted.&lt;br /&gt;
&lt;br /&gt;
You should confirm the switch ports to which the server is connected (it may have a 2nd port for the RMM). While the server is rebooting during your ATS test, you can observe output on the switch console that looks like:&lt;br /&gt;
&amp;lt;pre&amp;gt;.Mar  7 09:47:45 PST: %LINK-3-UPDOWN: Interface FastEthernet0/14, changed state to down&lt;br /&gt;
.Mar  7 09:47:46 PST: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/14, changed state to down&lt;br /&gt;
.Mar  7 09:47:47 PST: %LINK-3-UPDOWN: Interface FastEthernet0/14, changed state to up&lt;br /&gt;
.Mar  7 09:47:48 PST: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/14, changed state to up&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This confirms the server is on port 14.&lt;br /&gt;
&lt;br /&gt;
Take this info (ATS, switch ports plus the asset tag) and:&lt;br /&gt;
# activate (or add) the server. Update the cabinet, ATS and port, asset tag. (Switch port not kept in mgmt)&lt;br /&gt;
# update the [[cabinetmap|cabinet map]] to add the server to the correct location, along with the ATS port, and switch port(s)&lt;br /&gt;
# login to the switch serial console and add the customer&#039;s CID (and hostname if they have mult. servers) to the port(s) description tag. Repeat for the RMM port if they have one:&lt;br /&gt;
&amp;lt;pre&amp;gt;switch-p25#conf t&lt;br /&gt;
Enter configuration commands, one per line.  End with CNTL/Z.&lt;br /&gt;
switch-p25(config)#int fa0/10&lt;br /&gt;
switch-p25(config-if)#des col01233 myhostname&lt;br /&gt;
switch-p25(config)#int fa0/11&lt;br /&gt;
switch-p25(config-if)#des col01233 myhostname (RMM)&lt;br /&gt;
switch-p25(config-if)#end&lt;br /&gt;
switch-p25#&lt;br /&gt;
.Mar  7 11:37:17 PST: %SYS-5-CONFIG_I: Configured from console by console&lt;br /&gt;
switch-p25#wr me&lt;br /&gt;
Building configuration...&lt;br /&gt;
[OK]&lt;br /&gt;
switch-p25#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point it&#039;s safe to hand the server over to the customer. Use the &#039;new colo welcome&#039; paste. Edit for:&lt;br /&gt;
# IP(s)&lt;br /&gt;
# if it&#039;s FreeBSD, use the &#039;root&#039; &amp;amp; &#039;user&#039; password line; if it&#039;s Ubuntu, use/edit the &#039;root&#039; password line, depending on whether you set the root password, or what you set up for the normal user account&lt;br /&gt;
# the correct DNS depending on which data center they&#039;re at (remove ns3c if the server is at castle)&lt;br /&gt;
# if they have no raid card, remove that section&lt;br /&gt;
# if they have an RMM, include the RMM paste somewhere in the email&lt;br /&gt;
&lt;br /&gt;
If they&#039;re already a customer, you can exclude the info about their AM login info (since they have it already). If they&#039;re not, you&#039;ll need to reset their AM pass to see it.&lt;br /&gt;
&lt;br /&gt;
Lastly, assuming this server was installed for a new customer and the customer was not yet activated, the CID was probably not established and the server has no CID label. You&#039;ll need to make a note to label it (and add the hostname too if they have multiple servers) next time you&#039;re at the data center, or ask the noc staff to do it for you.&lt;br /&gt;
&lt;br /&gt;
== Dedicated Server Install Checklist ==&lt;br /&gt;
&lt;br /&gt;
o Install OS&lt;br /&gt;
&lt;br /&gt;
o Add JCI - adduser, wheel, visudo&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  adduser -u 955 jci&lt;br /&gt;
  passwd jci&lt;br /&gt;
    &#039;&#039;l4mph0st&#039;&#039;&lt;br /&gt;
  adduser -u 1000 user&lt;br /&gt;
  passwd user           &lt;br /&gt;
    &#039;&#039;newrootXXXX&#039;&#039;          (XXXX is their col0XXXX number)&lt;br /&gt;
  vi /etc/group&lt;br /&gt;
     ( add &amp;quot;jci,user&amp;quot; to group wheel and/or sudo )&lt;br /&gt;
  visudo&lt;br /&gt;
     ( enable group wheel to do anything with password )&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
o Install openssh-clients (yum install openssh-clients)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  yum -y install openssh-clients&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
o Update OS&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  yum -y update   (for CentOS)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
o checkraid.sh&lt;br /&gt;
  As noted above, install our RAID disk checks.&lt;br /&gt;
&lt;br /&gt;
o config IP addresses&lt;br /&gt;
&lt;br /&gt;
o clear last logins&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  cp /dev/null /var/log/wtmp&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
o clear history&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  cp /dev/null ~/.bash_history&lt;br /&gt;
  history -c&lt;br /&gt;
  shutdown -h now&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
o For a suspected spammer, add a rule to firewall2 to rate-limit output on the mail ports, e.g.&lt;br /&gt;
before rule 50&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ipfw add 37 pipe 8 ip from 69.55.231.104/30 to any dst-port 25,465,587&lt;br /&gt;
ipfw add 37 pipe 8 ip from 69.55.231.108    to any dst-port 25,465,587&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== After installed in rack ===&lt;br /&gt;
&lt;br /&gt;
  check ssh&lt;br /&gt;
  label switch port&lt;br /&gt;
  update cabinet map&lt;br /&gt;
  update mgmt cabinet number&lt;br /&gt;
&lt;br /&gt;
== RMM Setup ==&lt;br /&gt;
&lt;br /&gt;
The RMM must be configured via the server&#039;s BIOS screen:&lt;br /&gt;
&lt;br /&gt;
Server management &amp;gt; BMC LAN config:&amp;lt;BR&amp;gt;&lt;br /&gt;
down to Intel RMM3 LAN config: &amp;lt;BR&amp;gt;&lt;br /&gt;
IP (static) , mask, g/w&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
user config: &amp;lt;BR&amp;gt;&lt;br /&gt;
disable anonymous. &amp;lt;BR&amp;gt;&lt;br /&gt;
root: set pass to newrootNNNN&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2217</id>
		<title>VPS Management</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2217"/>
		<updated>2020-03-24T00:55:42Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* FreeBSD 10.1 VPS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Common Problems =&lt;br /&gt;
== Login to any machine without a password ==&lt;br /&gt;
&lt;br /&gt;
This is possible via the use of ssh keys. The process is thus:&lt;br /&gt;
&lt;br /&gt;
1. place the public key for your user (root@mail) in the /root/.ssh/authorized_keys file on the server you wish to login to&lt;br /&gt;
 cat /root/.ssh/id_dsa.pub&lt;br /&gt;
(paste that into authorized_keys on the target server). If the file doesn&#039;t exist, create it.&lt;br /&gt;
&lt;br /&gt;
2. enable root login (usually only applies to FreeBSD). Edit the /etc/ssh/sshd_config on the target server and change:&lt;br /&gt;
&amp;lt;tt&amp;gt;#PermitRootLogin no&amp;lt;/tt&amp;gt;&lt;br /&gt;
to&lt;br /&gt;
&amp;lt;tt&amp;gt;PermitRootLogin yes&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Restart the sshd on the target machine. First, find the sshd process: &lt;br /&gt;
 jailps &amp;lt;hostname&amp;gt; | grep sshd &lt;br /&gt;
or &lt;br /&gt;
 vp &amp;lt;VEID&amp;gt; | grep sshd&lt;br /&gt;
&lt;br /&gt;
Look for the process resembling:&lt;br /&gt;
 root     17296  0.0  0.0  5280 1036 ?        Ss    2011   4:27 /usr/sbin/sshd &lt;br /&gt;
(this is the sshd)&lt;br /&gt;
&lt;br /&gt;
Not:&lt;br /&gt;
 root      6270  0.5  0.0  6808 2536 ?        Ss   14:33   0:00 sshd: root [priv]&lt;br /&gt;
(this is an sshd child- someone already ssh&#039;d in as root)&lt;br /&gt;
&lt;br /&gt;
Restart the sshd: &lt;br /&gt;
 kill -1 &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex:&lt;br /&gt;
 kill -1 17296&lt;br /&gt;
&lt;br /&gt;
You may now ssh in.&lt;br /&gt;
&lt;br /&gt;
Once you&#039;re done, IF you enabled root login, you should repeat steps 2 and 3 to disable root logins.&lt;br /&gt;
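Step 1 boils down to appending your public key to the target&#039;s authorized_keys. A sketch, run here against a scratch file rather than a live server (the key string is a dummy):&lt;br /&gt;

```shell
# Sketch: append a public key to authorized_keys only if it isn't already
# there, so repeated runs don't duplicate it. A scratch file stands in for
# /root/.ssh/authorized_keys on the target; the key is a dummy string.
pubkey='ssh-dss AAAAB3...dummy root@mail'
auth=$(mktemp)
grep -qxF "$pubkey" "$auth" || printf '%s\n' "$pubkey" >> "$auth"
grep -qxF "$pubkey" "$auth" || printf '%s\n' "$pubkey" >> "$auth"   # no-op now
wc -l < "$auth"   # 1 line, not 2
```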
&lt;br /&gt;
== Letting someone in who has locked themselves out (killed sshd, lost pwd) ==&lt;br /&gt;
&lt;br /&gt;
There are two ways people frequently lock themselves out - either they forget a password, or they kill off sshd somehow.&lt;br /&gt;
&lt;br /&gt;
These are actually both fairly easy to solve.  First, let&#039;s say someone kills off their sshd, or somehow mangles /etc/ssh/sshd_config such that it no longer lets them in.&lt;br /&gt;
&lt;br /&gt;
Their email may be very short, or it may have all sorts of details about how you should fix sshd_config to let them in ... just ignore all of this. They can fix their own mangled sshd.  Fixing this is very simple.  First, edit the /etc/inetd.conf on their system and uncomment the telnet line:&lt;br /&gt;
&lt;br /&gt;
 telnet stream  tcp     nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
 #telnet stream  tcp6    nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
&lt;br /&gt;
(just leave the tcp6 version of telnet commented)&lt;br /&gt;
&lt;br /&gt;
Then, use jailps to list the processes on their system, and find their inetd process.  Then simply:&lt;br /&gt;
&lt;br /&gt;
 kill -HUP (pid)&lt;br /&gt;
&lt;br /&gt;
where (pid) is the PID of their inetd process.  Now they have telnet running on their system and they can log in and do whatever they need to do.&lt;br /&gt;
&lt;br /&gt;
The only complications that could occur are:&lt;br /&gt;
&lt;br /&gt;
a) their firewall config on our firewall has port 23 blocked, in which case you will need to open that - will be covered in a different lesson.&lt;br /&gt;
&lt;br /&gt;
b) they are not running inetd, so you can&#039;t HUP it.  If this happens, edit their /etc/rc.conf, add the inetd_enable=&amp;quot;YES&amp;quot; line, and then kill&lt;br /&gt;
their jail with /tmp/jailkill.pl - then restart their jail with the jail line from their quad/safe file.  Easy.&lt;br /&gt;
&lt;br /&gt;
If they have forgotten a password,&lt;br /&gt;
&lt;br /&gt;
On 6.x+ you can reset their password with:&lt;br /&gt;
 jexec &amp;lt;jailID from jls&amp;gt; passwd root&lt;br /&gt;
&lt;br /&gt;
Note: the default password for 6.x jails is 8ico2987, for 4.x it is p455agfa&lt;br /&gt;
&lt;br /&gt;
On 4.x, you need to cd to their etc directory&lt;br /&gt;
... for instance:&lt;br /&gt;
&lt;br /&gt;
 cd /mnt/data2/198.78.65.136-col00261-DIR/etc&lt;br /&gt;
&lt;br /&gt;
and run:&lt;br /&gt;
&lt;br /&gt;
 vipw -d .&lt;br /&gt;
&lt;br /&gt;
Then paste in these two lines (there&#039;s a paste with these):&lt;br /&gt;
&lt;br /&gt;
 root:$1$krszPxhk$xkCepSnz3mIikT3vCtJCt0:0:0::0:0:Charlie &amp;amp;:/root:/bin/csh&lt;br /&gt;
 user:$1$Mx9p5Npk$QdMU6c8YQqp2FW2M3irEh/:1001:1001::0:0:User &amp;amp;:/home/user:/bin/sh&lt;br /&gt;
&lt;br /&gt;
overwriting the lines they already have for &amp;quot;user&amp;quot; and &amp;quot;root&amp;quot; - then just tell them that both user and root have been reset to the default password of p455agfa.&lt;br /&gt;
&lt;br /&gt;
For linux, just passwd inside shell or &lt;br /&gt;
 vzctl set &amp;lt;veid&amp;gt; --userpasswd root:p455agfa --save&lt;br /&gt;
&lt;br /&gt;
Starting in 2009 we began giving out randomized passwords for FreeBSD and Linux as the default password. That password is stored with each system in Mgmt. In the event of a reset, look up that stored password, reset the account to it, and refer the customer back to the original password from their welcome email - this way we don’t have to send the password again via email (in clear text).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== sendmail can’t be contacted from ext ip (only locally) ==&lt;br /&gt;
&lt;br /&gt;
By default redhat puts this line in sendmail.mc:&lt;br /&gt;
&lt;br /&gt;
 DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
which makes it only answer on localhost.  Comment it out like:&lt;br /&gt;
&lt;br /&gt;
 dnl DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
and then rebuild sendmail.cf with:&lt;br /&gt;
&lt;br /&gt;
 m4 /etc/mail/sendmail.mc &amp;gt; /etc/sendmail.cf&lt;br /&gt;
&lt;br /&gt;
== virt doesn’t properly let go of ve’s ip(s) when moved to another system ==&lt;br /&gt;
&lt;br /&gt;
On virtuozzo 2.6 systems, it&#039;s been observed that when moving IPs from one virt to another, the routing table sometimes does not get updated to reflect the removal of the IP addresses.&lt;br /&gt;
&lt;br /&gt;
A recent example was a customer that was moving to a new ve on a new virt, and the IP addresses were traded between the two ve&#039;s. After the trade, the two systems were not able to talk to each other. Looking at the routing table on the old system, all the IP addresses were still listed as local, like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;netstat -rn | grep 69.55.225.149&lt;br /&gt;
69.55.225.149   0.0.0.0         255.255.255.255 UH       40 0          0 venet0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was preventing traffic to the other system from being routed properly.&lt;br /&gt;
The solution is to manually delete the route:&lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.225.149 gw 0.0.0.0&lt;br /&gt;
&lt;br /&gt;
Supposedly, this was fixed in 2.6.1&lt;br /&gt;
&lt;br /&gt;
== sshd on FreeBSD 6.2 segfaults ==&lt;br /&gt;
&lt;br /&gt;
First, try reinstalling ssh:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/secure&lt;br /&gt;
cd lib/libssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.sbin/sshd&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.bin/ssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Failing that, find the library that’s messed up:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;ldd /usr/sbin/sshd&lt;br /&gt;
         libssh.so.3 =&amp;gt; /usr/lib/libssh.so.3 (0x280a3000) &lt;br /&gt;
         libutil.so.5 =&amp;gt; /lib/libutil.so.5 (0x280d8000) &lt;br /&gt;
         libz.so.3 =&amp;gt; /lib/libz.so.3 (0x280e4000) &lt;br /&gt;
         libwrap.so.4 =&amp;gt; /usr/lib/libwrap.so.4 (0x280f5000) &lt;br /&gt;
         libpam.so.3 =&amp;gt; /usr/lib/libpam.so.3 (0x280fc000) &lt;br /&gt;
         libbsm.so.1 =&amp;gt; /usr/lib/libbsm.so.1 (0x28103000) &lt;br /&gt;
         libgssapi.so.8 =&amp;gt; /usr/lib/libgssapi.so.8 (0x28112000) &lt;br /&gt;
         libkrb5.so.8 =&amp;gt; /usr/lib/libkrb5.so.8 (0x28120000) &lt;br /&gt;
         libasn1.so.8 =&amp;gt; /usr/lib/libasn1.so.8 (0x28154000) &lt;br /&gt;
         libcom_err.so.3 =&amp;gt; /usr/lib/libcom_err.so.3 (0x28175000) &lt;br /&gt;
         libroken.so.8 =&amp;gt; /usr/lib/libroken.so.8 (0x28177000) &lt;br /&gt;
         libcrypto.so.4 =&amp;gt; /lib/libcrypto.so.4 (0x28183000) &lt;br /&gt;
         libcrypt.so.3 =&amp;gt; /lib/libcrypt.so.3 (0x28276000) &lt;br /&gt;
         libc.so.6 =&amp;gt; /lib/libc.so.6 (0x2828e000) &lt;br /&gt;
         libmd.so.3 =&amp;gt; /lib/libmd.so.3 (0x28373000)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
md5 them and compare against the same libraries on other jail hosts, or on jails running on this host.&lt;br /&gt;
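&lt;br /&gt;
A quick way to do the comparison without eyeballing hashes is to diff the suspect library against a copy pulled from a healthy host. A sketch, with temp files standing in for the two copies of the library:&lt;br /&gt;

```shell
# Compare a suspect library byte-for-byte against a known-good copy.
# The temp files stand in for e.g. /usr/lib/libcrypto.so.4 from the
# suspect host and from a healthy host.
good=$(mktemp); suspect=$(mktemp)
printf 'healthy-bytes' > "$good"
printf 'corrupt-bytes' > "$suspect"
if cmp -s "$good" "$suspect"; then
  verdict="library matches known-good copy"
else
  verdict="library differs - reinstall it"
fi
echo "$verdict"
rm -f "$good" "$suspect"
```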
&lt;br /&gt;
To reinstall libcrypto:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/crypto&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Moving a server to a different account in the management system ==&lt;br /&gt;
&lt;br /&gt;
This is a tough one: you have to touch a bunch of tables in the mysql database.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  systems&lt;br /&gt;
  subs&lt;br /&gt;
  ipmap&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Basically, get the sysid and key everything off that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
update systems set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update subs set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
update ipmap set cid=&#039;newcid&#039; where sysid=xxxxx&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
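&lt;br /&gt;
Since all three tables key off the same sysid and take the same cid change, the statements can be generated rather than typed. A sketch (the sysid and new cid are placeholders):&lt;br /&gt;

```shell
# Generate the three UPDATE statements for a server move, given the sysid
# and the new customer id; review the output before feeding it to mysql.
sysid=12345       # placeholder
newcid=col09999   # placeholder
sql=""
for table in systems subs ipmap; do
  sql="${sql}update $table set cid='$newcid' where sysid=$sysid;
"
done
printf '%s' "$sql"
```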
&lt;br /&gt;
You then need to rename the server disk space to change the old ID to the new one.&lt;br /&gt;
&lt;br /&gt;
For a jail -&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stopjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;     69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;-DIR 69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;-DIR&lt;br /&gt;
vi /usr/local/jail/bin/quad1       [Update the ID in the quad file]&lt;br /&gt;
startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
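&lt;br /&gt;
The two mv commands follow mechanically from the IP and the old/new customer ids, so they can be built and double-checked before running. A sketch with placeholder values:&lt;br /&gt;

```shell
# Build the rename commands for a jail moving between accounts, following
# the IP-custnumber / IP-custnumber-DIR naming convention on the jail hosts.
ip=69.55.228.7   # placeholder IP
oldid=00820      # placeholder old customer number
newid=03438      # placeholder new customer number
mv1="mv ${ip}-col${oldid} ${ip}-col${newid}"
mv2="mv ${ip}-col${oldid}-DIR ${ip}-col${newid}-DIR"
echo "$mv1"
echo "$mv2"
```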
&lt;br /&gt;
For a virt -&lt;br /&gt;
You will have to &lt;br /&gt;
* stop the virt &lt;br /&gt;
* rename the directories &lt;br /&gt;
* modify the config file /vzconf/&amp;lt;veid&amp;gt;.conf &lt;br /&gt;
* rename the virt conf file &lt;br /&gt;
* restart the new virt.&lt;br /&gt;
&lt;br /&gt;
That ought to do it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Re-enable a customer account in the management system ==&lt;br /&gt;
&lt;br /&gt;
For example, to reinstate col03438, in mysql:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
show tables ;&lt;br /&gt;
select * from customers where cid=&#039;col03438&#039; ;&lt;br /&gt;
update customers set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from  systems where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set cancel_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from subs where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
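&lt;br /&gt;
The same re-instate sequence can be emitted for any customer id so it can be reviewed and pasted into mysql in one go. Note the multi-column updates below collapse the per-column statements above; verify against the actual schema first:&lt;br /&gt;

```shell
# Emit the re-instate statements for a given customer id. The combined
# SET clauses are an assumed simplification of the per-column updates.
cid=col03438
stmts="update customers set status=1 where cid='$cid';
update systems set stop_date=NULL, cancel_date=NULL, status=1 where cid='$cid';
update subs set stop_date=NULL, status=1 where cid='$cid';"
printf '%s\n' "$stmts"
```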
&lt;br /&gt;
= Ganeti VPS =&lt;br /&gt;
&lt;br /&gt;
This is how to create a VPS on the Lamphost ganeti cluster.&lt;br /&gt;
I have created scripts for creating a FreeBSD VPS and one for&lt;br /&gt;
creating a Linux VPS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To create a linux VPS use&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/new &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
( for example /root/new vps.example.com 30 2 69.55.238.10 centos-8 )&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To create a FreeBSD VPS, we have to do a full install.  This command creates the&lt;br /&gt;
new VPS:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/newfreebsd12 &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will then have to do the install via VNC.&lt;br /&gt;
&lt;br /&gt;
You need to customize the install by adding a &amp;quot;jci&amp;quot; account and a customer&lt;br /&gt;
account (col0XXXX) as well.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
You will need to install sudo.&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
pkg install sudo    # install sudo package&lt;br /&gt;
visudo              # to add wheel account to sudoers&lt;br /&gt;
vi /etc/group       # add jci and user acct to wheel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You will also have to set up a console in /etc/ttys.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vi /etc/ttys&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= FreeBSD 10.1 VPS (jail5 and jail6) =&lt;br /&gt;
&lt;br /&gt;
This is information on how to manage the FreeBSD 10.1 VPS servers on Jail5.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 virtmake IP[,IP] &amp;lt;CID&amp;gt; hostname ipfw# email {FSP|FB|FS|FM|FP} [size in GB]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
For now, the CID must be unique in its last 2 digits.&lt;br /&gt;
&lt;br /&gt;
Must do a &amp;quot;vm start &amp;lt;CID&amp;gt;&amp;quot; after creating a VPS.&lt;br /&gt;
&lt;br /&gt;
== Starting VPS ==&lt;br /&gt;
To start all vms on the system&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To start a single VPS &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== List running VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm status&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Attach to a user VPS console ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm attach &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To detach from a console, type tilde control-d (~ ctl-d).&lt;br /&gt;
&lt;br /&gt;
== Stop a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Restart a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm restart &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Modify a VPS ==&lt;br /&gt;
&lt;br /&gt;
To modify a VPS config, stop the VPS&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Mount the VPS data file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm mount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
cd /data/vm/private/&amp;lt;CID&amp;gt;/mount&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Modify the user files as necessary.&lt;br /&gt;
&lt;br /&gt;
Then unmount and restart the VPS:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm umount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
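&lt;br /&gt;
The stop/mount/edit/umount/start cycle is mechanical enough to script. A dry-run sketch that prints the vm commands for a CID instead of executing them (vm_cmd is a hypothetical helper, not an existing tool):&lt;br /&gt;

```shell
# Dry-run wrapper over the modify cycle: prints each vm(8)-style command
# for the given CID rather than running it.
vm_cmd() { echo "vm $1 $2"; }   # hypothetical helper: just echoes the command
cid=col01234                    # placeholder CID
steps=$(
  vm_cmd stop "$cid"
  vm_cmd mount "$cid"
  echo "cd /data/vm/private/$cid/mount   # edit files here"
  vm_cmd umount "$cid"
  vm_cmd start "$cid"
)
printf '%s\n' "$steps"
```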
&lt;br /&gt;
== Notes on a new FBSD 10.x VPS (jail6) ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#! /bin/sh&lt;br /&gt;
#&lt;br /&gt;
#  new  &amp;lt;colo#&amp;gt;  &amp;lt;disk&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
memory=1024&lt;br /&gt;
&lt;br /&gt;
x=&amp;quot;$1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
disk=&amp;quot;$2&amp;quot;&lt;br /&gt;
if [ &amp;quot;y&amp;quot; = &amp;quot;y$disk&amp;quot; ] ; then&lt;br /&gt;
    disk=60&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
disk2=`expr $disk - 4`&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vm image provision 04dcdf96-d843-11e6-b131-001e67fcc64f col0$x&lt;br /&gt;
# vm configure col0$x&lt;br /&gt;
sed -e &amp;quot;s/memory=256M/memory=${memory}M/&amp;quot; /vms/col01114/col01114.conf &amp;gt;/vms/col0$x/col0$x.conf&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
# vm start col0$x&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
exit&lt;br /&gt;
 &lt;br /&gt;
vm console col0$x&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
# to increase the customer disk&lt;br /&gt;
#&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
&lt;br /&gt;
##  on the customer VPS&lt;br /&gt;
&lt;br /&gt;
gpart recover vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapoff /dev/vtbd0p3&lt;br /&gt;
gpart delete -i 3 vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
sysctl kern.geom.debugflags=16&lt;br /&gt;
gpart resize -i 2 -a 4k -s $disk2\G vtbd0&lt;br /&gt;
&lt;br /&gt;
growfs /dev/vtbd0p2&lt;br /&gt;
&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
gpart add -t freebsd-swap -a 4k vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapon /dev/vtbd0p3&lt;br /&gt;
&lt;br /&gt;
df -h&lt;br /&gt;
reboot&lt;br /&gt;
&lt;br /&gt;
vi /etc/rc.conf&lt;br /&gt;
  hostname&lt;br /&gt;
  IP Addr&lt;br /&gt;
passwd root&lt;br /&gt;
passwd user&lt;br /&gt;
clear history&lt;br /&gt;
  cp /dev/null /var/log/utx.log&lt;br /&gt;
  cp /dev/null ~root/.history&lt;br /&gt;
  history -c&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
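&lt;br /&gt;
The disk sizing logic in the script above - default to 60 GB when no size argument is given, and make the main partition 4 GB smaller than the zvol to leave room for swap - can be checked in isolation:&lt;br /&gt;

```shell
# Reproduce the script's disk-size defaulting and partition arithmetic.
disk=""                       # what the script sees when no size argument is given
if [ "y" = "y$disk" ]; then   # empty-string test, as written in the script
  disk=60                     # default zvol size in GB
fi
disk2=$(expr "$disk" - 4)     # main partition: zvol size minus 4 GB for swap
echo "quota=${disk}G volsize=${disk}G partition=${disk2}G"
```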
&lt;br /&gt;
= FreeBSD Jails =&lt;br /&gt;
&lt;br /&gt;
== Starting jails: Quad/Safe Files ==&lt;br /&gt;
&lt;br /&gt;
FreeBSD customer systems do not start up automatically at boot time.  When one of our FreeBSD machines boots up, it boots the base OS and does nothing else. To start jails, we put the commands to start each jail into a shell script (or scripts) and run them. Jail startup is something that needs to be actively monitored, which is why we don’t just run the scripts automatically. More on monitoring later.&lt;br /&gt;
&lt;br /&gt;
NOTE: &amp;gt;=7.x we have moved to 1 quad file: &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. Startups are not done by running each quad, but rather [[#startalljails|startalljails]] which relies on the contents of &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. The specifics of this are lower in this article. What follows here applies for pre 7.x systems.&lt;br /&gt;
&lt;br /&gt;
There are eight files in &amp;lt;tt&amp;gt;/usr/local/jail/rc.d&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail3# ls /usr/local/jail/rc.d/&lt;br /&gt;
quad1   quad2   quad3   quad4   safe1   safe2   safe3   safe4&lt;br /&gt;
jail3#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Four quad files and four safe files.&lt;br /&gt;
&lt;br /&gt;
Each file contains roughly a quarter of the system startup blocks (total number of jails divided by 4).&lt;br /&gt;
 &lt;br /&gt;
The reason for this is that if we put all the startups into one large script, it will take too long - the first system in the script starts right after boot, which is great, but the last system may not start for another 20 minutes.&lt;br /&gt;
&lt;br /&gt;
Since there is no way to parallelize this during the startup procedure, we simply open four terminals (in screen window 9) and run each script, one in each terminal. This way they all run simultaneously, and the very last system in each startup script gets started in 1/4th the time it would if there were one large file.&lt;br /&gt;
&lt;br /&gt;
The files are generally organized so that quad/safe 1&amp;amp;2 have only jails from disk 1, and quad/safe 3&amp;amp;4 have jails from disk 2. This helps ensure that only 2 fscks on any disk are going on at once. Further, they are balanced so that all quad/safe’s finish executing around the same time. We do this by making sure each quad/safe has a similar number of jails  and represents a similar number of inodes (see js).&lt;br /&gt;
&lt;br /&gt;
The other, very important reason we do it this way, and this is the reason there are quad files and safe files, is that in the event of a system crash, every single vn-backed filesystem that was mounted at the time of system crash needs to be fsck&#039;d.  However, fsck&#039;ing takes time, so if we shut the system down gracefully, we don&#039;t want to fsck.&lt;br /&gt;
&lt;br /&gt;
Therefore, we have two sets of scripts - the four quad scripts are identical to the four safe scripts except for the fact that the quad scripts contain fsck commands for each filesystem.&lt;br /&gt;
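&lt;br /&gt;
In other words, a safe file can be derived from its quad file by stripping the fsck lines (presumably what buildsafe automates). In miniature, on a sample block:&lt;br /&gt;

```shell
# Derive a "safe" variant from a quad block by dropping the fsck lines.
quad=$(mktemp)
cat > "$quad" <<'EOF'
vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820
fsck -y /dev/vn16
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR
EOF
safe=$(grep -v '^fsck ' "$quad")   # everything except the fsck commands
printf '%s\n' "$safe"
rm -f "$quad"
```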
&lt;br /&gt;
So, if you shut a system down gracefully, start four terminals and run safe1 in window one, and safe2 in window 2, and so on.&lt;br /&gt;
 &lt;br /&gt;
If you crash, start four terminals (or go to screen window 9) and run quad1 in window one, and quad2 in window 2, and so on.&lt;br /&gt;
&lt;br /&gt;
Here is a snip of (a 4.x version) quad2 from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
fsck -y /dev/vn16&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#fsck -y /dev/vn28&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo ‘### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted’&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
fsck -y /dev/vn22&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#fsck -y /dev/vn15&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, two of the systems specified are commented out - presumably those customers cancelled, or were moved to new servers.&lt;br /&gt;
&lt;br /&gt;
Note that the vnconfig line is the simpler command line, not the longer one that was used when the filesystem was first configured.  All that is done is: vnconfig the filesystem, then fsck it, then mount it. The fourth command is the `jail` command used to start the system – but that will be covered later.&lt;br /&gt;
&lt;br /&gt;
Here is the safe2 file from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo ‘### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted’&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, it is exactly the same, but it does not have the fsck lines.&lt;br /&gt;
&lt;br /&gt;
Take a look at the last entry - note that the file is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
&lt;br /&gt;
and the mount point is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
&lt;br /&gt;
This is the general format on all the FreeBSD systems.  The file is always named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber&lt;br /&gt;
&lt;br /&gt;
and the directory is named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber-DIR&lt;br /&gt;
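&lt;br /&gt;
Because the naming is purely mechanical, both paths can be derived from the IP and customer number:&lt;br /&gt;

```shell
# Derive the vn-file and mount-directory names from an IP and customer
# number, per the IP-custnumber / IP-custnumber-DIR convention.
ip=69.55.238.5   # example values from the quad snippet above
cust=col00106
vnfile="/mnt/data2/${ip}-${cust}"
mountdir="${vnfile}-DIR"
echo "$vnfile"
echo "$mountdir"
```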
&lt;br /&gt;
If you run safe when you need a fsck, the mount will fail and jail will fail:&lt;br /&gt;
&lt;br /&gt;
 # mount /dev/vn1c /mnt/data2/jails/65.248.2.131-ns1.kozubik.com-DIR&lt;br /&gt;
 mount: /dev/vn1c: Operation not permitted&lt;br /&gt;
&lt;br /&gt;
No reboot is needed - just run the quad script.&lt;br /&gt;
&lt;br /&gt;
Starting with 6.x jails, we added block delimiters to the quad/safe files, the block looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: nuie.solaris.mu&#039;&lt;br /&gt;
fsck -y /dev/concat/v30v31a&lt;br /&gt;
mount /dev/concat/v30v31a /mnt/data1/69.55.228.218-col01441-DIR&lt;br /&gt;
mount_devfs devfs /mnt/data1/69.55.228.218-col01441-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.228.218-col01441-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.228.218-col01441-DIR nuie.solaris.mu 69.55.228.218 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: nuie.solaris.mu&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These are more than just informative when running quad/safe’s: the echo lines MUST be present for certain tools to work properly. So it’s important that any update to the hostname also be made on the 2 echo lines. For example, if you try to startjail a jail whose hostname is on the jail line but not the echo lines, the command will return with host not found.&lt;br /&gt;
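&lt;br /&gt;
The begin/end markers are what make each jail’s block addressable by hostname: a tool can pull a single block out of a quad file with a simple sed range over the markers. A sketch (the quad contents here are sample lines, and a real tool would need to escape regex metacharacters in the hostname):&lt;br /&gt;

```shell
# Extract one jail's start block from a quad file by hostname, using the
# '## begin ##' / '## end ##' marker lines as a sed address range.
quad=$(mktemp)
cat > "$quad" <<'EOF'
echo '## begin ##: nuie.solaris.mu'
mount /dev/concat/v30v31a /mnt/data1/69.55.228.218-col01441-DIR
echo '## end ##: nuie.solaris.mu'
echo '## begin ##: other.example.com'
mount /dev/md50c /mnt/data1/69.55.230.46-col01213-DIR
echo '## end ##: other.example.com'
EOF
host=nuie.solaris.mu
block=$(sed -n "/## begin ##: $host/,/## end ##: $host/p" "$quad")
printf '%s\n' "$block"
rm -f "$quad"
```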
&lt;br /&gt;
=== FreeBSD 7.x+ notes ===&lt;br /&gt;
&lt;br /&gt;
Starting with the release of FreeBSD 7.x, we are doing jail startups in a slightly different way. First, there is only 1 file: &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/quad1&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are no other quads or corresponding safe files. The reason for this is twofold: 1. we can pass -C to fsck, which tells it to skip the fsck if the fs is clean (no more need for safe files); 2. we have a new startup script which can be launched multiple times, running in parallel to start jails, where quad1 is the master jail file. &lt;br /&gt;
Quad1 can still be run as a shell script, but it would take a very long time to run completely, so it’s not advisable; alternatively, break it down into smaller chunks (like quad1, quad2, quad3, etc.)&lt;br /&gt;
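&lt;br /&gt;
One way a script like startalljails can safely run in several windows at once is for each instance to claim a jail with an atomic lock (e.g. mkdir) before starting it: whoever wins the lock starts that jail, everyone else skips it. This is only a sketch of the idea, not the real script - here the &amp;quot;jails&amp;quot; are just names and &amp;quot;starting&amp;quot; is an echo:&lt;br /&gt;

```shell
# Sketch of a lockfile-based parallel worker: each running copy claims
# jails one at a time with an atomic mkdir, so no jail starts twice.
workdir=$(mktemp -d)            # stands in for the shared lockfile directory
jails="jailA jailB jailC"       # stands in for the blocks listed in quad1
started=""
for j in $jails; do
  if mkdir "$workdir/$j.lock" 2>/dev/null; then   # atomic claim
    started="$started $j"       # a real worker would run the quad1 block here
  fi
done
echo "this worker started:$started"
rm -rf "$workdir"
```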
&lt;br /&gt;
Here is a snip of (a 7.x version) quad1 from jail2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: projects.tw.com&#039;&lt;br /&gt;
mdconfig -a -t vnode -f /mnt/data1/69.55.230.46-col01213 -u 50&lt;br /&gt;
fsck -Cy /dev/md50c&lt;br /&gt;
mount /dev/md50c /mnt/data1/69.55.230.46-col01213-DIR&lt;br /&gt;
mount -t devfs devfs /mnt/data1/69.55.230.46-col01213-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.230.46-col01213-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.230.46-col01213-DIR projects.tw.com 69.55.230.46 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: projects.tw.com&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cancelled jails are no longer commented out and stored in quad1, rather they’re moved to &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/deprecated&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
To start these jails, start the 4 ssh sessions as you would for a normal crash, then instead of running quad1-4, run startalljails in each window. IMPORTANT: before running startalljails, make sure you have run preboot once, as it clears out all the lockfiles and enables startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== Problems with the quad/safe files ==&lt;br /&gt;
&lt;br /&gt;
When you run the quad/safe files, there are two problems that can occur: either a particular system will hang during initialization, or a system will spit output to the screen, impeding your ability to do anything.  Or both.&lt;br /&gt;
&lt;br /&gt;
First off, when you start a jail, you see output like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;Skipping disk checks ...&lt;br /&gt;
adjkerntz[25285]: sysctl(put_wallclock): Operation not permitted&lt;br /&gt;
Doing initial network setup:.&lt;br /&gt;
ifconfig: ioctl (SIOCDIFADDR): permission denied&lt;br /&gt;
lo0: flags=8049&amp;lt;UP,LOOPBACK,RUNNING,MULTICAST&amp;gt; mtu 16384&lt;br /&gt;
Additional routing options: TCP keepalive=YESsysctl:&lt;br /&gt;
net.inet.tcp.always_keepalive: Operation not permitted.&lt;br /&gt;
Routing daemons:.&lt;br /&gt;
Additional daemons: syslogd.&lt;br /&gt;
Doing additional network setup:.&lt;br /&gt;
Starting final network daemons:.&lt;br /&gt;
ELF ldconfig path: /usr/lib /usr/lib/compat /usr/X11R6/lib /usr/local/lib&lt;br /&gt;
a.out ldconfig path: /usr/lib/aout /usr/lib/compat/aout /usr/X11R6/lib/aout&lt;br /&gt;
Starting standard daemons: inetd cron sshd sendmail sendmail-clientmqueue.&lt;br /&gt;
Initial rc.i386 initialization:.&lt;br /&gt;
Configuring syscons: blanktime.&lt;br /&gt;
Additional ABI support:.&lt;br /&gt;
Local package initialization:.&lt;br /&gt;
Additional TCP options:.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, let&#039;s look at this line, near the end:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization:.&lt;br /&gt;
&lt;br /&gt;
This is where the list of daemons that are set to start at boot time will show up.  You might see something like:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache sendmail sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
Or something like this:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: postgres postfix apache&lt;br /&gt;
&lt;br /&gt;
The problem is that many systems (about 4-5 per machine) will hang on that line.  Basically it will get partway through the total daemons to be started:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache&lt;br /&gt;
&lt;br /&gt;
and will just sit there.  Forever.&lt;br /&gt;
&lt;br /&gt;
Fortunately, pressing ctrl-c will break out of it.  Not only will it break out of it, but it will also continue on that same line and start the other daemons:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache ^c sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
and then continue on to finish the startup, and then move to the next system to be started.&lt;br /&gt;
&lt;br /&gt;
So what does this mean?  It means that if a machine crashes, and you start four screen-windows to run four quads or four safes, you need to periodically cycle between them and see if any systems are stuck at that point, causing their quad/safe file to hang.  A good rule of thumb is: if you see a system at that point in the startup, give it another 100 seconds - if it is still at the exact same spot, hit ctrl-c. It&#039;s also a good idea to go back into the quad file (just before the first command in the jail startup block) and note that this jail tends to need a control-c or more time, as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;### NOTE ### slow sendmail&#039;&lt;br /&gt;
echo &#039;### NOTE ###: ^C @ Starting sendmail.&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NEVER&#039;&#039;&#039; hit ctrl-c repeatedly if you don&#039;t get an immediate response - that will cause the following jail’s startup commands to be aborted.&lt;br /&gt;
&lt;br /&gt;
A second problem that can occur is that a jail - maybe the first one in that particular quad/safe, maybe the last one, or maybe one in the middle, will start spitting out status or error messages from one of its init scripts.  This is not a problem - basically, hit enter a few times and see if you get a prompt - if you do get a prompt, that means that the quad/safe script has already completed.  Therefore it is safe to log out (and log out of the user that you su&#039;d from) and then log back in (if necessary).&lt;br /&gt;
&lt;br /&gt;
The tricky thing is if a system in the middle starts flooding the screen with messages, and you hit enter a few times and don&#039;t get a prompt.  Are you not getting a prompt because some subsequent system is hanging at initialization, as discussed above?  Or are you not getting a prompt because that quad file is currently running an fsck?  Usually you can tell by scrolling back in screen’s history to see what it was doing before you started getting the messages.&lt;br /&gt;
&lt;br /&gt;
If you don’t get clues from history, you have to use your judgement - instead of giving it 100 seconds to respond, perhaps give it 2-3 mins ... if you still get no response (no prompt) when you hit enter, hit ctrl-c.  However, be aware that you might still be hitting ctrl-c in the middle of an fsck.  This means you will get an error like &amp;quot;filesystem still marked dirty&amp;quot; and then the vnconfig for it will fail and so will the jail command, and the next system in the quad file will then start starting up.&lt;br /&gt;
&lt;br /&gt;
If this happens, just wait until all of the quad files have finished, and start that system manually.&lt;br /&gt;
&lt;br /&gt;
If things really get weird, like a screen flooded with errors, and you can&#039;t get a prompt, and ctrl-c does nothing, then you need to just eventually (give it ten mins or so) just kill that window with ctrl-p, then k, and then log in again and manually check which systems are now running and which aren&#039;t, and manually start up any that are not.&lt;br /&gt;
&lt;br /&gt;
Don&#039;t EVER risk running a particular quad/safe file a second time.&lt;br /&gt;
If the quad/safe script gets executed twice, reboot the machine immediately.&lt;br /&gt;
&lt;br /&gt;
So, for all the above reasons, anytime a machine crashes and you run all the quads or all the safes, &#039;&#039;&#039;always&#039;&#039;&#039; check every jail afterwards to make sure it is running - even if you have no hangs or complications at all.&lt;br /&gt;
Run this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailpsall|jailpsall]]&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Note: [[#postboot|postboot]] also populates ipfw counts, so it &#039;&#039;&#039;should not be run multiple times&#039;&#039;&#039;,  use &amp;lt;tt&amp;gt;jailpsall&amp;lt;/tt&amp;gt; for subsequent extensive ps’ing&lt;br /&gt;
&lt;br /&gt;
And make sure they all show as running.  If one does not show as running, check its /etc/rc.conf file to see if it is using a different hostname before starting it manually.&lt;br /&gt;
&lt;br /&gt;
One thing we have implemented to alleviate these startup hangs and noisy jails is to put jail start blocks that are slow or hang-prone at the bottom of the safe/quad file. Further, for each bad jail we note in each quad/safe, just before the start block, something like:&lt;br /&gt;
&lt;br /&gt;
 echo ‘### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted’&lt;br /&gt;
&lt;br /&gt;
That way we’ll be prepared to ^C when we see that message appear during the quad/safe startup process. If you observe a new, undocumented hang, &#039;&#039;&#039;after&#039;&#039;&#039; the quad/safe has finished, place a line similar to the above in the quad file, move the jail start block to the end of the file, then run [[#buildsafe|buildsafe]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 4.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which partition, IP to put it on, find an unused vn, and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vnX hostname shorthost quad/safe# ipfw# email [gb disk, default 4]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail14# /tmp/jailmake 69.55.226.152 /mnt/data2/69.55.226.152-col00182 vn23 negev.cerastes.org negev 4 182 cerastes@cerastes.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 6.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which gvinum vols are free (and which mnt (disk) each gvinum goes with/to), IP to put it on and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vN[,vN] hostname shorthost quad/safe# ipfw# email&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail19# jailmake 69.55.236.17 /mnt/data1/69.55.236.17-col01574 v10 uncle-leo.commadev.com uncle-leo 1 1574 lee@commadev.com,lee@gmail.com&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Changing an ip for freebsd VPS ==&lt;br /&gt;
&lt;br /&gt;
*stopjail (hostname)&lt;br /&gt;
*on systems using vnfile or mdfile: rename vnfile/mdfile, using new ip&lt;br /&gt;
*rename dir using new ip&lt;br /&gt;
*Edit quad (make sure to update all lines: directory, vn/mdfile)&lt;br /&gt;
*buildsafe (not necessary on systems having only quad1)&lt;br /&gt;
*if you&#039;re adding an ip not already on the host machine: &amp;lt;tt&amp;gt;ipadd x.x.x.x&amp;lt;/tt&amp;gt;&lt;br /&gt;
*startjail (hostname)&lt;br /&gt;
*if backups: &amp;lt;tt&amp;gt;mvbackups&amp;lt;/tt&amp;gt; ...&lt;br /&gt;
*Edit dir/ip in mgmt&lt;br /&gt;
*if any firewall rules exist, update them to use the new ip (be careful: make sure there aren&#039;t multiple rules per ipfw# - search by colo#)&lt;br /&gt;
*(if customer asks and has domains) update their domains on ns1c&lt;br /&gt;
*optional: update ipfw counters on the host server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Rename a gconcat vol ==&lt;br /&gt;
&lt;br /&gt;
*stop jail, unmount everything&lt;br /&gt;
*gconcat stop vol&lt;br /&gt;
*gconcat clear /dev/gvinum/vN&lt;br /&gt;
*gconcat clear /dev/gvinum/vN+1&lt;br /&gt;
*gconcat label -v newname /dev/gvinum/vN /dev/gvinum/vN+1&lt;br /&gt;
*bsdlabel /dev/concat/newname&lt;br /&gt;
(make sure a partition is not fstype unused, if so change to: 4.2BSD)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Remaking a VPS (on same jail) ==&lt;br /&gt;
&lt;br /&gt;
# take him down (stopjail)&lt;br /&gt;
# make note of host, vn/gvin/md, ip&lt;br /&gt;
# remove from quad/safe&lt;br /&gt;
# move vnfile (rename)&lt;br /&gt;
# remove dir (jailmake will make a new one)&lt;br /&gt;
# remove ipfw counts&lt;br /&gt;
# jailmake&lt;br /&gt;
# update backup script&lt;br /&gt;
# update db with new dir/ip (use the “move” link, mark old as stopped today)&lt;br /&gt;
# update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (FreeBSD) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all jails back up as soon as possible. Note the time; you’ll need it to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log. If there are no messages, the machine may just be really busy- wait a bit (5-10min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. Note: if you see messages about swap space being exhausted, the server is out of memory; however, it may recover briefly enough for you to get a jtop in to see who&#039;s launched a ton of procs (most likely) and then issue a quick jailkill to get it back under control.&lt;br /&gt;
&lt;br /&gt;
If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. if you can’t ssh into the DRAC card as root, using the standard root pass, and issue&lt;br /&gt;
 racadm serveraction hardreset&lt;br /&gt;
), you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console:&lt;br /&gt;
 tip jailX&lt;br /&gt;
immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts- look out for the RAID card output- does everything look healthy? The output may be scrambled, look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple times during the boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly then you won’t get any console output, 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
IMPORTANT NOTE: on some older FreeBSD systems, there will be no output to the video (KVM) console as the machine boots. The console output is redirected to the serial port, so if a jail host crashes and you attach a KVM, the output during the bootup procedure will not be shown on the screen. However, when the bootup is done, you will get a login prompt on the screen and will be able to log in as normal. The serial console redirect is configured in &amp;lt;tt&amp;gt;/boot/loader.conf&amp;lt;/tt&amp;gt;; comment out that line if you want to catch output on the KVM.&lt;br /&gt;
On newer systems it sends most output to both locations. &lt;br /&gt;
&lt;br /&gt;
=== Assess the health of the server ===&lt;br /&gt;
Once the server boots up fully, you should be able to ssh in. Look around- make sure all the mounts are there and reporting the correct size/usage (i.e. /mnt/data1 /mnt/data2 /mnt/data3 - look in /etc/fstab to determine which mount points should be there), check to see if RAID mirrors are healthy. See [[RAID_Cards#Common_CLI_commands_.28megacli.29|megacli]], [[#aaccheck|aaccheck]]&lt;br /&gt;
&lt;br /&gt;
Before you start the jails, you need to run [[#preboot|preboot]]. This will do some assurance checks to make sure things are prepped to start the jails. Any issues that come out of preboot need to be addressed before starting jails.&lt;br /&gt;
&lt;br /&gt;
=== Start jails ===&lt;br /&gt;
[[#Starting_jails:_Quad.2FSafe_Files|More on starting jails]]&lt;br /&gt;
Customer jails (the VPSs) do not start up automatically at boot time. When a FreeBSD machine boots up, it starts only the base system and does nothing else. To start jails, we put the commands to start each jail into a shell script(s) and run the script(s). Jail startup is something that needs to be actively monitored, which is why we don’t just run the script automatically. &lt;br /&gt;
&lt;br /&gt;
In order to start jails, we run the quad files: quad1 quad2 quad3 and quad4 (on new systems there is only quad1). If the machine was cleanly rebooted (which wouldn&#039;t be the case if this was a crash), you may run the safe files (safe1 safe2 safe3 safe4) in lieu of quads. &lt;br /&gt;
&lt;br /&gt;
Open up 4 logins to the server (use the windows in [[Screen#Screen_Organization|a9]])&lt;br /&gt;
In each of the 4 windows you will:&lt;br /&gt;
&lt;br /&gt;
If there is a [[#startalljails|startalljails]] script (and only quad1), run that command in each of the 4 windows. It will parse through the quad1 file and start each jail. Follow the instructions [[#Problems_with_the_quad.2Fsafe_files|here]] for monitoring startup. Note that you can be a little more lenient with jails that take a while to start- startalljails will work around the slow jails and start the rest. As long as there aren&#039;t 4 jails which are &amp;quot;hung&amp;quot; during startup, the rest will get started eventually.&lt;br /&gt;
	-or-&lt;br /&gt;
If there is no startalljails script, there will be multiple quad files. In each of the 4 windows, start each of the quads. i.e. start quad1 in window1, quad2 in window2 and so on. DO NOT start any quad twice. It will crash the server. If you accidentally do this, just jailkill all the jails which are in the quad and run the quad again. Follow the instructions here for monitoring quad startup.&lt;br /&gt;
&lt;br /&gt;
Note the time the last jail boots- this is what you will enter in the crash log.&lt;br /&gt;
&lt;br /&gt;
Save the crash log.&lt;br /&gt;
&lt;br /&gt;
=== Check to make sure all jails have started ===&lt;br /&gt;
There&#039;s a simple script which will make sure all jails have started, and enter the ipfw counter rules: [[#postboot|postboot]] &lt;br /&gt;
Run postboot, which will do a jailps on each jail it finds (excluding commented out jails) in the quad file(s). We&#039;re looking for 2 things:&lt;br /&gt;
# systems spawning processes out of control or running too many procs&lt;br /&gt;
# jails which haven&#039;t started&lt;br /&gt;
On 7.x and newer systems it will print out the problems (which jails haven&#039;t started) at the conclusion of postboot. &lt;br /&gt;
On older systems you will need to watch closely to see if/when there&#039;s a problem, namely:&lt;br /&gt;
 &lt;br /&gt;
 [hostname] doesnt exist on this server&lt;br /&gt;
&lt;br /&gt;
When you get this message, it means one of 2 things:&lt;br /&gt;
1. the jail really didn&#039;t start:&lt;br /&gt;
When a jail doesn&#039;t start it usually boils down to a problem in the quad file. Perhaps the path name is wrong (data1 vs data2) or the name of the vn/mdfile is wrong. Once this is corrected, you will need to run the commands from the quad file manually, or you may use &amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. the customer has changed their hostname (and not told us) so their jail &#039;&#039;is&#039;&#039; running, just under a different hostname:&lt;br /&gt;
On systems with jls, this is easy to rectify. First, get the customer info: &amp;lt;tt&amp;gt;g &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
Then look for the customer in jls: &amp;lt;tt&amp;gt;jls | grep &amp;lt;col0XXXX&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
From there you will see their new hostname- you should update that hostname in the quad file: don&#039;t forget to edit it on the &amp;lt;tt&amp;gt;## begin ##&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;## end ##&amp;lt;/tt&amp;gt; lines, and in mgmt. &lt;br /&gt;
On older systems without jls, this will be harder; you will need to look further to find their hostname- perhaps it&#039;s in their /etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once all jails are started, do some spot checks- try to ssh or browse to some customers, just to make sure things are really ok.&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 7.x/8.x jail ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (md) ==&lt;br /&gt;
&lt;br /&gt;
NOTE: this doesn’t apply to mx2, which uses gvinum- use the same procedure as for 6.x&lt;br /&gt;
NOTE: if you unmount before mdconfig, re-mdconfig (attach) then unmount then mdconfig -u again &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
(parts to change/customize are &amp;lt;tt&amp;gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from &amp;lt;tt&amp;gt;js&amp;lt;/tt&amp;gt;. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new md on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. Unmount it (including their devfs) but leave the md config’d (so if you use stopjail, you will have to re-mdconfig it)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mdfile and mount name, and to see the current md device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. When there&#039;s enough room to place new system on an alternate, or the same drive:&lt;br /&gt;
USE CAUTION not to overwrite (touch, mdconfig) existing md!!&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s 10g -f /mnt/data3/69.55.234.66-col01334 -u 97&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md97&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new space is on a different drive, move the mount point directory AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
md-unconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
archive old mdfile. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/old-col00241-mdfile-noarchive-20091211&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (vq1) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
(run &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, &amp;lt;tt&amp;gt;du&amp;lt;/tt&amp;gt; the dump file to confirm its size roughly matches the usage figure)&lt;br /&gt;
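The du sanity check above can be illustrated with a small scratch file (a real dump lives on the backup NFS mount and is far larger; the filename is just the pattern used in this procedure):&lt;br /&gt;

```shell
# Write a 64KB scratch "dump" and report its on-disk size in KB.
work=$(mktemp -d)
dd if=/dev/zero of="$work/col00241.20120329.noarchive.dump" bs=1024 count=64 2>/dev/null
kb=$(du -k "$work/col00241.20120329.noarchive.dump" | awk '{print $1}')
echo "dump file is ${kb}KB"   # compare this against the customer's usage figure
```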
&lt;br /&gt;
&lt;br /&gt;
unconfigure and remove old mdfile&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
rm /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(there should now be enough space to recreate your bigger system. If not, run sync a couple times)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new system (ok to reuse old mdfile and md#):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;10&amp;lt;/span&amp;gt;g -f /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt; -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
umount nfs:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbu&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If md# changed (or mount point), edit quad (&amp;lt;tt&amp;gt;vq1&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. Optional: archive old mdfile&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;br&amp;gt;&lt;br /&gt;
gzip -c old-col01588-mdfile-noarchive-20120329 &amp;gt; /deprecated/old-col01588-mdfile-noarchive-20120329.gz&amp;lt;br&amp;gt;&lt;br /&gt;
mbu&amp;lt;br&amp;gt;&lt;br /&gt;
rm  old-col01588-mdfile-noarchive-20120329&amp;lt;/tt&amp;gt;&lt;br /&gt;
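The archive step above can be run against a scratch file to see the commands end-to-end (on a real host, mbm/mbu mount and unmount the backup NFS mounts, and /deprecated is the archive destination; the paths here are temporary stand-ins):&lt;br /&gt;

```shell
# gzip the old mdfile, verify the archive, then remove the original.
work=$(mktemp -d)
f=old-col01588-mdfile-noarchive-20120329
printf 'sample mdfile contents\n' > "$work/$f"
gzip -c "$work/$f" > "$work/$f.gz"
gzip -t "$work/$f.gz"    # verify the archive is readable before removing the original
rm "$work/$f"
```

Running &amp;lt;tt&amp;gt;gzip -t&amp;lt;/tt&amp;gt; before the rm is cheap insurance when the gzip is the only remaining copy.&lt;br /&gt;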
&lt;br /&gt;
== Adding disk to a 6.x jail (gvinum/gconcat) ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (gvinum/gconcat) ==&lt;br /&gt;
&lt;br /&gt;
(parts to change are &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;highlighted&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them (explains about downtime, etc).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to put the customer&#039;s new space on a different partition (and create the new volume on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;[[#stopjail|stopjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;[[#g|g]] &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mount name and existing volume/device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5a. When there&#039;s enough room to place the new system on an alternate, or the same, drive (using only UNUSED gvinum volumes- including ones in use by the system in question):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure the new device:&amp;lt;br&amp;gt;&lt;br /&gt;
A. for a 2G system (single gvinum volume):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;bsdlabel -r -w /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
-or- &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
B. for a &amp;gt;2G system (create a gconcat volume):&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Other valid gconcat examples:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label v82-v84v109v112 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v109 /dev/gvinum/v112&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat label v82v83 /dev/gvinum/v82 /dev/gvinum/v83&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Note, long names will truncate: v144v145v148-v115 will truncate to v144v145v148-v1 (so you will refer to it as v144v145v148-v1 thereafter)&lt;br /&gt;
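The truncation described above can be reproduced with cut- the example implies GEOM volume names are cut off at 15 characters:&lt;br /&gt;

```shell
# First 15 characters of the over-long gconcat name from the note above.
name=v144v145v148-v115
short=$(printf '%s' "$name" | cut -c1-15)
echo "$short"   # v144v145v148-v1
```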
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; OR &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
A. &amp;lt;tt&amp;gt;mount /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
-or-&amp;lt;br&amp;gt;&lt;br /&gt;
B. &amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;) , run &amp;lt;tt&amp;gt;buildsafe&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space (i.e. if you want/need to reuse the existing gvinum volumes and add on more):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
(run df to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;gconcat/v106-v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, du the dump file to confirm its size roughly matches the usage figure)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure the old gconcat volume&amp;lt;br&amp;gt;&lt;br /&gt;
list member gvinum volumes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat list &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Output will resemble:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;Geom name: v106v107&lt;br /&gt;
State: UP&lt;br /&gt;
Status: Total=2, Online=2&lt;br /&gt;
Type: AUTOMATIC&lt;br /&gt;
ID: 3530663882&lt;br /&gt;
Providers:&lt;br /&gt;
1. Name: concat/v106v107&lt;br /&gt;
   Mediasize: 4294966272 (4.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e2&lt;br /&gt;
Consumers:&lt;br /&gt;
1. Name: gvinum/sd/v106.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 0&lt;br /&gt;
   End: 2147483136&lt;br /&gt;
2. Name: gvinum/sd/v107.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 2147483136&lt;br /&gt;
   End: 4294966272&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
stop volume and clear members&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat stop &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat clear &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;gvinum/sd/v106.p0.s0 gvinum/sd/v107.p0.s0&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create new device- and its ok to reuse old/former members&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v106 /dev/gvinum/v107&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;), run buildsafe&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
TODO: clean up/clear old gvin/gconcat vol&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - steps to tack on a new gvin to existing gconcat- leads to corrupted fs&lt;br /&gt;
bsdlabel -e /dev/concat/v82-v84&lt;br /&gt;
&lt;br /&gt;
To figure out new size of the c partition, multiply 4194304 by the # of 2G gvinum volumes and subtract the # of 2G volumes:&lt;br /&gt;
10G: 4194304 * 5 - 5 = 20971515&lt;br /&gt;
8G: 4194304 * 4 - 4 = 16777212&lt;br /&gt;
6G: 4194304 * 3 - 3 = 12582909&lt;br /&gt;
4G: 4194304 * 2 - 2 = 8388606&lt;br /&gt;
&lt;br /&gt;
To figure out the new size of the a partition, subtract 16 from the c partition:&lt;br /&gt;
10G: 20971515 - 16 = 20971499&lt;br /&gt;
8G: 16777212 - 16 = 16777196&lt;br /&gt;
6G: 12582909 - 16 = 12582893&lt;br /&gt;
4G: 8388606 - 16 = 8388590&lt;br /&gt;
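Both tables above can be recomputed with shell arithmetic- each 2G gvinum volume contributes 4194304 sectors minus 1, and the a partition loses the 16-sector offset:&lt;br /&gt;

```shell
# Recompute the c and a partition sizes for 4G through 10G volumes.
for n in 2 3 4 5; do
  c=$(( 4194304 * n - n ))   # size of the c partition
  a=$(( c - 16 ))            # size of the a partition
  echo "$(( 2 * n ))G: c=$c a=$a"
done
```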
&lt;br /&gt;
Orig:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a:  8388590       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c:  8388606        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
New:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a: 12582893       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c: 12582909        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
sync; sync&lt;br /&gt;
&lt;br /&gt;
growfs /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
sync&lt;br /&gt;
&lt;br /&gt;
fsck -fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
(keep running fsck’s till NO errors)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 4.x jail ==&lt;br /&gt;
&lt;br /&gt;
NOTE: if you vnunconfig before unmounting, re-vnconfig, then unmount, then vnunconfig&lt;br /&gt;
 &lt;br /&gt;
If someone wants more disk space, there’s a paste for it; send it to them.&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to dump to a different partition and create the new vn on the current partition. If there’s no space to make the dump, then move old vn’s to backup or, worse, dump to backup.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. make vnfile, newfs and mount. USE CAUTION not to overwrite existing vn. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;vnconfig -T -S 9g -s labels -c /dev/vn30 /mnt/data2/65.214.160.117-col00496&amp;lt;br&amp;gt;&lt;br /&gt;
disklabel -r -w vn30 auto&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/vn30c&amp;lt;br&amp;gt;&lt;br /&gt;
mkdir /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/vn30c /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
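The paths in the example follow a fixed naming convention. These are hypothetical helper functions (not existing commands on the hosts) that just illustrate it: the vnfile lives at /mnt/dataN/&lt;IP&gt;-&lt;CID&gt;, and its mountpoint is the same path with a -DIR suffix.

```shell
# Hypothetical helpers illustrating the vnfile naming convention:
# vnfile       -> /mnt/data<N>/<IP>-<CID>
# mountpoint   -> same path plus "-DIR"
vnfile_path() { echo "/mnt/data$1/$2-$3"; }
vnfile_dir()  { echo "$(vnfile_path "$@")-DIR"; }

vnfile_dir 2 65.214.160.117 col00496
```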
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;dump -0a -f - /dev/vn1 | restore -r -f -&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. &amp;lt;tt&amp;gt;rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. unmount and unvnconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;umount /dev/vnNNc &amp;lt;br&amp;gt;&lt;br /&gt;
vnconfig -u /dev/vnNN&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. move vnfile. Ex: &amp;lt;tt&amp;gt;mv /mnt/data1/69.55.237.26-col00241 /mnt/data1/old-col00241-vnfile-20110312-noarchive&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove old dir: &amp;lt;tt&amp;gt;rmdir /mnt/data1/69.55.237.26-col00241-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. edit quad/safe to point to new location, run &amp;lt;tt&amp;gt;[[#buildsafe|buildsafe]]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start jail: &amp;lt;tt&amp;gt;[[#startjail|startjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
13. mv backups if necessary&lt;br /&gt;
 &lt;br /&gt;
WARNING: if you restore to an unmounted vn, you’re actually restoring to /mnt/data1.&lt;br /&gt;
You can tar the restored files and redo the steps above to recover.&lt;br /&gt;
 &lt;br /&gt;
NOTE: you can’t move a customer to a system whose base sys version differs from the one they came from (e.g. BSD 4.8 to 4.5 won’t work)&lt;br /&gt;
 &lt;br /&gt;
NOTE: we don’t mount procfs&#039;s to anyone’s system by default.&lt;br /&gt;
&lt;br /&gt;
NOTE: mount_nulls: &lt;br /&gt;
goofy-looking mount_nulls are also seen in df -k on jail2&amp;lt;br&amp;gt;&lt;br /&gt;
null mounts also need to be added in the safe and quad files&amp;lt;br&amp;gt;&lt;br /&gt;
if the customer wants another, use an existing safe/quad entry as an example and add it to the file. Also exec the mount command at the prompt to add the mount live (no reboot, remount or rejail necessary)&lt;br /&gt;
&lt;br /&gt;
== Moving customer to another jail machine ==&lt;br /&gt;
&lt;br /&gt;
Systems should only be moved between similarly-versioned jails. If you cannot get a match, you should go to freebsd.org and see what the differences were between the version you&#039;re moving from and the new version. Many of the updates will be to drivers and kernels - these don&#039;t affect the customer. Perhaps they updated traceroute or a library...you should copy the new/changed files into the VPS/jail on the new host. An alternate method for moving to a different version would be to buildworld to bring their full distribution up to date. Optional steps for this kind of upgrade are included below.&lt;br /&gt;
&lt;br /&gt;
1. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;[[#jailkill|jailkill]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. create new device on target system&amp;lt;br&amp;gt;&lt;br /&gt;
4.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig -T -S 4g -s labels -c /dev/vn1 /mnt/data1/69.55.22x.x-col00XXX&lt;br /&gt;
disklabel -r -w vn1 auto&lt;br /&gt;
newfs /dev/vn1c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;bsdlabel -r -w /dev/gvinum/v1&lt;br /&gt;
newfs /dev/gvinum/v1a&lt;br /&gt;
 or &lt;br /&gt;
gconcat label v1-v3 /dev/gvinum/v1 /dev/gvinum/v2 /dev/gvinum/v3&lt;br /&gt;
bsdlabel -r -w /dev/concat/v1-v3&lt;br /&gt;
newfs /dev/concat/v1-v3a&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7.x+:&amp;lt;br&amp;gt;&lt;br /&gt;
Run jailmakeempty, then skip steps 4 &amp;amp; 7 below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. make and mount the dir on new system. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;pre&amp;gt;mkdir /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
chmod 755 /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
mount [device] /mnt/data2/69.55.230.3-col00123-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. [[#stopjail|stopjail]] &amp;lt;hostname&amp;gt; 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. dump fs to new system. ex: &amp;lt;tt&amp;gt;[[#dumpremoterestore|dumpremoterestore]] /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&amp;lt;/tt&amp;gt; (make sure you can ssh as root on remote machine) &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. OPTIONAL BUILDWORLD:&amp;lt;br&amp;gt;&lt;br /&gt;
 cd /usr/src&lt;br /&gt;
 make world DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
 cd etc&lt;br /&gt;
 make distribution DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
(you may have to rm an openssh file and re-make dist)&lt;br /&gt;
&lt;br /&gt;
 rm -rf /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/periodic/daily/400.status-disks&lt;br /&gt;
 vi /etc/periodic/security/100.chksetuid&lt;br /&gt;
replace: &amp;lt;tt&amp;gt;MP=`mount -t ufs | grep -v &amp;quot; nosuid&amp;quot; | awk &#039;{ print $3 }&#039; | sort`&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
with: &amp;lt;tt&amp;gt;MP=&#039;/&#039; (use single quotes)&amp;lt;/tt&amp;gt;&lt;br /&gt;
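The 100.chksetuid edit above can be done with a one-line sed substitution. This sketch demonstrates the substitution on a temp copy rather than the real file (the in-place `-i` flag differs between GNU and BSD sed, so the result goes to stdout):

```shell
# Sketch of the 100.chksetuid edit: replace the dynamic mount list
# with a fixed MP='/' (single quotes, as the text above requires).
# Demonstrated on a temp file, not /etc/periodic/security/100.chksetuid.
f=$(mktemp)
echo 'MP=`mount -t ufs | grep -v " nosuid" | awk ...`' > "$f"
sed "s|^MP=.*|MP='/'|" "$f"
rm -f "$f"
```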
&lt;br /&gt;
Ask user if they want ports overwritten with current, if yes:&lt;br /&gt;
 cp -r /usr/ports /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/usr&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d .&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw -d .&lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cat /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target:&lt;br /&gt;
 cat &amp;gt; /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. edit quad on source system, and copy over the entries for the jail to the target system. &lt;br /&gt;
Take care that the vn/gvinum/gconcat/md devices on the target system aren’t in use and that the /mnt/dataN path doesn’t need changing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. run [[#buildsafe|buildsafe]] (if this is &amp;lt;=6.x) on target system (copies changes made to quad into safe file)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove ip from source system. Ex: &amp;lt;tt&amp;gt;ipdel 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. add ip to target system. Ex: &amp;lt;tt&amp;gt;ipadd 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start new system, with [[#startjail|startjail]] or manually, pasting entries found from running &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; on the new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. run &amp;lt;tt&amp;gt;[[#canceljail|canceljail]] col0xxxx&amp;lt;/tt&amp;gt; on source system. This should prompt you to remove backups if any existed. Do not let it do this, rather:&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
13. if backups existed move them to the new host via the &amp;lt;tt&amp;gt;[[#mvbackups_.28freebsd.29|mvbackups]] script&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
14. edit quad on source system, edit comment to reflect a move rather than a cancel, ex: &amp;lt;tt&amp;gt;# moved to jail2 col00241&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
15. edit mgmt to reflect new host and dir for new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
16. optional add ipfw rules on new system. Ex:&lt;br /&gt;
&amp;lt;tt&amp;gt;ipfw add 01231 count ip from 69.55.230.2 to any&lt;br /&gt;
ipfw add 01232 count ip from any to 69.55.230.2&amp;lt;/tt&amp;gt;&lt;br /&gt;
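The two count rules above come in a fixed pair per customer IP. A hypothetical helper (not an existing script) that prints them in the same numbering convention:

```shell
# Hypothetical helper printing the per-customer ipfw count-rule pair:
# <base>1 counts traffic from the IP, <base>2 counts traffic to it.
count_rules() {
    ip=$1
    base=$2
    echo "ipfw add ${base}1 count ip from ${ip} to any"
    echo "ipfw add ${base}2 count ip from any to ${ip}"
}

count_rules 69.55.230.2 0123
```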
&lt;br /&gt;
== Increasing inodes for a VPS ==&lt;br /&gt;
&lt;br /&gt;
when doing the newfs: &lt;br /&gt;
 newfs -i 4096 ...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Jail services unable to contact services within same jail ==&lt;br /&gt;
&lt;br /&gt;
This is due to a messed up routing table (only seen it on jail2 before). &lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.228.65/32&lt;br /&gt;
 route add -net 69.55.228.65/32 -iface fxp0 -nostatic -cloning&lt;br /&gt;
 route add 69.55.228.65 -iface lo0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== /dev/null permission resets == &lt;br /&gt;
&lt;br /&gt;
Applies to 4.x jails only&lt;br /&gt;
&lt;br /&gt;
For some reason that I do not understand at all, the /dev/null node in customer jails, after the system is restarted, often reverts permissions to 0600 ... which is bad, because non-root processes cannot redirect to /dev/null anymore.  In fact, a fair number of server daemons will complain and/or fail if this is the case.&lt;br /&gt;
&lt;br /&gt;
So, after a restart, once all systems have come back up, run postboot, which does something like:&lt;br /&gt;
 &lt;br /&gt;
 for f in `df -k | grep /dev/vn | awk &#039;{print $6}&#039;` ; do chmod 0666 $f/dev/null ; done&lt;br /&gt;
&lt;br /&gt;
Later, after the system is up, if anyone ever complains about their /dev/null permissions for any reason ... really if anyone mentions&lt;br /&gt;
/dev/null in any way, tell them to run:&lt;br /&gt;
&lt;br /&gt;
 chmod 0666 /dev/null&lt;br /&gt;
&lt;br /&gt;
We have ultimately fixed this by adding chmods to the quad/safe&#039;s&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== mknod in a jail == &lt;br /&gt;
&lt;br /&gt;
For a customer who wants to run a chroot name server (named) in a jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /mnt/data1/&amp;lt;user directory&amp;gt;/var/named&lt;br /&gt;
mknod null c 1 1&lt;br /&gt;
mknod random c 2 3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Postfix Problems ==&lt;br /&gt;
&lt;br /&gt;
This hasn&#039;t happened in a long time, probably no longer valid with modern OS&#039;s&lt;br /&gt;
&lt;br /&gt;
Postfix is an alternate MTA - people replace sendmail with it in much the same way that many people also replace sendmail with qmail.  The problem is that if you install postfix inside of a jail, by default it will not work properly - the user will see error messages like this:&lt;br /&gt;
&lt;br /&gt;
incoming mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 mail_queue_enter: create file incoming/298266.55902: File too large&lt;br /&gt;
&lt;br /&gt;
and further, sending mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 postdrop: warning: mail_queue_enter: create file maildrop/627930.56676: File too large&lt;br /&gt;
&lt;br /&gt;
This is very easy to solve.  In fact, the FreeBSD welcome email that is sent out by `jailmake` now contains this block of text:&lt;br /&gt;
&lt;br /&gt;
- if you plan on installing postfix, email us first and ask for a necessary patch.&lt;br /&gt;
&lt;br /&gt;
So, anytime you see a support email that says anything about postfix not working, paste that line from the welcome email into your response to them, and right away, forward the postfix patch to them in an email.&lt;br /&gt;
&lt;br /&gt;
Here are the instructions:&lt;br /&gt;
&lt;br /&gt;
Simply copy the patch into /usr/ports/mail/postfix/files, rename the file to ‘patch-file_limit.c’ and then cd to /usr/ports/mail/postfix and run `make install`.&lt;br /&gt;
&lt;br /&gt;
The postfix patch is the very first email in the support email box, and we never delete or save it because we always want it there to forward to people.  The instructions in the email are very clear, and it solves their problem 100% of the time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems with `find` ==&lt;br /&gt;
&lt;br /&gt;
There was a bug in the FreeBSD vn-filesystem code.  The bug causes the system to crash or hang when someone runs the `find` command inside their vn-backed filesystem (and even though the bug may have been fixed, due to the i/o overhead, we still don&#039;t like find).&lt;br /&gt;
&lt;br /&gt;
Now, this does not always happen - however, any jail machine with more than 20 systems on it will crash every night if every system on it runs the daily periodic script out of their crontab, and updates their locate database with the find command.&lt;br /&gt;
&lt;br /&gt;
(the locate database is a small, FreeBSD specific database that is populated nightly from a cron job, and is then used to provide fast answers to the `locate` command)&lt;br /&gt;
&lt;br /&gt;
In normal systems (32-40 jails on them), all that has to be done is make sure nobody runs the daily periodic.  This means that the file /etc/crontab, instead of looking like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# do daily/weekly/monthly maintenance&lt;br /&gt;
1      3       *       *       *       root    periodic daily&lt;br /&gt;
15     4       *       *       6       root    periodic weekly&lt;br /&gt;
30     5       1       *       *       root    periodic monthly&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Needs to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# DO NOT UNCOMMENT THESE - contact support@johncompanies.com for details&lt;br /&gt;
#1      3       *       *       *       root    periodic daily&lt;br /&gt;
#15     4       *       *       6       root    periodic weekly&lt;br /&gt;
#30     5       1       *       *       root    periodic monthly&lt;br /&gt;
## DO NOT UNCOMMENT THESE ^^^^^^^^&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the problem will generally not occur at all.  New filesystem images are always altered in this way, so all systems are set like this.&lt;br /&gt;
&lt;br /&gt;
However, if you have enough systems running, even casual uses of find, outside of the periodic scripts, can cause the system to crash.  We are seeing this on jail13, which crashes about once per week.&lt;br /&gt;
&lt;br /&gt;
There is not yet any resolution to this problem.&lt;br /&gt;
&lt;br /&gt;
So, if you have a freebsd system crash, see if anyone has uncommented their daily periodic line - you can see them for everyone by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;for f in `df -k | grep vn | awk &#039;{print $6}&#039;` ; do echo `cat $f/etc/crontab | grep &amp;quot;periodic daily&amp;quot;` $f ; done | more&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(all one line, one command)&lt;br /&gt;
&lt;br /&gt;
And then edit those /etc/crontabs and comment them out again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems un-mounting - and with mount_null’s ==&lt;br /&gt;
&lt;br /&gt;
If you cannot unmount a filesystem because it says the filesystem is busy, it is one of the following:&lt;br /&gt;
&lt;br /&gt;
a) the jail is still running&lt;br /&gt;
&lt;br /&gt;
b) you are actually in that directory, even though the jail is stopped&lt;br /&gt;
&lt;br /&gt;
c) there are still dev, null_mount or linprocfs mount points mounted inside that directory.&lt;br /&gt;
&lt;br /&gt;
d) when trying to umount null_mounts whose paths are really long, you may get an error like “No such file or directory”; it’s an OS bug where the dir name is truncated. No known fix&lt;br /&gt;
&lt;br /&gt;
e) there are still files open somewhere inside the dir. Use &amp;lt;tt&amp;gt;fstat | grep &amp;lt;cid&amp;gt;&amp;lt;/tt&amp;gt; to find the process that has files open&lt;br /&gt;
&lt;br /&gt;
f) Starting with 6.x, the jail mechanism does a poor job of keeping track of processes running in a jail and if it thinks there are still procs running, it will refuse to umount the disk. If this is happening you should see a low number in the #REF column when you run jls. In this case you &#039;&#039;can&#039;&#039; safely &amp;lt;tt&amp;gt;umount -f&amp;lt;/tt&amp;gt; the mount. &lt;br /&gt;
&lt;br /&gt;
Please note - if you forcibly unmount a (4.x) filesystem that has null_mounts&lt;br /&gt;
still mounted in it, the system &#039;&#039;&#039;will crash&#039;&#039;&#039; within 10-15 mins.&lt;br /&gt;
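Before considering any force-unmount, it helps to check whether sub-mounts remain under the jail directory. This is a sketch that parses `mount`-style output passed in as text; the sample data and paths below are hypothetical:

```shell
# Sketch: check for mounts remaining *under* a jail dir (dev, null_mount,
# linprocfs) before any umount -f. Takes the dir and `mount` output text.
has_submounts() {
    dir=$1
    mounts=$2
    printf '%s\n' "$mounts" | grep -q " on ${dir}/"
}

# Hypothetical sample `mount` output:
mounts='/dev/vn3c on /mnt/data1/69.55.230.3-col00123-DIR (ufs)
/path/base on /mnt/data1/69.55.230.3-col00123-DIR/usr (null)'

has_submounts /mnt/data1/69.55.230.3-col00123-DIR "$mounts" \
    && echo "submounts present - do NOT force unmount"
```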
&lt;br /&gt;
== Misc jail Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space on jail2, jail8, jail9, a couple jails on jail17, jail4, jail12 and jail18.&lt;br /&gt;
Even though the vn file shows 4G size, it doesn’t actually occupy that amount of space on the disk. So be careful not to fill up drives where we’re overselling – use oversellcheck to confirm you’re not oversold by more than 10G.&lt;br /&gt;
There are other truncated jails; they are generally noted in the file /root/truncated on the root system.&lt;br /&gt;
&lt;br /&gt;
The act of moving a truncated vn to another system un-does the truncation: the truncated vn is filled with 0’s and occupies the full physical disk space for which it’s configured. So, you should use dumpremote to preserve the truncation.&lt;br /&gt;
&lt;br /&gt;
* if you are getting disk full messages for a BSD customer, it&#039;s fairly safe to clear out their /usr/ports/distfiles dir&lt;br /&gt;
* 4.x: ps and top can only be run by root in these jails.  Done on purpose:&lt;br /&gt;
As for `ps` and `top` - non-root users can run them, just not successfully because we have locked the permissions on /dev/mem and /dev/kmem to be root-readable only.  That is why non-root users cannot successfully run `ps` and `top`.&lt;br /&gt;
* user quotas do not work on freebsd jails - you cannot set up quotas at all, and that&#039;s that.&lt;br /&gt;
* You cannot inject a process into a 4.x jail, only HUP running processes. 6.x and onward you can with jexec&lt;br /&gt;
* jails see base machine’s uptime/load when running top/w&lt;br /&gt;
* if someone is unable to get in (can’t ping, etc.), see if they were blocked by castle (DoS), and see if their ip is on the system (post-reboot it can be lost if it wasn’t in rc.conf; preboot should catch that)&lt;br /&gt;
* in FreeBSD you can’t su to root unless you belong to the wheel group – so if you remove your acct and set up a new one, we have to add it to the wheel group (add to /etc/group)&lt;br /&gt;
* Dmesg output from the underlying system is seen in the customer’s dmesg inside the jail&lt;br /&gt;
* Popper process going crazy @ 40% for 10min = someone who leaves mail on server&lt;br /&gt;
* Don’t force umounts on 4.x jails – it crashes the machine; generally ok on newer machines (running md)&lt;br /&gt;
* Good book for admin http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?userid=t824VyRAYz&amp;amp;isbn=0596005164&amp;amp;itm=2&lt;br /&gt;
* Self-sign ssl cert http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#selfcert&lt;br /&gt;
* conversation with Glenn about semaphores and pgsql&lt;br /&gt;
&amp;lt;pre&amp;gt;SDBoody: hey, these are valid amounts/figures/increments right:&lt;br /&gt;
 kern.ipc.semmni=1280&lt;br /&gt;
 kern.ipc.semmns=1280&lt;br /&gt;
 &lt;br /&gt;
gr8feen: probably... I always forget exactly what those are, so I usually have to look them up&lt;br /&gt;
SDBoody: semaphores&lt;br /&gt;
SDBoody: i took the current 1024 and added 256 to them&lt;br /&gt;
SDBoody: need more for pgsql&lt;br /&gt;
gr8feen: I meant the mni and mns parts...&lt;br /&gt;
gr8feen: some of those are not ones you just want to add to...   hang on a sec and I&#039;ll look them up..&lt;br /&gt;
gr8feen: what&#039;s semmsl set to?&lt;br /&gt;
SDBoody: kern.ipc.semmsl: 1024&lt;br /&gt;
 &lt;br /&gt;
SDBoody: kern.ipc.msgseg: 2048 &lt;br /&gt;
 kern.ipc.msgssz: 8 &lt;br /&gt;
 kern.ipc.msgtql: 40 &lt;br /&gt;
 kern.ipc.msgmnb: 2048 &lt;br /&gt;
 kern.ipc.msgmni: 40 &lt;br /&gt;
 kern.ipc.msgmax: 16384 &lt;br /&gt;
 kern.ipc.semaem: 16384 &lt;br /&gt;
 kern.ipc.semvmx: 65534 &lt;br /&gt;
 kern.ipc.semusz: 152 &lt;br /&gt;
 kern.ipc.semume: 10 &lt;br /&gt;
 kern.ipc.semopm: 100 &lt;br /&gt;
 kern.ipc.semmsl: 1024 &lt;br /&gt;
 kern.ipc.semmnu: 512 &lt;br /&gt;
 kern.ipc.semmns: 1024 &lt;br /&gt;
 kern.ipc.semmni: 1024 &lt;br /&gt;
 kern.ipc.semmap: 768 &lt;br /&gt;
 kern.ipc.shm_allow_removed: 0 &lt;br /&gt;
 kern.ipc.shm_use_phys: 1 &lt;br /&gt;
 kern.ipc.shmall: 262144 &lt;br /&gt;
 kern.ipc.shmseg: 256 &lt;br /&gt;
 kern.ipc.shmmni: 784 &lt;br /&gt;
 kern.ipc.shmmin: 1 &lt;br /&gt;
 kern.ipc.shmmax: 536870912 &lt;br /&gt;
 kern.ipc.maxsockets: 25600 &lt;br /&gt;
 &lt;br /&gt;
gr8feen: ok...msl is max per id, mni is max ids, mns is max number of semaphores... so you probably want something like mns = mni * msl&lt;br /&gt;
gr8feen: which one did you run out of?&lt;br /&gt;
SDBoody: not sure how to tell- ipcs shows the sems in use add up to 1024&lt;br /&gt;
SDBoody: there are 59 entries&lt;br /&gt;
gr8feen: I&#039;m assuming you tried to start postgres and it failed?&lt;br /&gt;
SDBoody: yes&lt;br /&gt;
gr8feen: it should have logged why, somewhere..&lt;br /&gt;
gr8feen: if I recall, it&#039;ll tell you which one it ran out of&lt;br /&gt;
SDBoody: &amp;gt; DETAIL:  Failed system call was semget(1, 17, 03600).&lt;br /&gt;
 &lt;br /&gt;
gr8feen: so it wanted an id with 17 semaphores...I&#039;d start by making mns = 17*mni and leave mni and mnl set to what they are now and see what it does&lt;br /&gt;
SDBoody: i think mni is plenty high&lt;br /&gt;
SDBoody: ok, more reasonable 17408&lt;br /&gt;
gr8feen: yeah...just change that one and see how it goes..&lt;br /&gt;
SDBoody: and leave mni alone at 1024?&lt;br /&gt;
gr8feen: yeah...mni id the max number of ids...but if it&#039;s trying to get something like 17 per id, your going to hit mns before you hit anything else&lt;br /&gt;
SDBoody: right, but doesn&#039;t hurt to have it that high (assuming)&lt;br /&gt;
&lt;br /&gt;
gr8feen: not really...  I think those get allocated out of ram that you cant page out, but it&#039;s still such a small amount that it really doesn&#039;t matter&lt;br /&gt;
SDBoody: looks like that worked, thx!&lt;br /&gt;
gr8feen: cool&lt;br /&gt;
&lt;br /&gt;
see semaphores: ipcs -a -s&amp;lt;/pre&amp;gt;&lt;br /&gt;
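The takeaway from the conversation above is a sizing rule: if postgres asks for N semaphores per id (the failed call was semget(1, 17, ...)), make kern.ipc.semmns at least N * semmni. A sketch with the example numbers from the chat (these are not tuning recommendations):

```shell
# Sketch of the semaphore sizing rule from the conversation above:
# semmns >= (semaphores requested per id) * semmni.
mni=1024      # kern.ipc.semmni from the chat
per_id=17     # from the failed semget(1, 17, 03600)
mns=$(( per_id * mni ))

echo "kern.ipc.semmns=${mns}"
```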
&lt;br /&gt;
= FreeBSD VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
These files are located in /usr/local/jail/rc.d and /usr/local/jail/bin&lt;br /&gt;
&lt;br /&gt;
== jailmake ==&lt;br /&gt;
&lt;br /&gt;
Applies to 7.x+ &lt;br /&gt;
On older systems syntax differs, run jailmake once to see.&lt;br /&gt;
&lt;br /&gt;
Note: this procedure differs on mx2 which is 7.x but still uses gvinum&lt;br /&gt;
&lt;br /&gt;
#	run js to figure out which md’s are in use, which disk has enough space, IP to put it on&lt;br /&gt;
#	use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
#	copy over dir, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;Usage: jailmake IP[,IP] CID disk[1|2|3] md# hostname shorthost ipfw# email [size in GB]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex: &lt;br /&gt;
&lt;br /&gt;
 jail2# jailmake 69.55.234.66 col01334 3 97 vps.bsd.it vps 1334 fb@bsd.it&lt;br /&gt;
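A quick way to sanity-check an invocation against the usage line above is to count arguments: 8 are required, the size in GB is an optional 9th. This is a hypothetical check, not part of jailmake itself:

```shell
# Hypothetical arg-count check mirroring the jailmake usage line:
# jailmake IP[,IP] CID disk md# hostname shorthost ipfw# email [size]
jailmake_args_ok() { [ "$#" -ge 8 ] && [ "$#" -le 9 ]; }

jailmake_args_ok 69.55.234.66 col01334 3 97 vps.bsd.it vps 1334 fb@bsd.it \
    && echo "args ok"
```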
&lt;br /&gt;
== jailps ==&lt;br /&gt;
 jailps [hostname]&lt;br /&gt;
DEPRECATED FOR jps: displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname of the jail you wish to query. If you don’t &lt;br /&gt;
supply an argument, all processes on the machine are listed and grouped by jail. &lt;br /&gt;
&lt;br /&gt;
== jps ==&lt;br /&gt;
 jps [hostname]&lt;br /&gt;
displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname or ID of the jail you wish to query. &lt;br /&gt;
&lt;br /&gt;
== jailkill ==&lt;br /&gt;
 jailkill &amp;lt;hostname&amp;gt;&lt;br /&gt;
stops all process running in a jail.&lt;br /&gt;
&lt;br /&gt;
You can also run:&lt;br /&gt;
 jailkill &amp;lt;JID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== problems ===&lt;br /&gt;
Occasionally you will hit an issue where jail will not kill off:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailkill www.domain.com&lt;br /&gt;
www.domain.com .. killed: none&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
Because no processes are running under that hostname.  You cannot use jailps.pl either:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailps www.domain.com&lt;br /&gt;
www.domain.com doesn’t exist on this server&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reasons for this are usually:&lt;br /&gt;
* the jail is no longer running&lt;br /&gt;
&lt;br /&gt;
* the jail&#039;s hostname has changed&lt;br /&gt;
In this case, &lt;br /&gt;
&lt;br /&gt;
&amp;gt;=6.x: run a &amp;lt;tt&amp;gt;jls|grep &amp;lt;jail&#039;s IP&amp;gt;&amp;lt;/tt&amp;gt; to find the correct hostname, then update the quad file, then kill the jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;6.x: the first step is to cat their /etc/rc.conf file to see if you can tell what they set the new hostname to.  This very often works.  For example:&lt;br /&gt;
&lt;br /&gt;
 cat /mnt/data2/198.78.65.136-col00261-DIR/etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
But maybe they set the hostname with the hostname command, and the original hostname is still in /etc/rc.conf.&lt;br /&gt;
&lt;br /&gt;
The welcome email clearly states that they should tell us if they change their hostname, so there is no problem in just emailing them and asking them what they set the new hostname to.&lt;br /&gt;
&lt;br /&gt;
Once you know the new hostname OR if a customer simply emails to inform you that they have set the hostname to something different, you need to edit the quad and safe files that their system is in to input the new hostname.&lt;br /&gt;
&lt;br /&gt;
However, if push comes to shove and you cannot find out the hostname from them or from their system, then you need to start doing some detective work.&lt;br /&gt;
&lt;br /&gt;
The easiest thing to do is run jailps looking for a hostname similar to their original hostname. Or you could get into the /bin/sh shell by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then looking at every hostname of every process:&lt;br /&gt;
&lt;br /&gt;
 for f in `ls /proc` ; do cat /proc/$f/status ; done&lt;br /&gt;
&lt;br /&gt;
and scanning for a hostname that is either similar to their original hostname, or that you don&#039;t see in any of the quad safe files.&lt;br /&gt;
&lt;br /&gt;
This is very brute force though, and it is possible that catting every file in /proc is dangerous - I don&#039;t recommend it.  A better thing would be to identify any processes that you know belong to this system – perhaps the reason you are trying to find this system is because they are running something bad - and just catting the status from only that PID.&lt;br /&gt;
&lt;br /&gt;
Somewhere there’s a jail where there may be 2 systems named www. Look at each /etc/rc.conf and make sure they’re both really www. If they are, jailkill www, then jailps www to make sure it’s not running. Then immediately restart the other one, using its fqdn (as found from a reverse nslookup)&lt;br /&gt;
&lt;br /&gt;
* on &amp;gt;=6.x the hostname may not yet be hashed:&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9 /# jls&lt;br /&gt;
 JID Hostname                    Path                                  IP Address(es)&lt;br /&gt;
   1 bitnet.dgate.org            /mnt/data1/69.55.232.50-col02094-DIR  69.55.232.50&lt;br /&gt;
   2 ns3.hctc.net                /mnt/data1/69.55.234.52-col01925-DIR  69.55.234.52&lt;br /&gt;
   3 bsd1                        /mnt/data1/69.55.232.44-col00155-DIR  69.55.232.44&lt;br /&gt;
   4 let2.bbag.org               /mnt/data1/69.55.230.92-col00202-DIR  69.55.230.92&lt;br /&gt;
   5 post.org                    /mnt/data2/69.55.232.51-col02095-DIR  69.55.232.51 ...&lt;br /&gt;
   6 ns2                         /mnt/data1/69.55.232.47-col01506-DIR  69.55.232.47 ...&lt;br /&gt;
   7 arlen.server.net            /mnt/data1/69.55.232.52-col01171-DIR  69.55.232.52&lt;br /&gt;
   8 deskfood.com                /mnt/data1/69.55.232.71-col00419-DIR  69.55.232.71&lt;br /&gt;
   9 mirage.confluentforms.com   /mnt/data1/69.55.232.54-col02105-DIR  69.55.232.54 ...&lt;br /&gt;
  10 beachmember.com             /mnt/data1/69.55.232.59-col02107-DIR  69.55.232.59&lt;br /&gt;
  11 www.agottem.com             /mnt/data1/69.55.232.60-col02109-DIR  69.55.232.60&lt;br /&gt;
  12 sdhobbit.myglance.org       /mnt/data1/69.55.236.82-col01708-DIR  69.55.236.82&lt;br /&gt;
  13 ns1.jnielsen.net            /mnt/data1/69.55.234.48-col00204-DIR  69.55.234.48 ...&lt;br /&gt;
  14 ymt.rollingegg.net          /mnt/data2/69.55.236.71-col01678-DIR  69.55.236.71&lt;br /&gt;
  15 verse.unixlore.net          /mnt/data1/69.55.232.58-col02131-DIR  69.55.232.58&lt;br /&gt;
  16 smcc-mail.org               /mnt/data2/69.55.232.68-col02144-DIR  69.55.232.68&lt;br /&gt;
  17 kasoutsuki.w4jdh.net        /mnt/data2/69.55.232.46-col02147-DIR  69.55.232.46&lt;br /&gt;
  18 dili.thium.net              /mnt/data2/69.55.232.80-col01901-DIR  69.55.232.80&lt;br /&gt;
  20 www.tekmarsis.com           /mnt/data2/69.55.232.66-col02155-DIR  69.55.232.66&lt;br /&gt;
  21 vps.yoxel.net               /mnt/data2/69.55.236.67-col01673-DIR  69.55.236.67&lt;br /&gt;
  22 smitty.twitalertz.com       /mnt/data2/69.55.232.84-col02153-DIR  69.55.232.84&lt;br /&gt;
  23 deliver4.klatha.com         /mnt/data2/69.55.232.67-col02160-DIR  69.55.232.67&lt;br /&gt;
  24 nideffer.com                /mnt/data2/69.55.232.65-col00412-DIR  69.55.232.65&lt;br /&gt;
  25 usa.hanyuan.com             /mnt/data2/69.55.232.57-col02163-DIR  69.55.232.57&lt;br /&gt;
  26 daifuku.ppbh.com            /mnt/data2/69.55.236.91-col01720-DIR  69.55.236.91&lt;br /&gt;
  27 collins.greencape.net       /mnt/data2/69.55.232.83-col01294-DIR  69.55.232.83&lt;br /&gt;
  28 ragebox.com                 /mnt/data2/69.55.230.104-col01278-DIR 69.55.230.104&lt;br /&gt;
  29 outside.mt.net              /mnt/data2/69.55.232.72-col02166-DIR  69.55.232.72&lt;br /&gt;
  30 vps.payneful.ca             /mnt/data2/69.55.234.98-col01999-DIR  69.55.234.98&lt;br /&gt;
  31 higgins                     /mnt/data2/69.55.232.87-col02165-DIR  69.55.232.87 ...&lt;br /&gt;
  32 ozymandius                  /mnt/data2/69.55.228.96-col01233-DIR  69.55.228.96&lt;br /&gt;
  33 trusted.realtors.org        /mnt/data2/69.55.238.72-col02170-DIR  69.55.238.72&lt;br /&gt;
  34 jc1.flanderous.com          /mnt/data2/69.55.239.22-col01504-DIR  69.55.239.22&lt;br /&gt;
  36 guppylog.com                /mnt/data2/69.55.238.73-col00036-DIR  69.55.238.73&lt;br /&gt;
  40 haliohost.com               /mnt/data2/69.55.234.41-col01916-DIR  69.55.234.41 ...&lt;br /&gt;
  41 satyr.jorge.cc              /mnt/data1/69.55.232.70-col01963-DIR  69.55.232.70&lt;br /&gt;
jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
ERROR: jail_: jail &amp;quot;satyr,jorge,cc&amp;quot; not found&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note how it&#039;s saying &amp;lt;tt&amp;gt;satyr,jorge,cc&amp;lt;/tt&amp;gt; is not found, and not &amp;lt;tt&amp;gt;satyr.jorge.cc&amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The jail subsystem tracks jails using comma-delimited hostnames. That mapping is rebuilt every few hours from cron:&lt;br /&gt;
&lt;br /&gt;
 jail9 /# crontab -l&lt;br /&gt;
 0 0,6,12,18 * * * /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
So if we run this manually:&lt;br /&gt;
 jail9 /# /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
Then kill the jail:&lt;br /&gt;
 jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
 successfully killed: satyr,jorge,cc&lt;br /&gt;
&lt;br /&gt;
It worked.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you ever see this when trying to kill a jail:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# jailkill e-scribe.com&lt;br /&gt;
killing JID: 6 hostname: e-scribe.com&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailkill|jailkill]]&amp;lt;/tt&amp;gt; probably got lost trying to kill off the jail. Just ctrl-c the jailkill process, then run a jailps on the hostname, and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; any process which is still running. Keep running jailps and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; till all processes are gone.&lt;br /&gt;
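The ctrl-c / jailps / kill -9 cycle boils down to a simple loop. A minimal runnable sketch, with a plain background sleep standing in for a stuck jail process (on a real system the PID list would come from jailps output, an in-house tool):&lt;br /&gt;

```shell
# A throwaway background process stands in for a stuck jail process.
sleep 300 & pid=$!
while kill -0 "$pid" 2>/dev/null; do   # any process still there?
  kill -9 "$pid" 2>/dev/null           # SIGKILL it
  wait "$pid" 2>/dev/null              # reap it so kill -0 stops succeeding
done
echo "no processes left"
```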
&lt;br /&gt;
== jailpsall ==&lt;br /&gt;
 jailpsall&lt;br /&gt;
will run a jailps on all jails configured in the quad files (this is different from&lt;br /&gt;
running jailps with no arguments in that it won’t help you find a “hidden” system not in the quads)&lt;br /&gt;
&lt;br /&gt;
== jailpsw ==&lt;br /&gt;
 jailpsw&lt;br /&gt;
will run a jailps with an extra -w to provide wider output&lt;br /&gt;
&lt;br /&gt;
== jt (&amp;gt;=7.x) ==&lt;br /&gt;
 jt&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;gt;=7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
a wrapper for top displaying processes on the server and which jail owns them. Constantly updates, like top. &lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;lt;7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== stopjail ==&lt;br /&gt;
 stopjail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will jailkill, umount and vnconfig -u a jail. If passed an optional 2nd&lt;br /&gt;
argument, it will proceed to umount and un-vnconfig even if&lt;br /&gt;
jailkill reports that no processes were killed. This is useful if you just want to umount&lt;br /&gt;
and vnconfig -u a jail you’ve already killed. It is intelligent in that it won’t &lt;br /&gt;
try to umount or vnconfig -u if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== startjail ==&lt;br /&gt;
 startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
this will vnconfig, mount (including linprocfs and null-mounts), and then start the jail.&lt;br /&gt;
Essentially, it reads the jail’s relevant block from the right quad file and executes it.&lt;br /&gt;
It is intelligent in that it won’t try to mount or vnconfig if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== jpid ==&lt;br /&gt;
 jpid &amp;lt;pid&amp;gt;&lt;br /&gt;
displays information about a process – including which jail owns it.&lt;br /&gt;
It’s the equivalent of running cat /proc/&amp;lt;pid&amp;gt;/status&lt;br /&gt;
&lt;br /&gt;
== canceljail ==&lt;br /&gt;
 canceljail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will stop a jail (the equivalent of stopjail), check for backups (offer to remove them &lt;br /&gt;
from the backup server and the backup.config), rename the vnfile, remove the dir, and &lt;br /&gt;
edit quad/safe. If passed an optional 2nd argument, it will not exit upon failing to kill&lt;br /&gt;
any processes owned by the jail. This is useful if you just want to cancel a jail which &lt;br /&gt;
is already stopped.&lt;br /&gt;
&lt;br /&gt;
== jls ==&lt;br /&gt;
 jls [-v]&lt;br /&gt;
Lists all jails running:&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF IP Address      Hostname                     Path&lt;br /&gt;
 101  135 69.55.224.148   mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#REF is the number of references or procs(?) running&lt;br /&gt;
&lt;br /&gt;
Running with -v will give you all IPs assigned to each jail (7.2 up)&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF Hostname                     Path                                  IP Address(es)&lt;br /&gt;
 101  139 mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR 69.55.224.148 69.55.234.85&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== startalljails ==&lt;br /&gt;
 startalljails&lt;br /&gt;
7.2+ only. This will parse through quad1 and start all jails. It utilizes lockfiles so it won’t try to start a jail more than once, so multiple instances can be run in parallel without fear of starting a jail twice. If a jail startup gets stuck, you can ^C without fear of killing the script. IMPORTANT: before running startalljails, make sure you ran preboot once, as it clears out all the lockfiles and enables startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== aaccheck.sh ==&lt;br /&gt;
 aaccheck.sh&lt;br /&gt;
displays the output of container list and task list from aaccli&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update jail scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== buildsafe ==&lt;br /&gt;
 buildsafe&lt;br /&gt;
creates safe files based on quads (automatically removing the fsck’s). This will destructively overwrite safe files&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be setup as a cronjob to watch processes on a jail when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== checkprio.pl ==&lt;br /&gt;
 checkprio.pl&lt;br /&gt;
will look for any process (other than the current shell’s csh, sh, sshd procs) with a non-normal priority and normalize it&lt;br /&gt;
&lt;br /&gt;
== diskusagemon == &lt;br /&gt;
 diskusagemon &amp;lt;mount point&amp;gt; &amp;lt;1k blocks&amp;gt;&lt;br /&gt;
watches a mount point’s disk use, when it reaches the level specified in the 2nd argument,&lt;br /&gt;
it exits. This is useful when doing a restore and you want to be paged as it’s nearing completion.&lt;br /&gt;
Best used as: &amp;lt;tt&amp;gt;diskusagemon /asd/asd 1234; pagexxx&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dumprestore ==&lt;br /&gt;
 dumprestore &amp;lt;dumpfile&amp;gt;&lt;br /&gt;
this is a perl expect script which automatically enters ‘1’ and ‘y’. It seems to cause restore to fail&lt;br /&gt;
to set owner permissions on large restores.&lt;br /&gt;
&lt;br /&gt;
== g ==&lt;br /&gt;
 g &amp;lt;search&amp;gt;&lt;br /&gt;
greps the quad/safe files for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gather.pl ==&lt;br /&gt;
 gather.pl&lt;br /&gt;
gathers up data about jails configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== ipfwbackup ==&lt;br /&gt;
 ipfwbackup&lt;br /&gt;
writes ipfw traffic count data to a logfile&lt;br /&gt;
&lt;br /&gt;
== ipfwreset ==&lt;br /&gt;
 ipfwreset&lt;br /&gt;
writes ipfw traffic count data to a logfile and resets counters to 0&lt;br /&gt;
&lt;br /&gt;
== js ==&lt;br /&gt;
 js&lt;br /&gt;
output varies by OS version, but generally provides information about the base jail:&lt;br /&gt;
- which vn’s are in use&lt;br /&gt;
- disk usage&lt;br /&gt;
- info about the contents of quads&lt;br /&gt;
- the # of inodes represented by the jails contained in the group (133.2 in the example below), and how many jails per data mount, as well as subtotals&lt;br /&gt;
- IPs bound to the base machine but not in use by a jail&lt;br /&gt;
- free gvinum volumes, or unused vn’s or used md’s&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/local/jail/rc.d/quad1:&lt;br /&gt;
        /mnt/data1 133.2 (1)&lt;br /&gt;
        /mnt/data2 1040.5 (7)&lt;br /&gt;
        total 1173.7 (8)&lt;br /&gt;
/usr/local/jail/rc.d/quad2:&lt;br /&gt;
        /mnt/data1 983.4 (6)&lt;br /&gt;
        total 983.4 (6)&lt;br /&gt;
/usr/local/jail/rc.d/quad3:&lt;br /&gt;
        /mnt/data1 693.4 (4)&lt;br /&gt;
        /mnt/data2 371.6 (3)&lt;br /&gt;
        total 1065 (7)&lt;br /&gt;
/usr/local/jail/rc.d/quad4:&lt;br /&gt;
        /mnt/data1 466.6 (3)&lt;br /&gt;
        /mnt/data2 882.2 (5)&lt;br /&gt;
        total 1348.8 (8)&lt;br /&gt;
/mnt/data1: 2276.6 (14)&lt;br /&gt;
/mnt/data2: 2294.3 (15)&lt;br /&gt;
&lt;br /&gt;
Available IPs:&lt;br /&gt;
69.55.230.11 69.55.230.13 69.55.228.200&lt;br /&gt;
&lt;br /&gt;
Available volumes:&lt;br /&gt;
v78 /mnt/data2 2G&lt;br /&gt;
v79 /mnt/data2 2G&lt;br /&gt;
v80 /mnt/data2 2G&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg  - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== makevirginjail ==&lt;br /&gt;
 makevirginjail&lt;br /&gt;
Only on some systems, makes an empty jail (doesn&#039;t do restore step)&lt;br /&gt;
&lt;br /&gt;
== mb == &lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== notify.sh ==&lt;br /&gt;
 notify.sh&lt;br /&gt;
emails reboot@johncompanies.com – intended to be called at boot time to alert us to a machine which panics and reboots and isn’t caught by bb or castle.&lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatch ==&lt;br /&gt;
 orphanedbackupwatch&lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to delete them&lt;br /&gt;
&lt;br /&gt;
== postboot ==&lt;br /&gt;
 postboot&lt;br /&gt;
to be run after a machine reboot and quad/safe’s are done executing. It will:&lt;br /&gt;
* do chmod 666 on each jail’s /dev/null&lt;br /&gt;
* add ipfw counts&lt;br /&gt;
* run jailpsall (so you can see if a configured jail isn’t running)&lt;br /&gt;
&lt;br /&gt;
== preboot ==&lt;br /&gt;
 preboot&lt;br /&gt;
to be run before running quad/safe – checks for misconfigurations: &lt;br /&gt;
* a jail configured in a quad but not a safe&lt;br /&gt;
* a jail is listed more than once in a quad&lt;br /&gt;
* the ip assigned to a jail isn’t configured on the machine&lt;br /&gt;
* alias numbering skips in the rc.conf (resulting in the above)&lt;br /&gt;
* orphaned vnfile&#039;s that aren&#039;t mentioned in a quad/safe&lt;br /&gt;
* ip mismatches between dir/vnfile name and the jail’s ip&lt;br /&gt;
* dir/vnfiles&#039;s in quad/safe that don’t exist &lt;br /&gt;
&lt;br /&gt;
== quadanalyze.pl ==&lt;br /&gt;
 quadanalyze.pl&lt;br /&gt;
called by js, produces the info (seen above with js explanation) about the contents of quad (inode count, # of jails, etc.)&lt;br /&gt;
&lt;br /&gt;
== rsync.backup ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== taskdone ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was executed as the subject&lt;br /&gt;
&lt;br /&gt;
== topten ==&lt;br /&gt;
 topten&lt;br /&gt;
summarizes the top 10 traffic users (called by ipfwreset)&lt;br /&gt;
&lt;br /&gt;
== trafficgather.pl ==&lt;br /&gt;
 trafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by jail to support@johncompanies.com and payments@johncompanies.com. The optional argument is the year and month (must be in the past). If not passed, assumes last month. Relies on traffic logs created by ipfwreset and ipfwbackup&lt;br /&gt;
&lt;br /&gt;
== trafficwatch.pl ==&lt;br /&gt;
 trafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a jail reaches the warning level (35G) and the limit (40G). We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== trafstats ==&lt;br /&gt;
 trafstats&lt;br /&gt;
writes ipfw traffic usage info by jail to a file called jc_traffic_dump in each jail’s / dir&lt;br /&gt;
&lt;br /&gt;
== truncate_jailmake ==&lt;br /&gt;
 truncate_jailmake&lt;br /&gt;
a version of jailmake which creates truncated vnfiles.&lt;br /&gt;
&lt;br /&gt;
== vb ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/bin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs (freebsd) ==&lt;br /&gt;
 vs&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/safe&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vq (freebsd) ==&lt;br /&gt;
 vq&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/quad&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dumpremote ==&lt;br /&gt;
 dumpremote &amp;lt;user@machine&amp;gt; &amp;lt;/remote/location/file-dump&amp;gt; &amp;lt;vnX&amp;gt;&lt;br /&gt;
ex: dumpremote user@10.1.4.117 /mnt/data3/remote.echoditto.com-dump 7&lt;br /&gt;
this will dump a vn filesystem to a remote machine and location&lt;br /&gt;
&lt;br /&gt;
== oversellcheck ==&lt;br /&gt;
 oversellcheck&lt;br /&gt;
displays how much a disk is oversold or undersold taking into account truncated vn files. Only for use on 4.x systems&lt;br /&gt;
&lt;br /&gt;
== mvbackups (freebsd) ==&lt;br /&gt;
 mvbackups &amp;lt;dir&amp;gt; (1.1.1.1-col00001-DIR) &amp;lt;target_machine&amp;gt; (jail1) &amp;lt;target_dir&amp;gt; (data1)&lt;br /&gt;
moves backups from one location to another on the backup server, and provides you with option to remove entries from current backup.config, and simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== jailnice ==&lt;br /&gt;
 jailnice &amp;lt;hostname&amp;gt;&lt;br /&gt;
applies &amp;lt;tt&amp;gt;renice 19 [PID]&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;rtprio 31 -[PID]&amp;lt;/tt&amp;gt; to each process in the given jail&lt;br /&gt;
&lt;br /&gt;
== dumpremoterestore ==&lt;br /&gt;
 dumpremoterestore &amp;lt;device&amp;gt; &amp;lt;ip of target machine&amp;gt; &amp;lt;dir on target machine&amp;gt;&lt;br /&gt;
ex: dumpremoterestore /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&lt;br /&gt;
dumps a device and restores it to a directory on a remote machine. Requires that you enable root ssh on the &lt;br /&gt;
remote machine.&lt;br /&gt;
&lt;br /&gt;
== psj ==&lt;br /&gt;
 psj&lt;br /&gt;
shows just the procs running on the base system – a ps auxw but without jail’d procs present&lt;br /&gt;
&lt;br /&gt;
== perc5iraidchk ==&lt;br /&gt;
 perc5iraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2950 systems with Perc5/6 controllers&lt;br /&gt;
&lt;br /&gt;
== perc4eraidchk ==&lt;br /&gt;
 perc4eraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2850 systems with Perc4e/Di controllers&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS =&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vm) ==&lt;br /&gt;
&lt;br /&gt;
This applies only to new virts &amp;gt;= 4.x&lt;br /&gt;
&lt;br /&gt;
Grab an IP from the ipmap (if opened from the pending cust screen it should take you to the right block). You can also run vzlist -a to see what block is generally in use. Try to find an IP that&#039;s in the same class C block as IPs already on the box.&lt;br /&gt;
&lt;br /&gt;
1. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
  also, ping the IP address.&lt;br /&gt;
&lt;br /&gt;
2. put CT on whichever partition has more space&lt;br /&gt;
  df -h&lt;br /&gt;
&lt;br /&gt;
3.  vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; ) &lt;br /&gt;
 vm col00009 69.55.230.238 centos.testdave.com /vz1 dsmith@johncompanies.com centos-6-x86_64 LM&lt;br /&gt;
&lt;br /&gt;
4. copy veid, dir, ip and password to pending customer screen. activate customer&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vemakexxx) ==&lt;br /&gt;
&lt;br /&gt;
This applies to older virts with old templates. This should probably not be used at all anymore.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. look thru hist for ip&lt;br /&gt;
&lt;br /&gt;
2. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
&lt;br /&gt;
3. put ve on whichever partition has more space&lt;br /&gt;
 vemakerh9 &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; &amp;lt;hostname&amp;gt; &amp;lt;mount&amp;gt; &amp;lt;email&amp;gt; [gb disk]; &amp;lt;256|384|512&amp;gt; &amp;lt;veid&amp;gt;&lt;br /&gt;
 vemakerh9 866 69.55.226.109 ngentu.com /vz1 ayo@ngantu.com,asd@asd.com 5; 256 866&lt;br /&gt;
&lt;br /&gt;
4. copy (veid), dir, and ip to pending customer screen (pass set to p455agfa)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: We use VEID (Virtual Environment ID) and CTID (Container ID) interchangeably. Similarly, VE and CT. They mean the same thing.&lt;br /&gt;
VZPP = VirtuoZzo Power Panel (the control panel for each CT)&lt;br /&gt;
&lt;br /&gt;
All linux systems exist in /vz, /vz1 or /vz2 - since each linux machine holds roughly 60-90 customers, there will be roughly 30-45 in each partition.&lt;br /&gt;
&lt;br /&gt;
The actual filesystem of the system in question is in:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/private/(VEID)&lt;br /&gt;
&lt;br /&gt;
Where VEID is the identifier for that system - an all-numeric ID greater than 100.&lt;br /&gt;
&lt;br /&gt;
The actual mounted and running systems are in the corresponding:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/root/(VEID)&lt;br /&gt;
&lt;br /&gt;
But we rarely interact with any system from this mount point.&lt;br /&gt;
&lt;br /&gt;
You should never need to touch the root portion of their system – however you can traverse their filesystem by going to &amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/root&amp;lt;/tt&amp;gt; (&amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/fs/root&amp;lt;/tt&amp;gt; on 4.x systems): the root of their filesystem is in that directory, and their entire system is underneath that.&lt;br /&gt;
&lt;br /&gt;
Every VE has a startup script in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts&amp;lt;/tt&amp;gt;  (which is symlinked as &amp;lt;tt&amp;gt;/vzconf&amp;lt;/tt&amp;gt; on all systems) - the VE startup script is simply named &amp;lt;tt&amp;gt;(VEID).conf&amp;lt;/tt&amp;gt; - it contains all the system parameters for that VE:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Configuration file generated by vzsplit for 60 VE&lt;br /&gt;
# on HN with total amount of physical mem 2011 Mb&lt;br /&gt;
&lt;br /&gt;
VERSION=&amp;quot;2&amp;quot;&lt;br /&gt;
CLASSID=&amp;quot;2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ONBOOT=&amp;quot;yes&amp;quot;&lt;br /&gt;
&lt;br /&gt;
KMEMSIZE=&amp;quot;8100000:8200000&amp;quot;&lt;br /&gt;
LOCKEDPAGES=&amp;quot;322:322&amp;quot;&lt;br /&gt;
PRIVVMPAGES=&amp;quot;610000:615000&amp;quot;&lt;br /&gt;
SHMPAGES=&amp;quot;33000:34500&amp;quot;&lt;br /&gt;
NUMPROC=&amp;quot;410:415&amp;quot;&lt;br /&gt;
PHYSPAGES=&amp;quot;0:2147483647&amp;quot;&lt;br /&gt;
VMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
OOMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
NUMTCPSOCK=&amp;quot;1210:1215&amp;quot;&lt;br /&gt;
NUMFLOCK=&amp;quot;107:117&amp;quot;&lt;br /&gt;
NUMPTY=&amp;quot;19:19&amp;quot;&lt;br /&gt;
NUMSIGINFO=&amp;quot;274:274&amp;quot;&lt;br /&gt;
TCPSNDBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
TCPRCVBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
OTHERSOCKBUF=&amp;quot;900000:950000&amp;quot;&lt;br /&gt;
DGRAMRCVBUF=&amp;quot;200000:200000&amp;quot;&lt;br /&gt;
NUMOTHERSOCK=&amp;quot;650:660&amp;quot;&lt;br /&gt;
DCACHE=&amp;quot;786432:818029&amp;quot;&lt;br /&gt;
NUMFILE=&amp;quot;7500:7600&amp;quot;&lt;br /&gt;
AVNUMPROC=&amp;quot;51:51&amp;quot;&lt;br /&gt;
IPTENTRIES=&amp;quot;155:155&amp;quot;&lt;br /&gt;
DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
DISKINODES=&amp;quot;400000:420000&amp;quot;&lt;br /&gt;
CPUUNITS=&amp;quot;1412&amp;quot;&lt;br /&gt;
QUOTAUGIDLIMIT=&amp;quot;2000&amp;quot;&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/636&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/636&amp;quot;&lt;br /&gt;
NAMESERVER=&amp;quot;69.55.225.225 69.55.230.3&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;vzredhat-7.3/20030305&amp;quot;&lt;br /&gt;
VE_TYPE=&amp;quot;regular&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.225.229&amp;quot;&lt;br /&gt;
HOSTNAME=&amp;quot;textengine.net&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the hostname is set here, the disk space is set here, the number of inodes, the number of files that can be open, the number of tcp sockets, etc. - all are set here.&lt;br /&gt;
&lt;br /&gt;
In fact, everything that can be set on this customer system is set in this conf file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All interaction with the customer system is done with the VEID.  You start the system by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl start 999&lt;br /&gt;
&lt;br /&gt;
You stop it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl stop 999&lt;br /&gt;
&lt;br /&gt;
You execute commands in it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl exec 999 df -k&lt;br /&gt;
&lt;br /&gt;
You enter into it, via a root-shell backdoor with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and you set parameters for the system, while it is still running, with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; is the most commonly used command - we have aliased &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; since we use it so often. We’ll continue to use &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; in our examples, but feel free to use just &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say the user wants more diskspace.  You can cat their conf file and see:&lt;br /&gt;
&lt;br /&gt;
 DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So right now they have 4 gigs of space (DISKSPACE is expressed in 1K blocks).  You can then change it to 6 with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
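For reference, the --diskspace values are soft:hard limits in 1K blocks, so the arithmetic behind those numbers looks like this (the 6100000:6200000 above is just a round approximation of 6 GB; the margin chosen here is arbitrary):&lt;br /&gt;

```shell
# Convert a GB figure into the 1K-block soft:hard pair vzctl expects.
gb=6
soft=$((gb * 1024 * 1024))   # 6 GB expressed in 1K blocks = 6291456
hard=$((soft + 100000))      # hard limit slightly above the soft limit
echo "vzctl set 999 --diskspace ${soft}:${hard} --save"
```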
&lt;br /&gt;
IMPORTANT: all issuances of the vzctl set command need to end with &amp;lt;tt&amp;gt;--save&amp;lt;/tt&amp;gt; - if they don&#039;t, the setting will be set, but it will not be saved to the conf file, and they will not have those settings next time they boot.&lt;br /&gt;
&lt;br /&gt;
All of the tunables in the conf file can be set with the vzctl set command.  Note that in the conf file, and on the vzctl set command line, we always issue two numbers separated by a colon - that is because we are setting the soft and hard limits (in that order).  Always set the hard limit slightly above the soft limit, as you see it is in the conf file for all those settings.&lt;br /&gt;
&lt;br /&gt;
There are also things you can set with `&amp;lt;tt&amp;gt;vzctl set&amp;lt;/tt&amp;gt;` that are not in the conf file as settings, per se.  For instance, you can add IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --save&lt;br /&gt;
&lt;br /&gt;
or multiple IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --ipadd 10.10.20.30 --save&lt;br /&gt;
&lt;br /&gt;
or change the hostname:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --hostname www.example.com --save&lt;br /&gt;
&lt;br /&gt;
You can even set the nameservers:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --nameserver 198.78.66.4 --nameserver 198.78.70.180 --save&lt;br /&gt;
&lt;br /&gt;
Although you probably will never do that.&lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --disabled yes --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (&amp;lt;=3.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --onboot=no --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from using his control panel:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --offline_management=no --save &lt;br /&gt;
&lt;br /&gt;
You can suspend a VPS, so it can be resumed in the same state it was in when it was stopped (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl suspend 999&lt;br /&gt;
&lt;br /&gt;
and to resume it:&lt;br /&gt;
&lt;br /&gt;
 vzctl resume 999&lt;br /&gt;
&lt;br /&gt;
to see who owns a process:&lt;br /&gt;
 vzpid &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to mount up an unmounted ve:&lt;br /&gt;
 vzctl mount 827&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see network stats for CT&#039;s:&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One thing that sometimes comes up on older systems that we created with smaller defaults is that the system would run out of inodes.  The user will email and say they cannot create any more files or grow any files larger, but they will also say that they are not out of diskspace ... they are running:&lt;br /&gt;
&lt;br /&gt;
 df -k&lt;br /&gt;
&lt;br /&gt;
and seeing how much space is free - and they are not out of space.  They are most likely out of inodes - which they would see by running:&lt;br /&gt;
&lt;br /&gt;
 df -i&lt;br /&gt;
&lt;br /&gt;
So, the first thing you should do is enter their system with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and run:  &amp;lt;tt&amp;gt;df -i&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to confirm your theory.  Then exit their system.  Then simply cat their conf file and see what their inodes are set to (probably 200000:200000, since that was the old default on the older systems) and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskinodes 400000:400000 --save&lt;br /&gt;
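If you also want to see where the customer&#039;s inodes went, a per-directory file count does the trick. A self-contained sketch (it builds its own fixture in a temp dir so it runs anywhere; on a real virt you would run the final pipeline from inside &amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/root&amp;lt;/tt&amp;gt; instead):&lt;br /&gt;

```shell
# Build a tiny fixture: two dirs, one with more files than the other.
tmp=$(mktemp -d)
mkdir -p "$tmp/mail" "$tmp/www"
touch "$tmp/mail/a" "$tmp/mail/b" "$tmp/mail/c" "$tmp/www/index.html"
cd "$tmp"
# Count entries under each top-level dir; the biggest counts show
# where the inodes are going.
result=$(for d in */; do
  printf '%s %s\n' "$(find "$d" | wc -l | tr -d ' ')" "$d"
done | sort -rn)
echo "$result"
```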
&lt;br /&gt;
If they are not out of inodes, then a good possibility is that they have maxed out their numfile configuration variable, which controls how many files they can have in their system.  The current default is 7500 (which nobody has ever hit), but the old default was as low as 2000, so you would run something like:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --numfile 7500:7500 --save&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You cannot start or stop a VE if your pwd is its private (/vz/private/999) or root (/vz/root/999) directories, or anywhere below them.&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (linux) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all ve’s back up as soon as possible. Note the time, you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages output. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log – these will also likely need to be sent to virtuozzo for interpretation. If the messages are spewing too fast, hit ^O + H to start a screen log dump which you can observe after the machine is rebooted. Additionally, if the machine is responsive, you can get a trace to send to virtuozzo by hooking up a kvm and entering these 3 sequences:&lt;br /&gt;
&amp;lt;pre&amp;gt;alt+print screen+m&lt;br /&gt;
alt+print screen+p&lt;br /&gt;
alt+print screen+t&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If there are no messages, the machine may just be really busy - wait a bit (5-10min) to see if it comes back. If it&#039;s still pinging, odds are it’s just very busy. If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. if you can’t ssh into the DRAC card and issue racadm serveraction hardreset), then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console (&amp;lt;tt&amp;gt;tip virtxx&amp;lt;/tt&amp;gt;) immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts- look out for the RAID card output- does everything look healthy? The output may be scrambled, look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple times during the boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly then you won’t get any console output, 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
=== Start containers/VE&#039;s/VPSs ===&lt;br /&gt;
When the machine begins to start VE’s, it’s safe to leave the console and login via ssh. All virts should be set to auto start all the VEs after a crash. Further, most (newer) virts are set to “fastboot” their VE’s (to find out, do:&lt;br /&gt;
 grep -i fast /etc/sysconfig/vz &lt;br /&gt;
and look for &amp;lt;tt&amp;gt;VZFASTBOOT=yes&amp;lt;/tt&amp;gt;). If this was set prior to the machine’s crash (setting it after the machine boots will not have any effect until the vz service is restarted) it will start each ve as fast as possible, in serial, then go thru each VE (serially), shutting it down, running a vzquota (disk usage) check, then bringing it back up. The benefit is that all VE’s are brought up quickly (within 15min or so depending on the #), the downside is a customer watching closely will notice 2 outages – 1st the machine crash, 2nd their quota check (which will be a much shorter downtime, on the order of a few minutes). &lt;br /&gt;
&lt;br /&gt;
Where “fastboot” is not set to yes (i.e. on quar1), vz will start them consecutively, checking the quotas one at a time, and the 60th VE may not start until an hour or two later - this is not acceptable.&lt;br /&gt;
&lt;br /&gt;
The good news is, if you run vzctl start for a VE that is already started, you will simply get an error: &amp;lt;tt&amp;gt;VE is already started&amp;lt;/tt&amp;gt;.  Further, if you attempt to vzctl start a VE that is in the process of being started, you will simply get an error: unable to lock VE.  So, there is no danger in simply running scripts to start smaller sets of VEs.  If the system is not autostarting, then there is no issue, and even if it does, when it conflicts, one process (yours or the autostart) will lose, and just move on to the next one.&lt;br /&gt;
&lt;br /&gt;
A script has been written to assist with ve starts: [[#startvirt.pl|startvirt.pl]], which will start 6 ve’s at once until there are none left. If startvirt.pl is used on a system where “fastboot” was on, it will circumvent the fastboot for the ve’s it starts: they will go through the complete quota check before starting, so this is not advisable when a system has crashed. When a system is booted cleanly and there&#039;s no need for vzquota checks, startvirt.pl is safe and advisable to run.&lt;br /&gt;
&lt;br /&gt;
=== Make sure all containers are running ===&lt;br /&gt;
You can quickly get a feel for how many ve’s are started by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt4 log]# vs&lt;br /&gt;
VEID 16066 exist mounted running&lt;br /&gt;
VEID 16067 exist mounted running&lt;br /&gt;
VEID 4102 exist mounted running&lt;br /&gt;
VEID 4112 exist mounted running&lt;br /&gt;
VEID 4116 exist mounted running&lt;br /&gt;
VEID 4122 exist mounted running&lt;br /&gt;
VEID 4123 exist mounted running&lt;br /&gt;
VEID 4124 exist mounted running&lt;br /&gt;
VEID 4132 exist mounted running&lt;br /&gt;
VEID 4148 exist mounted running&lt;br /&gt;
VEID 4151 exist mounted running&lt;br /&gt;
VEID 4155 exist mounted running&lt;br /&gt;
VEID 42 exist mounted running&lt;br /&gt;
VEID 432 exist mounted running&lt;br /&gt;
VEID 434 exist mounted running&lt;br /&gt;
VEID 442 exist mounted running&lt;br /&gt;
VEID 450 exist mounted running&lt;br /&gt;
VEID 452 exist mounted running&lt;br /&gt;
VEID 453 exist mounted running&lt;br /&gt;
VEID 454 exist mounted running&lt;br /&gt;
VEID 462 exist mounted running&lt;br /&gt;
VEID 463 exist mounted running&lt;br /&gt;
VEID 464 exist mounted running&lt;br /&gt;
VEID 465 exist mounted running&lt;br /&gt;
VEID 477 exist mounted running&lt;br /&gt;
VEID 484 exist mounted running&lt;br /&gt;
VEID 486 exist mounted running&lt;br /&gt;
VEID 490 exist mounted running&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So to see how many ve’s have started:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep running | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And to see how many haven’t:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep down | wc -l&lt;br /&gt;
     0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And how many we should have running:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
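To get the VEIDs that are still down (not just how many), the same vs output can be filtered. A minimal sketch, assuming stopped containers show "down" in the last column of vs output as in the grep examples above (the helper name is hypothetical):

```shell
# Print the VEIDs of containers that are not running.
# Assumes `vs` lines look like "VEID 450 exist unmounted down",
# with the state in the last column.
vs_down() {
    awk '$NF == "down" { print $2 }'
}
```

You would pipe the real output through it, e.g. `vs | vs_down`, and could then feed the resulting list to vzctl one VE at a time.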
&lt;br /&gt;
Another tool you can use to see which ve’s have started, among other things, is [[#vzstat|vzstat]]. It will give you CPU, memory, and other stats on each ve and the overall system. It’s a good thing to watch as ve’s are starting (note the VENum parameter; it tells you how many have started):&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;4:37pm, up 3 days,  5:31,  1 user, load average: 1.57, 1.68, 1.79&lt;br /&gt;
VENum 40, procs 1705: running 2, sleeping 1694, unint 0, zombie 9, stopped 0&lt;br /&gt;
CPU [ OK ]: VEs  57%, VE0   0%, user   8%, sys   7%, idle  85%, lat(ms) 412/2&lt;br /&gt;
Mem [ OK ]: total 6057MB, free 9MB/54MB (low/high), lat(ms) 0/0&lt;br /&gt;
Swap [ OK ]: tot 6142MB, free 4953MB, in 0.000MB/s, out 0.000MB/s&lt;br /&gt;
Net [ OK ]: tot: in  0.043MB/s  402pkt/s, out  0.382MB/s 4116pkt/s&lt;br /&gt;
Disks [ OK ]: in 0.002MB/s, out 0.000MB/s&lt;br /&gt;
&lt;br /&gt;
  VEID ST    %VM     %KM         PROC    CPU     SOCK FCNT MLAT IP&lt;br /&gt;
     1 OK 1.0/17  0.0/0.4    0/32/256 0.0/0.5 39/1256    0    9 69.55.227.152&lt;br /&gt;
    21 OK 1.3/39  0.1/0.2    0/46/410 0.2/2.8 23/1860    0    6 69.55.239.60&lt;br /&gt;
   133 OK 3.1/39  0.1/0.3    1/34/410 6.3/2.8 98/1860    0    0 69.55.227.147&lt;br /&gt;
   263 OK 2.3/39  0.1/0.2    0/56/410 0.3/2.8 34/1860    0    1 69.55.237.74&lt;br /&gt;
   456 OK  17/39  0.1/0.2   0/100/410 0.1/2.8 48/1860    0   11 69.55.236.65&lt;br /&gt;
   476 OK 0.6/39  0.0/0.2    0/33/410 0.1/2.8 96/1860    0   10 69.55.227.151&lt;br /&gt;
   524 OK 1.8/39  0.1/0.2    0/33/410 0.0/2.8 28/1860    0    0 69.55.227.153&lt;br /&gt;
   594 OK 3.1/39  0.1/0.2    0/45/410 0.0/2.8 87/1860    0    1 69.55.239.40&lt;br /&gt;
   670 OK 7.7/39  0.2/0.3    0/98/410 0.0/2.8 64/1860    0  216 69.55.225.136&lt;br /&gt;
   691 OK 2.0/39  0.1/0.2    0/31/410 0.0/0.7 25/1860    0    1 69.55.234.96&lt;br /&gt;
   744 OK 0.1/17  0.0/0.5    0/10/410 0.0/0.7  7/1860    0    6 69.55.224.253&lt;br /&gt;
   755 OK 1.1/39  0.0/0.2    0/27/410 0.0/2.8 33/1860    0    0 192.168.1.4&lt;br /&gt;
   835 OK 1.1/39  0.0/0.2    0/19/410 0.0/2.8  5/1860    0    0 69.55.227.134&lt;br /&gt;
   856 OK 0.3/39  0.0/0.2    0/13/410 0.0/2.8 16/1860    0    0 69.55.227.137&lt;br /&gt;
   936 OK 3.2/52  0.2/0.4    0/75/410 0.2/0.7 69/1910    0    8 69.55.224.181&lt;br /&gt;
  1020 OK 3.9/39  0.1/0.2    0/60/410 0.1/0.7 55/1860    0    8 69.55.227.52&lt;br /&gt;
  1027 OK 0.3/39  0.0/0.2    0/14/410 0.0/2.8 17/1860    0    0 69.55.227.83&lt;br /&gt;
  1029 OK 1.9/39  0.1/0.2    0/48/410 0.2/2.8 25/1860    0    5 69.55.227.85&lt;br /&gt;
  1032 OK  12/39  0.1/0.4    0/80/410 0.0/2.8 41/1860    0    8 69.55.227.90&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you are all done, make sure that all the VEs really did get started: run vs one more time.&lt;br /&gt;
&lt;br /&gt;
Note the time all ve’s are back up, enter it into the crash log entry, and save it.&lt;br /&gt;
&lt;br /&gt;
Occasionally, a ve will not start automatically. The most common reason is that the ve was at its disk limit before the crash and will not start because it is over the limit. To overcome this, set the disk space to the current usage level (the system will give you this number when the ve fails to start), start the ve, then re-set the disk space back to the prior level. Lastly, contact the customer to let them know they’re out of disk (or allocate more disk if they&#039;re entitled to more).&lt;br /&gt;
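The recovery sequence above can be sketched as a helper that just prints the vzctl commands to run (a hypothetical convenience, not an existing tool; the VEID and diskspace numbers below are made-up examples):

```shell
# Print the disk-limit recovery sequence: raise the quota to current
# usage so the ve can start, start it, then restore the original limits.
# Hypothetical helper; review the printed commands before running them.
fix_disk_limit() {
    veid=$1 current_kb=$2 old_soft=$3 old_hard=$4
    echo "vzctl set $veid --diskspace ${current_kb}:${current_kb} --save"
    echo "vzctl start $veid"
    echo "vzctl set $veid --diskspace ${old_soft}:${old_hard} --save"
}

fix_disk_limit 1212 4400000 4194304 4613734
```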
&lt;br /&gt;
== Hitting performance barriers and fixing them ==&lt;br /&gt;
&lt;br /&gt;
There are multiple modes virtuozzo offers for allocating resources to a ve. We utilize 2: SLM and UBC parameters.&lt;br /&gt;
On our 4.x systems we use all SLM; it’s simpler to manage and understand. There are a few systems on virt19/18 that may also use SLM. Everything else uses UBC. &lt;br /&gt;
You can tell an SLM ve by:&lt;br /&gt;
&lt;br /&gt;
 SLMMODE=&amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in their conf file. &lt;br /&gt;
&lt;br /&gt;
TODO: detail SLM modes and parameters.&lt;br /&gt;
&lt;br /&gt;
If someone is in SLM mode and they hit memory resource limits, they simply need to upgrade to more memory.&lt;br /&gt;
&lt;br /&gt;
The following applies to everyone else (UBC).&lt;br /&gt;
&lt;br /&gt;
Customers will often email and say that they are getting out-of-memory errors; a common one is &amp;quot;cannot fork&amp;quot;. Basically, anytime you see something odd like this, it means they are hitting one of the limits set in their conf file.&lt;br /&gt;
&lt;br /&gt;
The conf file, however, simply shows their limits - how do we know what they are currently at?&lt;br /&gt;
&lt;br /&gt;
The answer is /proc/user_beancounters - this file contains the current status (and peaks) of their performance counters, and also counts how many times they have hit each barrier. The output looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;764: kmemsize         384113     898185    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          0        322        322          0&lt;br /&gt;
     privvmpages        1292       7108     610000     615000          0&lt;br /&gt;
     shmpages            270        528      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc               8         23        410        415          0&lt;br /&gt;
     physpages            48       5624          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages        641       6389      13019 2147483647          0&lt;br /&gt;
     numtcpsock            3         21       1210       1215          0&lt;br /&gt;
     numflock              1          3        107        117          0&lt;br /&gt;
     numpty                0          2         19         19          0&lt;br /&gt;
     numsiginfo            0          4        274        274          0&lt;br /&gt;
     tcpsndbuf             0      80928    1800000    1900000          0 &lt;br /&gt;
     tcprcvbuf             0     108976    1800000    1900000          0&lt;br /&gt;
     othersockbuf       2224      37568     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4272     200000     200000          0&lt;br /&gt;
     numothersock          3          9        650        660          0&lt;br /&gt;
     dcachesize        53922     100320     786432     818029          0&lt;br /&gt;
     numfile             161        382       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first column is the name of the counter in question - the same names we saw in the system&#039;s conf file. The second column is the _current_ value of that counter, the third column is the max that counter has ever risen to, the fourth column is the soft limit, and the fifth column is the hard limit (the same as the numbers in that system&#039;s conf file).&lt;br /&gt;
&lt;br /&gt;
The sixth number is the failcount - how many times the current usage has risen to hit the barrier.  It will increase as soon as the current usage hits the soft limit.&lt;br /&gt;
&lt;br /&gt;
The problem with /proc/user_beancounters is that it actually contains that set of data for every running VE - so you can&#039;t just cat /proc/user_beancounters - it is too long and you get info for every other running system.&lt;br /&gt;
&lt;br /&gt;
You can vzctl enter the system and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 9999&lt;br /&gt;
 cat /proc/user_beancounters&lt;br /&gt;
&lt;br /&gt;
inside their system, and you will just see the stats for their particular system, but entering their system every time you want to see it is cumbersome.&lt;br /&gt;
&lt;br /&gt;
So, I wrote a simple script called &amp;quot;vzs&amp;quot; which simply greps for the VEID and spits out the rest of that system&#039;s beancounter block (the 23 lines that follow). For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzs 765:&lt;br /&gt;
765: kmemsize        2007936    2562780    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          8        322        322          0&lt;br /&gt;
     privvmpages       26925      71126     610000     615000          0&lt;br /&gt;
     shmpages          16654      16750      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc              41         57        410        415          0&lt;br /&gt;
     physpages          1794      49160          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages       4780      51270      13019 2147483647          0&lt;br /&gt;
     numtcpsock           23         37       1210       1215          0&lt;br /&gt;
     numflock             17         39        107        117          0&lt;br /&gt;
     numpty                1          3         19         19          0&lt;br /&gt;
     numsiginfo            0          6        274        274          0&lt;br /&gt;
     tcpsndbuf         22240     333600    1800000    1900000          0&lt;br /&gt;
     tcprcvbuf             0     222656    1800000    1900000          0&lt;br /&gt;
     othersockbuf     104528     414944     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4448     200000     200000          0&lt;br /&gt;
     numothersock         73        105        650        660          0&lt;br /&gt;
     dcachesize       247038     309111     786432     818029          0&lt;br /&gt;
     numfile             904       1231       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That showed us just the portion of /proc/user_beancounters for system 765.&lt;br /&gt;
&lt;br /&gt;
When you run the vzs command, always add a : after the VEID.&lt;br /&gt;
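A minimal sketch of what such a vzs helper can look like (the actual script may differ; the optional file argument here is only so the function can be exercised against a sample file):

```shell
# vzs - print one container's block from /proc/user_beancounters.
# Sketch of the helper described above; requires GNU grep for -A, and
# assumes the "VEID:" field is preceded by whitespace, as it is in
# /proc/user_beancounters.
# Usage: vzs 765: [file]   (note the trailing colon on the VEID)
vzs() {
    # a block is the "VEID:" header line plus the 23 counter lines
    grep -A 23 "[[:space:]]${1}" "${2:-/proc/user_beancounters}"
}
```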
&lt;br /&gt;
So, if a customer complains about out-of-memory errors, or no more files, or no more ptys, or just has an unspecific complaint about processes dying, etc., the very first thing you need to do is check their beancounters with vzs. Usually you will spot an item that has a high failcount and needs to be upped.&lt;br /&gt;
&lt;br /&gt;
At that point you could simply up the counter with `vzctl set`. Generally pick a number 10-20% higher than the old one, and make the hard limit slightly larger than the soft limit. However, our systems now come in several levels, and those levels have more/different memory allocations. If someone is complaining about something other than a memory limit (pty, numiptent, numflock), it’s generally safe to increase it, at least to the same level as what’s in the /vzconf/4unlimited file on the newest virt. If someone is hitting a memory limit, first make sure they are given what they deserve:&lt;br /&gt;
&lt;br /&gt;
(refer to mgmt -&amp;gt; payments -&amp;gt; packages)&lt;br /&gt;
&lt;br /&gt;
To set those levels, you use the [[#setmem|setmem]] command. &lt;br /&gt;
&lt;br /&gt;
The alternate (DEPRECATED) method would be to use one of these commands:&lt;br /&gt;
256 &amp;lt;veid&amp;gt;&lt;br /&gt;
300 &amp;lt;veid&amp;gt;&lt;br /&gt;
384 &amp;lt;veid&amp;gt;&lt;br /&gt;
512 &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the levels were not right (run vzs &amp;lt;veid&amp;gt; before and after to see the effect), tell the customer they’ve been adjusted and be done with it. If the levels were right, tell the customer they must upgrade to a higher package, tell them how to see their level (control panel), and that they can reboot their system to escape this lockup condition.&lt;br /&gt;
&lt;br /&gt;
Customers can also complain that their site is totally unreachable, or that it is down. If the underlying machine is up and all seems well, you may notice in the beancounters that network-specific counters are failing - such as numtcpsock, tcpsndbuf or tcprcvbuf. This will keep them from talking on the network and make it seem like their system is down. Again, just up the limits and things should be fine.&lt;br /&gt;
&lt;br /&gt;
On virts 1-4, you should first look at the default settings for that item on a later virt, such as virt 8 - we have increased the defaults a lot since the early machines.  So, if you are going to up a counter on virt2, instead of upping it by 10-20%, instead up it to the new default that you see on virt8.&lt;br /&gt;
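The 10-20% rule of thumb can be sketched as a helper that prints the `vzctl set` command rather than running it (hypothetical; in practice, check the numbers against the defaults on the newer virts as described above):

```shell
# Print a `vzctl set` command that raises a UBC counter's soft barrier
# by ~15%, with the hard limit ~2% above the new soft limit.
# Hypothetical helper; the parameter and numbers are examples only.
ubc_bump() {
    veid=$1 param=$2 old_soft=$3
    new_soft=$(( old_soft + old_soft * 15 / 100 ))
    new_hard=$(( new_soft + new_soft * 2 / 100 ))
    echo "vzctl set $veid --$param ${new_soft}:${new_hard} --save"
}

ubc_bump 765 numfile 7500
```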
&lt;br /&gt;
== Moving a VE to another virt (migrate/migrateonline) ==&lt;br /&gt;
&lt;br /&gt;
This will take a while to complete - and it is best to do this at night when the load is light on both machines.&lt;br /&gt;
&lt;br /&gt;
There are different methods for this, depending on which version of virtuozzo is installed on the src. and dst. virt. &lt;br /&gt;
To check which version is running: &lt;br /&gt;
 [root@virt12 private]# cat /etc/virtuozzo-release&lt;br /&gt;
 Virtuozzo release 2.6.0&lt;br /&gt;
&lt;br /&gt;
Ok, let&#039;s say that the VE is 1212, and vital stats are:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1212&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/1212&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/1212&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;fedora-core-2/20040903&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.229.84&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;devel-fc2/20040903 php-fc2/20040813 mysql-fc2/20040812 postgresql-fc2/20040813 mod_perl-fc2/20040812 mod_ssl-fc2/20040811 jre-fc2/20040823 jdk-fc2/20040823 mailman-fc2/20040823 analog-fc2/20040824 proftpd-fc2/20040818 tomcat-fc2/20040823 usermin-fc2/20040909 webmin-fc2/20040909 uw-imap-fc2/20040830 phpBB-fc2/20040831 spamassassin-fc2/20040910 PostNuke-fc2/20040824 sl-webalizer-fc2/20040&lt;br /&gt;
818&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# vzctl exec 1212 df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  405M  3.7G  10% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From this you can see that he’s using (and will minimally need free on the dst server) ~400MB, and he’s running on a Fedora 2 template, version 20040903. He’s also got a bunch of other templates installed. It is &#039;&#039;&#039;vital&#039;&#039;&#039; that &#039;&#039;&#039;all&#039;&#039;&#039; these templates exist on the dst system. To confirm that, on the dst system run:&lt;br /&gt;
&lt;br /&gt;
For &amp;lt; 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt14 private]# vzpkgls | grep fc2&lt;br /&gt;
devel-fc2 20040903&lt;br /&gt;
PostNuke-fc2 20040824&lt;br /&gt;
analog-fc2 20040824&lt;br /&gt;
awstats-fc2 20040824&lt;br /&gt;
bbClone-fc2 20040824&lt;br /&gt;
jdk-fc2 20040823&lt;br /&gt;
jre-fc2 20040823&lt;br /&gt;
mailman-fc2 20040823&lt;br /&gt;
mod_frontpage-fc2 20040816&lt;br /&gt;
mod_perl-fc2 20040812&lt;br /&gt;
mod_ssl-fc2 20040811&lt;br /&gt;
mysql-fc2 20040812&lt;br /&gt;
openwebmail-fc2 20040817&lt;br /&gt;
php-fc2 20040813&lt;br /&gt;
phpBB-fc2 20040831&lt;br /&gt;
postgresql-fc2 20040813&lt;br /&gt;
proftpd-fc2 20040818&lt;br /&gt;
sl-webalizer-fc2 20040818&lt;br /&gt;
spamassassin-fc2 20040910&lt;br /&gt;
tomcat-fc2 20040823&lt;br /&gt;
usermin-fc2 20040909&lt;br /&gt;
uw-imap-fc2 20040830&lt;br /&gt;
webmin-fc2 20040909&lt;br /&gt;
[root@virt14 private]# vzpkgls | grep fedora&lt;br /&gt;
fedora-core-1 20040121 20040818&lt;br /&gt;
fedora-core-devel-1 20040121 20040818&lt;br /&gt;
fedora-core-2 20040903&lt;br /&gt;
[root@virt14 private]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For these older systems, you can simply match up the date on the template. &lt;br /&gt;
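Matching up template dates by hand is error-prone, so the comparison can be scripted as pure text processing. A sketch with hypothetical input files, given the CT&#039;s TEMPLATES entries as name/version lines and the dst&#039;s vzpkgls output as name version lines:

```shell
# Print any "name/version" template the CT needs that does not appear
# as "name version" in the destination's vzpkgls output.
# Sketch only: $1 = file of name/version entries, $2 = vzpkgls output.
missing_templates() {
    while IFS=/ read -r name ver; do
        grep -q "^$name $ver" "$2" || echo "$name/$ver"
    done < "$1"
}
```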
&lt;br /&gt;
For &amp;gt;= 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt19 /vz2/private]# vzpkg list&lt;br /&gt;
centos-5-x86                    2008-01-07 22:05:57&lt;br /&gt;
centos-5-x86    devel&lt;br /&gt;
centos-5-x86    jre&lt;br /&gt;
centos-5-x86    jsdk&lt;br /&gt;
centos-5-x86    mod_perl&lt;br /&gt;
centos-5-x86    mod_ssl&lt;br /&gt;
centos-5-x86    mysql&lt;br /&gt;
centos-5-x86    php&lt;br /&gt;
centos-5-x86    plesk9&lt;br /&gt;
centos-5-x86    plesk9-antivirus&lt;br /&gt;
centos-5-x86    plesk9-api&lt;br /&gt;
centos-5-x86    plesk9-atmail&lt;br /&gt;
centos-5-x86    plesk9-backup&lt;br /&gt;
centos-5-x86    plesk9-horde&lt;br /&gt;
centos-5-x86    plesk9-mailman&lt;br /&gt;
centos-5-x86    plesk9-mod-bw&lt;br /&gt;
centos-5-x86    plesk9-postfix&lt;br /&gt;
centos-5-x86    plesk9-ppwse&lt;br /&gt;
centos-5-x86    plesk9-psa-firewall&lt;br /&gt;
centos-5-x86    plesk9-psa-vpn&lt;br /&gt;
centos-5-x86    plesk9-psa-fileserver&lt;br /&gt;
centos-5-x86    plesk9-qmail&lt;br /&gt;
centos-5-x86    plesk9-sb-publish&lt;br /&gt;
centos-5-x86    plesk9-vault&lt;br /&gt;
centos-5-x86    plesk9-vault-most-popular&lt;br /&gt;
centos-5-x86    plesk9-watchdog&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On these newer systems, it&#039;s difficult to tell whether the template on the dst matches the src exactly. Just because a centos-5-x86 is listed on both servers doesn&#039;t mean all the same packages are on the dst. To truly know, you must perform a sample rsync:&lt;br /&gt;
&lt;br /&gt;
 rsync -avn /vz/template/centos/5/x86/ root@10.1.4.61:/vz/template/centos/5/x86/&lt;br /&gt;
&lt;br /&gt;
If you see a ton of output from the dry run command, then clearly there are some differences. You may opt to let the rsync complete (without dry run mode); the only downside is that you&#039;ve now used up more space on the dst, and the centos template will be a mess of old and new data- it will be difficult if not impossible to undo (if someday we wanted to reclaim the space).&lt;br /&gt;
&lt;br /&gt;
If you choose to merge templates, you should closely inspect the dry run output. You should also take care to exclude anything in the /config directory. For example:&lt;br /&gt;
&lt;br /&gt;
 rsync -av -e ssh --stats --exclude=x86/config  /vz/template/ubuntu/10.04/ root@10.1.4.62:/vz/template/ubuntu/10.04/&lt;br /&gt;
&lt;br /&gt;
Which will avoid this directory and contents:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 /vz2/private]# ls /vz/template/ubuntu/10.04/x86/config*&lt;br /&gt;
app  os&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is important to avoid, since the config may differ on the destination and we are really only interested in making sure the packages are there, not overwriting a newer config with an older one.&lt;br /&gt;
&lt;br /&gt;
If the dst system was missing a template, you have 2 choices: &lt;br /&gt;
# put the missing template on the dst system. 2 choices here: &lt;br /&gt;
## Install the template from rpm (found under backup2: /mnt/data4/vzrpms/distro/) or &lt;br /&gt;
## rsync over the template (found under /vz/template) - see above&lt;br /&gt;
# put the ve on a system which has all the proper templates&lt;br /&gt;
&lt;br /&gt;
=== pre-seeding a migration ===&lt;br /&gt;
&lt;br /&gt;
When migrating a customer (or when doing many) depending on how much data you have to transfer, it can take some time. Further, it can be difficult to gauge when a migration will complete or how long it will take. To help speed up the process and get a better idea about how long it will take you can pre-transfer a customer&#039;s data to the destination server. If done correctly, vzmigrate will see the pre-transferred data and pick up where you left off, having much less to transfer (just changed/new files). &lt;br /&gt;
&lt;br /&gt;
We believe vzmigrate uses rsync to do its transfer. Therefore, not only can you use rsync to do a pre-seed, you can also run rsync to see what is causing a repeatedly-failing vzmigrate to fail. &lt;br /&gt;
&lt;br /&gt;
There&#039;s no magic to a pre-seed; you just need to make sure the destination directory is named correctly.&lt;br /&gt;
&lt;br /&gt;
Given:&lt;br /&gt;
&lt;br /&gt;
source: /vz1/private/1234&lt;br /&gt;
&lt;br /&gt;
and you want to migrate to /vz2 on the target system, your rsync would look like:&lt;br /&gt;
&lt;br /&gt;
 rsync -av /vz1/private/1234/ root@x.x.x.x:/vz2/private/1234.migrated/&lt;br /&gt;
&lt;br /&gt;
After running that rsync successfully, the ensuing migrateonline (or migrate) will take much less time to complete, depending on the # of files to be analyzed and the # of changed files. In any case, it&#039;ll be much, much faster than if you had started the migration from scratch.&lt;br /&gt;
&lt;br /&gt;
Further, as we discuss elsewhere in this topic, a failed migration can be moved from &amp;lt;tt&amp;gt;/vz/private/1234&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;/vz/private/1234.migrated&amp;lt;/tt&amp;gt; on the destination if you want to restart a failed migration. This should &#039;&#039;&#039;only&#039;&#039;&#039; be done if the migration failed and the CT is not running on the destination HN.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline instructions: src &amp;gt;=3.x -&amp;gt; dst&amp;gt;=3.x ===&lt;br /&gt;
&lt;br /&gt;
A script called [[#migrateonline|migrateonline]] was written to handle this kind of move. It is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt;, a util to seamlessly (no reboot of the ve necessary) move a ve from one host to another. This wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from arp/route tables, causing problems when the ve was started up on the dst system; [[#migrate|migrate]] mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrateonline emails ve owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as it was on the src virt. Note: &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; is equivalent to &amp;lt;tt&amp;gt;migrateonline&amp;lt;/tt&amp;gt;, but &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; will move the ve AND restart it in the process.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrateonline&lt;br /&gt;
usage: /usr/local/sbin/migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrateonline 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine.&lt;br /&gt;
&lt;br /&gt;
If they had backups, use the mvbackups command to move their backups to the new server:&lt;br /&gt;
&lt;br /&gt;
 mvbackups 1212 virt14 vz&lt;br /&gt;
&lt;br /&gt;
Rename the ve&#039;s files on the src for archival:&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/migrated-1212&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/old-1212-migrated-20120404-noarchive&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems in mgmt to reflect the new path and server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If migrateonline does not work, you can try again using plain migrate; this will result in a brief reboot for the ve.&lt;br /&gt;
Before you try again, make sure of a few things:&lt;br /&gt;
&lt;br /&gt;
Depending on where the migration died, there may be partial data on the dst system in 1 of 2 places:&lt;br /&gt;
(given the example above)&lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
Before you run migrate again, you&#039;ll want to rename the directory so that all the data is in 1212.migrated:&lt;br /&gt;
&lt;br /&gt;
 mv /vz/private/1212 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
this way, it will pick up where it left off and transfer only new files.&lt;br /&gt;
&lt;br /&gt;
Likewise, if you want to speed up a migration, you can pre-seed the dst as follows:&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# rsync -avSH /vz/private/1212/ root@10.1.4.64:/vz/private/1212.migrated/&lt;br /&gt;
&lt;br /&gt;
then when you run migrate or migrateonline, it will only need to move the changed files, and the migration will complete quickly.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline/migrate failures (migrate manually) ===&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say that, for whatever reason, the migration fails. If it fails with [[#migrateonline|migrateonline]], you should try [[#migrate|migrate]] (which will reboot the customer&#039;s ve, so notify them ahead of time).&lt;br /&gt;
&lt;br /&gt;
You may want to run a [[#pre-seeding_a_migration|pre-seed]] rsync to see if you can find the problem. On older virts, we&#039;ve seen this problem due to a large logfile (which you can find and encourage the customer to remove/compress):&lt;br /&gt;
 for f in `find / -size +1048576k`; do ls -lh $f; done&lt;br /&gt;
&lt;br /&gt;
You may also see migration failing due to quota issues.&lt;br /&gt;
&lt;br /&gt;
You can try to resolve by copying any quota file into the file you need:&lt;br /&gt;
&lt;br /&gt;
 cp /var/vzquota/quota.1 /var/vzquota/quota.xxx&lt;br /&gt;
&lt;br /&gt;
If it complains about quota running, you should then be able to stop it:&lt;br /&gt;
&lt;br /&gt;
 vzquota off xxxx&lt;br /&gt;
&lt;br /&gt;
If all else fails, migrate to a new VEID,&lt;br /&gt;
e.g. 1234 becomes 12341.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the rsync or [[#migrate|migrate]] fails, you can always move someone manually:&lt;br /&gt;
&lt;br /&gt;
1. stop ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v stop 1234&lt;br /&gt;
&lt;br /&gt;
2. copy over data&amp;lt;br&amp;gt;&lt;br /&gt;
 rsync -avSH /vz/private/1234/ root@1.1.1.1:/vzX/private/1234/&lt;br /&gt;
&lt;br /&gt;
NOTE: if you&#039;ve previously seeded the data (run rsync while the VE was up/running), and this is a subsequent rsync, make sure the last rsync you do (while the VE is not running, has the --delete option in the rsync)&lt;br /&gt;
&lt;br /&gt;
3. copy over conf&amp;lt;br&amp;gt;&lt;br /&gt;
 scp /vzconf/1234.conf root@1.1.1.1:/vzconf&lt;br /&gt;
&lt;br /&gt;
4. on dst, edit the conf to reflect the right vzX dir&amp;lt;br&amp;gt;&lt;br /&gt;
 vi /vzconf/1234.conf&lt;br /&gt;
&lt;br /&gt;
5. on src remove the IPs&amp;lt;br&amp;gt;&lt;br /&gt;
 ipdel 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
6. on dst add IPs &amp;lt;br&amp;gt;&lt;br /&gt;
 ipadd 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
7. on dst, start ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v start 1234&lt;br /&gt;
&lt;br /&gt;
8. cancel, then archive the ve on the src per the instructions above.&lt;br /&gt;
&lt;br /&gt;
=== migrate src=2.6.0 -&amp;gt; dst&amp;gt;=2.6.0, or mass-migration with customer notify ===&lt;br /&gt;
&lt;br /&gt;
A script called &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; was written to handle this kind of move. It is basically a wrapper for vzmigrate, a util to seamlessly move a ve from one host to another. This wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the ve’s ip(s) on the src system were not properly removed from arp/route tables, causing problems when the ve was started up on the dst system; migrate mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrate emails ve owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location as it was on the src virt. Note: migrateonline is equivalent to migrate, but will migrate a ve from one 2.6 &#039;&#039;&#039;kernel&#039;&#039;&#039; machine to another 2.6 kernel machine without restarting the ve.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrate&lt;br /&gt;
usage: /usr/local/sbin/migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrate 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which migrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
&lt;br /&gt;
On 2.6.1 you’ll also have to move the private area:&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, [[#cancelve|cancelve]] would offer to remove them. You want to say &#039;&#039;&#039;no&#039;&#039;&#039; to this option – removing them would mean the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (note the path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems as in the example above will create actual files and consume much more space on the target drive.&lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
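The same-drive case is safe because mv within one filesystem only renames directory entries, so hardlinked backup files keep sharing a single inode. A quick self-contained check (temporary paths, not the real backup tree):&lt;br /&gt;

```shell
#!/bin/sh
set -e
t=$(mktemp -d)
mkdir "$t/data1" "$t/data2"
echo payload > "$t/data1/1212.file"
ln "$t/data1/1212.file" "$t/data1/1212.link"   # hardlink: same inode, no extra space
mv "$t/data1/1212.link" "$t/data2/1212.link"   # same filesystem: inode unchanged
# -ef tests whether two paths refer to the same inode
[ "$t/data1/1212.file" -ef "$t/data2/1212.link" ] && echo "still hardlinked"
rm -rf "$t"
```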
&lt;br /&gt;
To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, then update the system, the template (should be pre-selected as the same), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
=== vzmigrate: src=2.6.1 -&amp;gt; dst&amp;gt;=2.6.0 ===&lt;br /&gt;
&lt;br /&gt;
This version of vzmigrate works properly with regard to handling ips. It will not notify ve owners of moves as in the above example. Other than that it’s essentially the same.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]#  vzmigrate 10.1.4.64 -r no 1212:1212:/vz/private/1212:/vz/root/1212&lt;br /&gt;
migrating on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which vzmigrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
[root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, &amp;lt;tt&amp;gt;cancelve&amp;lt;/tt&amp;gt; would offer to remove them. You want to say no to this option – removing them would mean the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (note the path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems as in the example above will create actual files and consume much more space on the target drive.&lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, then update the system, the template (should be pre-selected as the same), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
=== src=2.5.x ===&lt;br /&gt;
&lt;br /&gt;
First, go to the private dir:&lt;br /&gt;
&lt;br /&gt;
 cd /vz1/private/&lt;br /&gt;
&lt;br /&gt;
Stop the VE - make sure it stops totally cleanly.&lt;br /&gt;
 &lt;br /&gt;
 vzctl stop 1212&lt;br /&gt;
&lt;br /&gt;
Then you’d use vemove - a script written to copy over the config, create tarballs of the ve’s data on the destination virt, and cancel the ve on the source system (in this example we’re going to put a ve that was in /vz1/private on the src virt, in /vz/private on the dst virt):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vemove&lt;br /&gt;
ERROR: Usage: vemove veid target_ip target_path_dir&lt;br /&gt;
[root@virt12 sbin]# vemove 1212 10.1.4.64 /vz/private/1212&lt;br /&gt;
tar cfpP - 1212 --ignore-failed-read | (ssh -2 -c arcfour 10.1.4.64 &amp;quot;split - -b 1024m /vz/private/1212.tar&amp;quot; )&lt;br /&gt;
scp /vzconf/1212.conf 10.1.4.64:/vzconf&lt;br /&gt;
cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, cancelve would offer to remove them. You want to say no to this option – removing them would mean the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (note the path changes in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems as in the example above will create actual files and consume much more space on the target drive.&lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
When you are done, go to /vz/private on the dst virt; you will have files like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;1212.taraa&lt;br /&gt;
1212.tarab&lt;br /&gt;
1212.tarac&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each one is 1024m (or less, for the last one) in size.&lt;br /&gt;
&lt;br /&gt;
Then, on the dst server, run:&lt;br /&gt;
&lt;br /&gt;
 cat 1212.tar?? | tar xpPBf -&lt;br /&gt;
&lt;br /&gt;
and after 20 mins or so it will be totally untarred.  Now since the conf&lt;br /&gt;
file is already there, you can go ahead and start the system.&lt;br /&gt;
&lt;br /&gt;
 vzctl start 1212&lt;br /&gt;
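The split/cat round trip works because split names its chunks in lexical order (taraa, tarab, ...), so concatenating the glob reproduces the original tar stream byte for byte. A miniature local version with plain tar and tiny chunks (the real procedure uses the Virtuozzo tar, 1024m chunks, and ssh):&lt;br /&gt;

```shell
#!/bin/sh
set -e
work=$(mktemp -d)
cd "$work"
mkdir -p 1212/etc
echo "hostname example" > 1212/etc/conf

# Pack and split into 1k chunks, mimicking: tar cfpP - 1212 | split - -b 1024m ...
tar cf - 1212 | split -b 1k - 1212.tar

# Reassemble and extract, mimicking: cat 1212.tar?? | tar xpPBf -
mkdir restore
cd restore
cat ../1212.tar?? | tar xf -
[ -f 1212/etc/conf ] && echo "restored"
cd /
rm -rf "$work"
```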
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, then update the system, the template (should be pre-selected as the same), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
NOTE: you MUST tar the system up using the virtuozzo version of tar that&lt;br /&gt;
is on all the virt systems, and further you MUST untar the tarball with&lt;br /&gt;
the virtuozzo tar, using these options:  `&amp;lt;tt&amp;gt;tar xpPBf -&amp;lt;/tt&amp;gt;`&lt;br /&gt;
&lt;br /&gt;
If you tar up an entire VE and move it to a non-virtuozzo machine, that is&lt;br /&gt;
ok, and you can untar it there with normal tar commands, but do not untar&lt;br /&gt;
it and then repack it with a normal tar and expect it to work - you need&lt;br /&gt;
to use virtuozzo tar commands on virtuozzo tarballs to make it work.&lt;br /&gt;
&lt;br /&gt;
The backups are sort of an exception, since we are just (usually)&lt;br /&gt;
restoring user data that was created after we gave them the system, and&lt;br /&gt;
therefore has nothing to do with magic symlinks or vz-rpms, etc.&lt;br /&gt;
&lt;br /&gt;
== Moving a VE on the same virt ==&lt;br /&gt;
&lt;br /&gt;
Easy way:&amp;lt;br&amp;gt;&lt;br /&gt;
Scenario 1: ve 123 is to be renamed 1231 and moved from vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:1231:/vz/private/1231:/vz/root/1231&lt;br /&gt;
&lt;br /&gt;
Scenario 2: ve 123 is to be moved from vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:123:/vz/private/123:/vz/root/123&lt;br /&gt;
&lt;br /&gt;
vzmlocal will reboot the ve at the end of the move.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; don&#039;t forget to fix /usr/local/sbin/backup.config for new disk location&lt;br /&gt;
&lt;br /&gt;
Manual/old way:&lt;br /&gt;
&lt;br /&gt;
1) &amp;lt;tt&amp;gt;vzctl stop 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2) &amp;lt;tt&amp;gt;mv /vz1/private/123 /vz/private/.&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(or cp -a if you want to copy)&lt;br /&gt;
3) in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts/123.conf&amp;lt;/tt&amp;gt; change value&amp;lt;br&amp;gt;&lt;br /&gt;
of &#039;&amp;lt;tt&amp;gt;VE_PRIVATE&amp;lt;/tt&amp;gt;&#039; variable to point to a new private area location&lt;br /&gt;
4) &amp;lt;tt&amp;gt;vzctl start 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
5) update backups if needed: &amp;lt;tt&amp;gt;mvbackups 123 virtX virt1 vz&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
6) update management screens&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notes: a) the absolute path to the private area is stored in the quota file &amp;lt;tt&amp;gt;/var/vzquota/quota.123&amp;lt;/tt&amp;gt; - so during the first startup the quota will be recalculated.&amp;lt;br&amp;gt;&lt;br /&gt;
b) if you&#039;re going to write a script to do the job, you MUST make sure that $VEID won&#039;t be expanded to &#039;&#039; in the ve config file - i.e. you need to escape the &#039;$&#039;. Otherwise you might end up with:&lt;br /&gt;
&lt;br /&gt;
 VE_PRIVATE=&amp;quot;/vz/private/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in config, and &#039;vzctl destroy&#039; for this VE ID &#039;&#039;&#039;will remove everything under /vz/private/ directory&#039;&#039;&#039;.&lt;br /&gt;
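Note (b) is a shell quoting issue: inside double quotes, $VEID is expanded by the generating script (usually to nothing), while an escaped \$VEID survives into the conf file for vzctl to expand later. A minimal illustration:&lt;br /&gt;

```shell
#!/bin/sh
set -e
unset VEID    # the generating script typically has no VEID set
# Dangerous: $VEID expands now, leaving the bare directory
bad="VE_PRIVATE=\"/vz/private/$VEID\""
# Safe: the backslash keeps the literal $VEID for the conf file
good="VE_PRIVATE=\"/vz/private/\$VEID\""
echo "$bad"     # VE_PRIVATE="/vz/private/"
echo "$good"    # VE_PRIVATE="/vz/private/$VEID"
```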
&lt;br /&gt;
== Adding a veth device to a VE ==&lt;br /&gt;
&lt;br /&gt;
Not totally sure what this is, but a customer asked for it and here&#039;s what we did (as instructed by vz support):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;v set 99 --netif_add eth99  --save&lt;br /&gt;
ipdel 99 69.55.230.58&lt;br /&gt;
v set 99 --ifname eth99 --ipadd 69.55.230.58 --save&lt;br /&gt;
v set 99 --ifname eth99 --gateway 69.55.230.1 --save&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active      eth0              veth77.77,veth99.99&lt;br /&gt;
veth_net          active&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
br99             bridge     net99&lt;br /&gt;
veth99.99        veth       net99&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        net99            69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br0 eth0&lt;br /&gt;
&lt;br /&gt;
(will remove 99 from orig net and move to veth_net)&lt;br /&gt;
vznetcfg net addif veth_net veth99.99&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
(delete the old crap)&lt;br /&gt;
vznetcfg net del net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then, to add another device in&lt;br /&gt;
&lt;br /&gt;
v set 77 --netif_add eth77  --save&lt;br /&gt;
ipdel 77 69.55.230.78&lt;br /&gt;
v set 77 --ifname eth77 --ipadd 69.55.230.78 --save&lt;br /&gt;
v set 77 --ifname eth77 --gateway 69.55.230.1 --save&lt;br /&gt;
v set 77 --save --ifname eth77 --network veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif veth_net veth77.77&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth       veth_net&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
another example&lt;br /&gt;
&lt;br /&gt;
v set 1182 --netif_add eth1182  --save&lt;br /&gt;
ipdel 1182 69.55.236.217&lt;br /&gt;
v set 1182 --ifname eth1182 --ipadd 69.55.236.217 --save&lt;br /&gt;
v set 1182 --ifname eth1182 --gateway 69.55.236.1 --save&lt;br /&gt;
vznetcfg net addif veth_net veth1182.1182&lt;br /&gt;
v set 1182 --save --ifname eth1182 --network veth_net&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unused/not working commands:&lt;br /&gt;
ifconfig veth99.0 0&lt;br /&gt;
vznetcfg net list&lt;br /&gt;
vznetcfg br new br99 net99&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg br show&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
&lt;br /&gt;
vznetcfg br new br1182 net1182&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif eth0 net1182&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&lt;br /&gt;
--- 8&amp;lt; ---&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new net&lt;br /&gt;
# vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vzctl set 99 --save --netif_add eth0 (at this stage veth99.0 interface have to appear&lt;br /&gt;
on node)&lt;br /&gt;
# vzctl set 99 --save --ifname eth0 --ipadd 69.55.230.58 (and probably few more arguments&lt;br /&gt;
here - see &#039;man vzctl&#039;)&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Assigning/removing ips for a VE ==&lt;br /&gt;
&lt;br /&gt;
1. Add or remove ips:&lt;br /&gt;
 ipdel 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
 ipadd 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
&lt;br /&gt;
2. update Mgmt screens&lt;br /&gt;
&lt;br /&gt;
3. offer to update any DNS we do for them&lt;br /&gt;
&lt;br /&gt;
4. check to see if we had rules for the old IP in the firewall&lt;br /&gt;
&lt;br /&gt;
== Enabling tun device for a ve ==&lt;br /&gt;
Note, there’s a command for this: [[#addtun|addtun]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
Make sure the tun.o module is already loaded before Virtuozzo is started: &lt;br /&gt;
 lsmod &lt;br /&gt;
Allow the VPS to use the TUN/TAP device: &lt;br /&gt;
 vzctl set 101 --devices c:10:200:rw --save &lt;br /&gt;
Create the corresponding device inside the VPS and set the proper permissions: &lt;br /&gt;
 vzctl exec 101 mkdir -p /dev/net &lt;br /&gt;
 vzctl exec 101 mknod /dev/net/tun c 10 200 &lt;br /&gt;
 vzctl exec 101 chmod 600 /dev/net/tun&lt;br /&gt;
&lt;br /&gt;
== Remaking a system (on same virt) ==&lt;br /&gt;
&lt;br /&gt;
1. [[#cancelve|cancelve]] (or v destroy x - ONLY if you&#039;re POSITIVE no data needs to be saved)&lt;br /&gt;
&lt;br /&gt;
2. [[#vemake|vemake]] using same veid&lt;br /&gt;
&lt;br /&gt;
3. [[#mvbackups|mvbackups]] or [[#vb|vb]] (if new mount point)&lt;br /&gt;
&lt;br /&gt;
4. update mgmt with new dir/ip &lt;br /&gt;
&lt;br /&gt;
5. update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Re-initialize quota for a VE ==&lt;br /&gt;
&lt;br /&gt;
There’s a command for this now: [[#clearquota|clearquota]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
&lt;br /&gt;
vzctl stop 1&lt;br /&gt;
vzquota drop 1&lt;br /&gt;
vzctl start 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Traffic accounting on linux ==&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - all tracking is done via bwdb now. This is how we used to track traffic.&lt;br /&gt;
&lt;br /&gt;
TODO: update for diff versions of vz&lt;br /&gt;
&lt;br /&gt;
Unlike FreeBSD, where we have to add firewall count rules to count the traffic, Virtuozzo counts the traffic for us. You can see the current traffic stats by running `vznetstat`:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, each VEID is on a line with its out and in bytes. So we simply run a cron job:&lt;br /&gt;
&lt;br /&gt;
 4,9,14,19,24,29,34,39,44,49,55,59 * * * * /root/vztrafdump.sh&lt;br /&gt;
&lt;br /&gt;
Just like we do on FreeBSD - this one goes through all the VEs in /vz/private and greps the line from vznetstat that matches them and dumps it in /jc_traffic_dump on their system.  Then it does it again for all the VEs in /vz1/private.  It is important to note that vznetstat runs only once, and the grepping is done from a temporary file that contains that output - we do this because running vznetstat once for each VE that we read out of /vz/private and /vz1/private would take way too long and be too intensive.&lt;br /&gt;
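The run-once-then-grep pattern described above can be sketched as follows; the here-document stands in for real vznetstat output, and the fixed veid list stands in for the /vz*/private directory scan:&lt;br /&gt;

```shell
#!/bin/sh
set -e
# Snapshot the stats once to a temp file instead of running vznetstat per ve.
snap=$(mktemp)
cat > "$snap" <<'EOF'
24218     1            484M             39M
2451      1           2224M            265M
EOF
for veid in 24218 2451; do
    # One grep against the snapshot per ve (columns: veid class out in)
    grep "^$veid " "$snap" | awk -v id="$veid" '{print id, "out=" $3, "in=" $4}'
done
rm -f "$snap"
```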
&lt;br /&gt;
You do not need to do anything to facilitate this other than make sure that cron job is running - the vznetstat counters are always running, and any new VEs that are added to the system will be accounted for automatically.&lt;br /&gt;
&lt;br /&gt;
Traffic resetting no longer works with vz 2.6, so we disable the vztrafdump.sh on those virts.&lt;br /&gt;
&lt;br /&gt;
== Watchdog script ==&lt;br /&gt;
&lt;br /&gt;
On some of the older virts, we have a watchdog running that kills procs that are deemed bad per the following:&lt;br /&gt;
&lt;br /&gt;
/root/watchdog from quar1&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [5-9]...&lt;br /&gt;
  then&lt;br /&gt;
# 50-90%&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;...:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;....m&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 1000min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
  if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [1-9]...&lt;br /&gt;
  then&lt;br /&gt;
# running for 10-90 percent&lt;br /&gt;
    if echo $line | awk &#039;{print $NF}&#039; | egrep &#039;cfusion|counter|vchkpw&#039;&lt;br /&gt;
    then&lt;br /&gt;
&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[2-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# between 2-9min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      elif echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[0-9][0-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# up to 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
    fi&lt;br /&gt;
  fi&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Misc Linux Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space ... when you configure a linux system with a certain amount of disk space (the default is 4gigs) you do not actually use up 4gigs of space on the system.  The diskspace setting for a user is simply a cap, and they only use up as much space on the actual disk drive as they are actually using.&lt;br /&gt;
&lt;br /&gt;
When you create a new linux system, even though there are some 300 RPMs or so installed, if you run `df -k` you will see that the entire 4gig partition is empty - no space is being used.  This is because the files in their system are &amp;quot;magic symlinks&amp;quot; to the template for their OS that is in /vz/template - however, any changes to any of those files will &amp;quot;disconnect&amp;quot; them and they will immediately begin using space in their system.  Further, any new files uploaded (even if those new files overwrite existing files) will take up space on the partition.&lt;br /&gt;
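The accounting behaves much like hardlinks: names sharing an inode are counted once, and an independent copy immediately consumes its own blocks. A rough analogy in plain shell (this is not the actual magic-symlink mechanism, just an illustration of shared vs. private space):&lt;br /&gt;

```shell
#!/bin/sh
set -e
t=$(mktemp -d)
# "template" file plus a linked name: one inode, counted once by du
dd if=/dev/zero of="$t/template" bs=1024 count=64 2>/dev/null
ln "$t/template" "$t/ve_file"
shared=$(du -sk "$t" | awk '{print $1}')
# Replace the linked name with an independent copy (a "modified" file)
cp "$t/template" "$t/copy" && mv "$t/copy" "$t/ve_file"
private=$(du -sk "$t" | awk '{print $1}')
[ "$private" -gt "$shared" ] && echo "independent copy uses its own space"
rm -rf "$t"
```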
&lt;br /&gt;
=== Can&#039;t change file /etc/sysconfig/network ===&lt;br /&gt;
if you see this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt8 root]# vzctl stop 160 ; vzctl start 160&lt;br /&gt;
VE is not running&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is unmounted&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
bash ERROR: Can&#039;t change file /etc/sysconfig/network&lt;br /&gt;
Deleting IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
VE is unmounted&lt;br /&gt;
[root@virt8 root]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
it probably means they no longer have /bin/bash - copy one in for them&lt;br /&gt;
 &lt;br /&gt;
ALSO: another possibility is that they have removed the `ed` RPM from their system - it needs to be reinstalled. But since their system is down, this is tricky ...&lt;br /&gt;
&lt;br /&gt;
VE startup scripts used by &#039;vzctl&#039; want the &#039;ed&#039; package to be available inside the VE. So if the &#039;ed&#039; package is enabled in the OS template config and in the OS template the VE (#827 in this case) is based on, this error should be fixed.&lt;br /&gt;
&lt;br /&gt;
Yes, it is possible to add an RPM to a VE while it is not running.&lt;br /&gt;
Try the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# cd /vz/template/&amp;lt;OS_template_with_ed_package&amp;gt;/&lt;br /&gt;
# vzctl mount 827&lt;br /&gt;
# rpm -Uvh --root /vz/root/827 --veid 827 ed-0.2-25.i386.vz.rpm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usually there&#039;s an error, but it&#039;s ok.&lt;br /&gt;
&lt;br /&gt;
Note: replace &#039;ed-0.2-25.i386.vz.rpm&#039; in the last command with the actual&lt;br /&gt;
version of the &#039;ed&#039; package you have.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== What template is user using ===&lt;br /&gt;
So how do you know what template the user has? cat their conf file and it is listed there. For example, if the conf file has:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1103&lt;br /&gt;
…snip…&lt;br /&gt;
OSTEMPLATE=&amp;quot;debian-3.0/20030822&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;mod_perl-deb30/20030707 mod_ssl-deb30/20030703 mysql-deb30/20030707 proftpd-deb30/20030703 webmin-deb30/20030823 &amp;quot;&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then they are on debian 3.0, all of their system RPMs are in /vz/template/debian-3.0, and they are using version 20030822 of that debian 3.0 template. They’ve also got additional packages installed (mod_perl, mod_ssl, etc.); those are also found under /vz/template.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Edits needed to run java ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When we first created the VEs, the default setting for privvmpages was 93000:94000 ... which was high enough that most people never had problems ... however, you can&#039;t run java or the jdk or tomcat or anything java-related with that setting. We have found that java runs just fine with privvmpages set to 610000:615000, so that is now the default setting. It is exceedingly rare that anyone needs it higher than that, although we have seen it once or twice.&lt;br /&gt;
&lt;br /&gt;
Any problems with java at all - the first thing you need to do is see if the failcnt has risen for privvmpages.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzctl start 160&lt;br /&gt;
Starting VE ...&lt;br /&gt;
vzquota : (error) Quota on syscall for 160: Device or resource busy&lt;br /&gt;
Running vzquota on failed for VE 160 [3]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is because my pwd is _in_ their private directory - you can&#039;t start it until you move out&lt;br /&gt;
&lt;br /&gt;
People seem to have trouble with php if they are clueless newbies.  Here are two common problems/solutions:&lt;br /&gt;
&lt;br /&gt;
no... but i figured it out myself. the problem was that the php.ini file that came&lt;br /&gt;
vanilla with the account was not configured to work with apache (the&lt;br /&gt;
ENGINE directive was set to off).&lt;br /&gt;
&lt;br /&gt;
everything else seems fine now.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
the problem was in the php.ini file. I noticed that it wasn&#039;t showing&lt;br /&gt;
the code when it was in an html file, so I looked at the php.ini file&lt;br /&gt;
and had to change it so it recognized &amp;lt;? tags as well as &amp;lt;?php tags.&lt;br /&gt;
&lt;br /&gt;
Also, make sure this is added to httpd.conf:&lt;br /&gt;
    AddType application/x-httpd-php .php&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Set time zone ===&lt;br /&gt;
&lt;br /&gt;
You can change the time zone by doing this:&lt;br /&gt;
&lt;br /&gt;
 ln -sf /usr/share/zoneinfo/&amp;lt;zone&amp;gt; /etc/localtime&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;zone&amp;gt; is the zone you want in the /usr/share/zoneinfo/ directory.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Failing shm_open calls ===&lt;br /&gt;
&lt;br /&gt;
If shm_open calls are failing, first check whether /dev/shm is mounted inside the VE.&lt;br /&gt;
&#039;cat /proc/mounts&#039; should show something like this:&lt;br /&gt;
 tmpfs /dev/shm tmpfs rw 0 0&lt;br /&gt;
&lt;br /&gt;
If /dev/shm is not mounted, there are two ways to solve the issue:&lt;br /&gt;
1. execute the following command inside the VE (doesn&#039;t require a VE reboot):&lt;br /&gt;
 mount -t tmpfs none /dev/shm&lt;br /&gt;
2. add the following line to /etc/fstab inside the VE and reboot it:&lt;br /&gt;
 tmpfs         /dev/shm        tmpfs           defaults        0 0&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You can have a ve mounted but not running. Just:&lt;br /&gt;
 vzctl mount &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Debian can&#039;t get on the network ===&lt;br /&gt;
When a Debian system can’t get on the network and you try:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set 1046 --ipadd 69.55.227.117&lt;br /&gt;
Adding IP address(es): 69.55.227.117&lt;br /&gt;
Failed to bring up lo.&lt;br /&gt;
Failed to bring up venet0.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They probably removed the iproute package, which must be the one from SWsoft. To restore it:&lt;br /&gt;
&amp;lt;pre&amp;gt;# dpkg -i --veid=1046 --admindir=/vz1/private/1046/root/var/lib/dpkg --instdir=/vz1/private/1046/root/ /vz/template/debian-3.0/iproute_20010824-8_i386.vz.deb&lt;br /&gt;
(Reading database ... 16007 files and directories currently installed.)&lt;br /&gt;
Preparing to replace iproute 20010824-8 (using .../iproute_20010824-8_i386.vz.deb) ...&lt;br /&gt;
Unpacking replacement iproute ...&lt;br /&gt;
Setting up iproute (20010824-8) ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then restart their ve.&lt;br /&gt;
&lt;br /&gt;
=== CentOS can&#039;t get on the network ===&lt;br /&gt;
&lt;br /&gt;
When a CentOS system is running but can&#039;t get on the network, they probably updated iproute via yum.&lt;br /&gt;
&lt;br /&gt;
Copy an older version of the iproute rpm to their VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cp -p /root/iproute/iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm /vz2/private/&amp;lt;1527&amp;gt;/fs/root&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enter their ve, install rpm, restart network, and prevent yum from updating iproute.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ve &amp;lt;1527&amp;gt;&lt;br /&gt;
rpm -i --force iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm&lt;br /&gt;
rpm -q iproute&lt;br /&gt;
rpm -e &amp;lt;new iproute rpm&amp;gt;&lt;br /&gt;
service network restart&lt;br /&gt;
echo &amp;quot;exclude=iproute&amp;quot; &amp;gt;&amp;gt;/etc/yum/ip-exclude&lt;br /&gt;
echo &amp;quot;include=/etc/yum/ip-exclude&amp;quot; &amp;gt;&amp;gt;/etc/yum.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
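&lt;br /&gt;
A simpler variant worth knowing (assuming a reasonably modern yum): exclude= is also honored directly in the [main] section of /etc/yum.conf, which avoids the separate include file. Sketched here against a scratch copy rather than the live file:&lt;br /&gt;

```shell
# Append an exclude line to yum.conf's [main] section (scratch copy here;
# on the VE the target would be /etc/yum.conf, which requires root):
conf=$(mktemp)
printf '[main]\n' > "$conf"
echo "exclude=iproute" >> "$conf"
grep exclude "$conf"   # exclude=iproute
```
&lt;br /&gt;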
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
In a ve I do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /&lt;br /&gt;
du -h .&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and get: 483M    .&lt;br /&gt;
&lt;br /&gt;
I do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;bash-2.05a# df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  2.3G  1.7G  56% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
How can this be?&lt;br /&gt;
&lt;br /&gt;
Is it possible that the quota file was corrupted somehow? Please try:&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl stop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota drop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota init &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzctl start &amp;lt;VEID&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
How to stop vz from starting after reboot:&lt;br /&gt;
&lt;br /&gt;
 VIRTUOZZO=no &lt;br /&gt;
in &lt;br /&gt;
 /etc/sysconfig/vz&lt;br /&gt;
&lt;br /&gt;
To start: &lt;br /&gt;
 service vz start&lt;br /&gt;
(after setting VIRTUOZZO=yes in /etc/sysconfig/vz)&lt;br /&gt;
&lt;br /&gt;
service vz restart will do some kind of &#039;soft reboot&#039; -- restart all&lt;br /&gt;
VPSes and reload modules without rebooting the node&lt;br /&gt;
&lt;br /&gt;
If you need to shut down all VPSes very quickly, run:&lt;br /&gt;
 killall -9 init&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Postfix tip:&lt;br /&gt;
&lt;br /&gt;
You may want to tweak this setting: default_process_limit=10&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* The uptime/load that vz users see isn’t what we see- it&#039;s their own load&lt;br /&gt;
* Vz overwrites resolv.conf on every boot with what’s in the ve config file. It also overwrites the hostname&lt;br /&gt;
* Quotas in vz: set the Gid_limit in the ve config file and restart the ve. The errors leading up to this will be &amp;quot;out of files&amp;quot; errors, but no inode or space issues. Set it to 2000 in the conf and restart the ve&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
== vm ==&lt;br /&gt;
&lt;br /&gt;
To create a new Linux VPS&lt;br /&gt;
&lt;br /&gt;
* get an IP address from mgmt screen&lt;br /&gt;
* run df -h to figure out which disk has enough space&lt;br /&gt;
* run vm with no arguments to get syntax and exact template name&lt;br /&gt;
* use col0xxxx for hostname if they don’t give you a hostname&lt;br /&gt;
* copy over dir, veid, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
 vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; )&lt;br /&gt;
&lt;br /&gt;
== disable ==&lt;br /&gt;
To disable a customer’s Virtuozzo VPS server.&lt;br /&gt;
&lt;br /&gt;
* stop the server&lt;br /&gt;
  vzctl stop &amp;lt;veid&amp;gt;&lt;br /&gt;
* prevent restart on server reboot and user restart via Control Panel&lt;br /&gt;
  vzctl set &amp;lt;veid&amp;gt; --disabled=yes --offline_management=no --save&lt;br /&gt;
&lt;br /&gt;
== cancelve ==&lt;br /&gt;
When a customer cancels all service.&lt;br /&gt;
&lt;br /&gt;
 cancelve &amp;lt;veid&amp;gt;&lt;br /&gt;
this will:&lt;br /&gt;
* stop a ve&lt;br /&gt;
* check for backups (offer to remove them from the backup server &lt;br /&gt;
and the backup.config)&lt;br /&gt;
* rename the private dir&lt;br /&gt;
* check for PTR, provide the commands to reset to default&lt;br /&gt;
* rename the ve’s config&lt;br /&gt;
* remind you to remove firewall rules&lt;br /&gt;
* remind you to remove DNS entries&lt;br /&gt;
&lt;br /&gt;
== ipadd ==&lt;br /&gt;
 ipadd  &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
adds ip(s) to a ve&lt;br /&gt;
&lt;br /&gt;
== ipdel ==&lt;br /&gt;
 ipdel &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
removes ip(s) from a ve&lt;br /&gt;
&lt;br /&gt;
== vc ==&lt;br /&gt;
 vc &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
display the contents of the vz config file for the &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;cat /vzconf/&amp;lt;veid&amp;gt;.conf&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vl ==&lt;br /&gt;
 vl&lt;br /&gt;
&lt;br /&gt;
List all virtual environments running on this machine.&lt;br /&gt;
&lt;br /&gt;
displays a list of ve #’s, 1 per line (ostensibly for use in a for loop)&lt;br /&gt;
&lt;br /&gt;
== vp ==&lt;br /&gt;
 vp &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
show processes running under this &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzps auxww -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vpe ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vpe &amp;lt;veid&amp;gt; &lt;br /&gt;
this allowed you to do a vp when a ve was running out of control (deprecated since vp operates outside the VPS). The equivalent of:&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set &amp;lt;veid&amp;gt; --kmemsize 2100000:2200000&lt;br /&gt;
vzctl exec &amp;lt;veid&amp;gt; ps auxw&lt;br /&gt;
vzctl set &amp;lt;veid&amp;gt; --kmemsize (ve’s orig lvalue):(ve’s orig hvalue)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vt ==&lt;br /&gt;
 vt &amp;lt;veid&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vztop -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vr ==&lt;br /&gt;
 vr &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart a &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl stop &amp;lt;veid&amp;gt;; vzctl start &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
You can run this even if the ve is down - the stop command will just fail&lt;br /&gt;
&lt;br /&gt;
== vs ==&lt;br /&gt;
 vs [veid]&lt;br /&gt;
displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve configured on the system (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;)&lt;br /&gt;
If passed an argument, gives the status for just that ve. &lt;br /&gt;
A running system looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 16066 exist mounted running&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running (but does exist) looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 9990 exist unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running and doesn’t exist looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 421 deleted unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs2 ==&lt;br /&gt;
 vs2 [veid]&lt;br /&gt;
this is similar to vs in that it displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve,&lt;br /&gt;
but the difference is its list comes from doing an ls on the data dirs. This was meant to catch&lt;br /&gt;
the rare case where a ve exists but is not configured.&lt;br /&gt;
&lt;br /&gt;
== vw ==&lt;br /&gt;
 vw [veid]&lt;br /&gt;
displays the output of ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;’ (the equivalent of &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; w&amp;lt;/tt&amp;gt;) for each configured ve (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;). Useful for determining which ve is contributing to a heavily-loaded system.&lt;br /&gt;
If passed an argument, gives ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;‘ output for just that ve. &lt;br /&gt;
Ex:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt2 etc]# vw&lt;br /&gt;
134&lt;br /&gt;
 10:52pm  up 79 days,  6:14,  0 users,  load average: 0.02, 0.02, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16027&lt;br /&gt;
  2:52pm  up 7 days, 19:54,  0 users,  load average: 0.00, 0.00, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16055&lt;br /&gt;
  2:52pm  up 79 days,  6:38,  0 users,  load average: 0.00, 0.04, 0.07&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vwe ==&lt;br /&gt;
 vwe [constraint]&lt;br /&gt;
just like &amp;lt;tt&amp;gt;vw&amp;lt;/tt&amp;gt;, but takes a constraint as an argument and only shows ve’s with loads &amp;gt;= the constraint provided. If no constraint is provided, 1 is used by default&lt;br /&gt;
&lt;br /&gt;
== vzs ==&lt;br /&gt;
 vzs [veid]&lt;br /&gt;
displays the beancounter status for all ve’s, or a particular ve if an argument is passed&lt;br /&gt;
&lt;br /&gt;
== ve ==&lt;br /&gt;
 ve &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
enter a &amp;lt;veid&amp;gt; as root.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl enter &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vx ==&lt;br /&gt;
 vx &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&lt;br /&gt;
&lt;br /&gt;
execute a single command in the &amp;lt;veid&amp;gt; environment.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;/usr/sbin/vzctl exec &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs [count]&lt;br /&gt;
a script which outputs a continuous report (or a certain number of reports if an option is passed) of processes stuck in the D state and which VPS’s those procs belong to.&lt;br /&gt;
&lt;br /&gt;
== setmem ==&lt;br /&gt;
 setmem VEID &amp;lt;256|512|768|1024|1280|1536|2048&amp;gt; [burst GB]&lt;br /&gt;
adjusts the memory resources for the VE. If this is on VZ ver &amp;gt;4, you will also be prompted to provide a burst ram amount. Sample usage:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# setmem 21671 1024 4&lt;br /&gt;
/usr/sbin/vzctl set 21671 --ram 1024m --swap 3072m --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== afacheck.sh ==&lt;br /&gt;
 afacheck.sh&lt;br /&gt;
displays the health/status of containers and mirrors on an adaptec card (currently quar1, tempvirt1-2, virt9, virt10)- all others are LSI&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update virt scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be set up as a cronjob to watch processes on a virt when the load&lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== findbackuppigs.pl ==&lt;br /&gt;
 findbackuppigs.pl&lt;br /&gt;
looks for files larger than 50MB which customers have asked us to back up. Emails matches&lt;br /&gt;
to linux@johncompanies.com&lt;br /&gt;
&lt;br /&gt;
== gatherlinux.pl ==&lt;br /&gt;
 gatherlinux.pl&lt;br /&gt;
gathers up data about ve’s configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficgather.pl ==&lt;br /&gt;
 linuxtrafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by ve to support@johncompanies.com and payments@johncompanies.com.&lt;br /&gt;
Optional arguments are year and month (must be in the past). If not passed, assumes last month. Relies on &lt;br /&gt;
traffic logs created by netstatreset and netstatbackup&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch.pl ==&lt;br /&gt;
 linuxtrafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo versions &amp;lt;= 2.5.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch2.pl ==&lt;br /&gt;
 linuxtrafficwatch2.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo version 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg  - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== mb (linux) ==&lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== migrate ==&lt;br /&gt;
 migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; – vzmigrate is a util to seamlessly move a ve from one host to another. This wrapper was written because virtuozzo version 2.6 had a bug where the ve’s ip(s) on the src system were not properly removed from the arp/route tables; this script mitigates that. Since it makes multiple ssh connections to the target host, it’s a good idea to put the pub key for the src system in the authorized_keys file on the target host. In addition, it emails ve owners when their migration starts and stops (if they place email addresses in a file on their system: /migrate_notify). To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
== migrateonline ==&lt;br /&gt;
 migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is the same as migrate but will migrate a ve in &amp;lt;tt&amp;gt;--online&amp;lt;/tt&amp;gt; mode, which means it won’t be shut down at the end of the migration. This only works when migrating ve’s between 2 machines running a 2.6 kernel (currently tempvirt1-2, virt16-19, virt12). If you get an error that the machine you’re trying to migrate to has a different CPU or features, etc., then you have to edit the file and add the -f switch to the vzmigrate line- you can basically ignore this kind of warning (but never ignore a warning about missing templates on the destination node). NOTE: This edit (if made to migrateonline) will be overwritten by the base script during each night’s backup.&lt;br /&gt;
&lt;br /&gt;
== netstatbackup ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup &lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== netstatbackup2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== netstatreset ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset&lt;br /&gt;
writes traffic count data to a logfile and resets counters to 0. Works on virtuozzo versions 2.5.x &lt;br /&gt;
&lt;br /&gt;
== netstatreset2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatchlinux ==&lt;br /&gt;
 orphanedbackupwatchlinux &lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to &lt;br /&gt;
delete them&lt;br /&gt;
&lt;br /&gt;
== rsync.backup (linux) ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== startvirt.pl ==&lt;br /&gt;
 startvirt.pl&lt;br /&gt;
forks off start ve commands – keeps 6 running at a time. This is not to be used on systems where fastboot is enabled, as it circumvents the benefit of the fastboot. The script will occasionally not exit gracefully and will continue to use up CPU, so it should be watched. Don’t exit from the script till you’re sure all ve’s are started – if you do, you may need to start them manually and free up locks. If you do have to ^C out of it, be careful: doing so can leave some ve’s in an odd bootup state and you may need to ‘vr’ them manually, so check which ve’s aren’t running and confirm all have started.&lt;br /&gt;
&lt;br /&gt;
== taskdone (linux) ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was &lt;br /&gt;
executed as the subject&lt;br /&gt;
&lt;br /&gt;
== vb (linux) ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/sbin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vemakeXX ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemakerh9 &lt;br /&gt;
ve create script for RH9 (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian30 &lt;br /&gt;
ve create script for debian 3.0 (Woody) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian31 &lt;br /&gt;
ve create script for debian 3.1 (Sarge) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian40 &lt;br /&gt;
ve create script for debian 4.0 (Etch) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakefedora, vemakefedora2, vemakefedora4, vemakefedora5, vemakefedora6, vemakefedora7&lt;br /&gt;
ve create script for fedora core 1, 2, 4, 5, 6, 7 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakecentos3, vemakecentos4&lt;br /&gt;
ve create script for centos 3, 4 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakesuse, vemakesuse93, vemakesuse100&lt;br /&gt;
ve create script for suse 9.2, 9.3, 10.0 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakeubuntu5, vemakeubuntu606, vemakeubuntu610, vemakeubuntu704&lt;br /&gt;
ve create script for ubuntu 5.10, 6.06, 6.10, 7.04 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
== vemove ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemove &amp;lt;veid&amp;gt; &amp;lt;target_ip&amp;gt; &amp;lt;/vz/private/123&amp;gt;&lt;br /&gt;
this script simplifies the old way of moving ve’s from one system to another - in short moving a ve to or from a virt running virtuozzo &amp;lt; 2.6.x&lt;br /&gt;
It’s the equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;tar cfpP - &amp;lt;veid&amp;gt; --ignore-failed-read | (ssh -2 -c arcfour &amp;lt;target_ip&amp;gt; &amp;quot;split - -b 1024m &amp;lt;/vz/private/123&amp;gt;.tar&amp;quot; )&amp;lt;/tt&amp;gt;&lt;br /&gt;
This should only be used if migrate/vzmigrate can’t be used.&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog ==&lt;br /&gt;
 vim.watchdog &lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu. Works on virtuozzo versions 2.5.x&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog2 ==&lt;br /&gt;
 vim.watchdog2&lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu.&lt;br /&gt;
Works on virtuozzo versions 2.6.x.&lt;br /&gt;
&lt;br /&gt;
== vzmigrate ==&lt;br /&gt;
 vzmigrate &amp;lt;target_ip&amp;gt; -r no &amp;lt;veid&amp;gt;:[dst veid]:[dst /vzX/private/veid]:[dst /vzX/root/veid]&lt;br /&gt;
(this is the raw command “wrapped” by migrate/migrateonline) this will seamlessly move a ve from one host to another. The ve will run for the duration of the migration till the very end when it’s shut down, ip moved and started up on the target system. The filesystem on the src will remain. This should be watched – occasionally the move will timeout and leave the system shut down. If target private and root aren’t specified it just puts it in /vz. Only works when both systems are running virtuozzo 2.6.x&lt;br /&gt;
&lt;br /&gt;
== vztrafdump.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions &amp;lt;= 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== vztrafdump2.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump2.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== addtun ==&lt;br /&gt;
 addtun &amp;lt;veid&amp;gt;&lt;br /&gt;
Adds a tun device to a ve.&lt;br /&gt;
&lt;br /&gt;
== bwcap ==&lt;br /&gt;
 bwcap &amp;lt;veid&amp;gt; &amp;lt;kbps&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;bwcap 1234 512&amp;lt;/tt&amp;gt;&lt;br /&gt;
Caps a VE’s bandwidth to the amount given&lt;br /&gt;
&lt;br /&gt;
== setdisk ==&lt;br /&gt;
 setdisk &amp;lt;veid&amp;gt; &amp;lt;diskspace in GB&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;setdisk 1234 5&amp;lt;/tt&amp;gt;&lt;br /&gt;
Gives a VE a given amount of disk space&lt;br /&gt;
&lt;br /&gt;
== vdf ==&lt;br /&gt;
 vdf &amp;lt;veid&amp;gt; &lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; df -h&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vdff ==&lt;br /&gt;
 vdff&lt;br /&gt;
runs a (condensed) vdf for all ve’s in your pwd (must be run from /vzN/private)&lt;br /&gt;
&lt;br /&gt;
== mvbackups ==&lt;br /&gt;
 mvbackups &amp;lt;veid&amp;gt; &amp;lt;target_machine&amp;gt; (virt1) &amp;lt;target_dir&amp;gt; (vz1)&lt;br /&gt;
moves backups from one location to another on the backup server, and provides you with option to remove entries from current backup.config, and simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== checkquota ==&lt;br /&gt;
 checkquota&lt;br /&gt;
for all the ve’s in the cwd (run from /vz/private, /vz1/private, etc) reports what vz quota says they’re using and what the actual usage is (as reported by du)&lt;br /&gt;
&lt;br /&gt;
== clearquota ==&lt;br /&gt;
 clearquota &amp;lt;veid&amp;gt;&lt;br /&gt;
Recalculates a ve’s quota, prints out the usage before and after. The equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;vdf &amp;lt;veid&amp;gt;; v stop &amp;lt;veid&amp;gt;; vzquota drop &amp;lt;veid&amp;gt;; v start &amp;lt;veid&amp;gt;; vdf &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs&lt;br /&gt;
Sometimes the servers have a large number of processes stuck in the D state- this script shows (every 3 secs) which VE’s have D procs, which procs&lt;br /&gt;
are stuck, and a running average of the top “offenders”&lt;br /&gt;
&lt;br /&gt;
== vzstat ==&lt;br /&gt;
 vzstat&lt;br /&gt;
sort of like top for VZ. Sort VEs by CPU usage by pressing the &#039;o&#039; and then &#039;c&#039; keys&lt;br /&gt;
&lt;br /&gt;
== stopvirt ==&lt;br /&gt;
 stopvirt&lt;br /&gt;
will stop VEs as fast as it can, 6 at a time. May not exit when complete so you should watch [[#vzstat|vzstat]] in another window.&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2216</id>
		<title>VPS Management</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=VPS_Management&amp;diff=2216"/>
		<updated>2020-03-24T00:53:29Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Notes on a new FBSD 10.x VPS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Common Problems =&lt;br /&gt;
== Login to any machine without a password ==&lt;br /&gt;
&lt;br /&gt;
This is possible via the use of ssh keys. The process is thus:&lt;br /&gt;
&lt;br /&gt;
1. place the public key for your user (root@mail) in the /root/.ssh/authorized_keys file on the server you wish to login to&lt;br /&gt;
 cat /root/.ssh/id_dsa.pub&lt;br /&gt;
(paste that into authorized_keys on the target server). If the file doesn&#039;t exist, create it.&lt;br /&gt;
&lt;br /&gt;
2. enable root login (usually only applies to FreeBSD). Edit the /etc/ssh/sshd_config on the target server and change:&lt;br /&gt;
&amp;lt;tt&amp;gt;#PermitRootLogin no&amp;lt;/tt&amp;gt;&lt;br /&gt;
to&lt;br /&gt;
&amp;lt;tt&amp;gt;PermitRootLogin yes&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Restart the sshd on the target machine. First, find the sshd process: &lt;br /&gt;
 jailps &amp;lt;hostname&amp;gt; | grep sshd &lt;br /&gt;
or &lt;br /&gt;
 vp &amp;lt;VEID&amp;gt; | grep sshd&lt;br /&gt;
&lt;br /&gt;
Look for the process resembling:&lt;br /&gt;
 root     17296  0.0  0.0  5280 1036 ?        Ss    2011   4:27 /usr/sbin/sshd &lt;br /&gt;
(this is the sshd)&lt;br /&gt;
&lt;br /&gt;
Not:&lt;br /&gt;
 root      6270  0.5  0.0  6808 2536 ?        Ss   14:33   0:00 sshd: root [priv]&lt;br /&gt;
(this is an sshd child- someone already ssh&#039;d in as root)&lt;br /&gt;
&lt;br /&gt;
Restart the sshd: &lt;br /&gt;
 kill -1 &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex:&lt;br /&gt;
 kill -1 17296&lt;br /&gt;
&lt;br /&gt;
You may now ssh in.&lt;br /&gt;
&lt;br /&gt;
Once you&#039;re done, IF you enabled root login, you should repeat steps 2 and 3 to disable root logins.&lt;br /&gt;
&lt;br /&gt;
== Letting someone in who has locked themselves out (killed sshd, lost pwd) ==&lt;br /&gt;
&lt;br /&gt;
There are two ways people frequently lock themselves out - either they forget a password, or they kill off sshd somehow.&lt;br /&gt;
&lt;br /&gt;
These are actually both fairly easy to solve.  First, let&#039;s say someone kills off their sshd, or somehow mangles /etc/ssh/sshd_config such that it no longer lets them in.&lt;br /&gt;
&lt;br /&gt;
Their email may be very short, or it may have all sorts of details about how you should fix sshd_config to let them in ... just ignore all of this. They can fix their own mangled sshd.  Fixing this is very simple.  First, edit the /etc/inetd.conf on their system and uncomment the telnet line:&lt;br /&gt;
&lt;br /&gt;
 telnet stream  tcp     nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
 #telnet stream  tcp6    nowait  root    /usr/libexec/telnetd    telnetd&lt;br /&gt;
&lt;br /&gt;
(just leave the tcp6 version of telnet commented)&lt;br /&gt;
&lt;br /&gt;
Then, use jailps to list the processes on their system, and find their inetd process.  Then simply:&lt;br /&gt;
&lt;br /&gt;
 kill -HUP (pid)&lt;br /&gt;
&lt;br /&gt;
where (pid) is the PID of their inetd process.  Now they have telnet running on their system and they can log in and do whatever they need to do.&lt;br /&gt;
&lt;br /&gt;
The only complications that could occur are:&lt;br /&gt;
&lt;br /&gt;
a) their firewall config on our firewall has port 23 blocked, in which case you will need to open that - will be covered in a different lesson.&lt;br /&gt;
&lt;br /&gt;
b) they are not running inetd, so you can&#039;t HUP it.  If this happens, edit their /etc/rc.conf, add the inetd_enable=&amp;quot;YES&amp;quot; line, and then kill&lt;br /&gt;
their jail with /tmp/jailkill.pl - then restart their jail with the jail line from their quad/safe file.  Easy.&lt;br /&gt;
&lt;br /&gt;
If they have forgotten a password,&lt;br /&gt;
&lt;br /&gt;
On 6.x+ you can reset their password with:&lt;br /&gt;
 jexec &amp;lt;jailID from jls&amp;gt; passwd root&lt;br /&gt;
&lt;br /&gt;
Note: the default password for 6.x jails is 8ico2987, for 4.x it is p455agfa&lt;br /&gt;
&lt;br /&gt;
On 4.x, you need to cd to their etc directory&lt;br /&gt;
... for instance:&lt;br /&gt;
&lt;br /&gt;
 cd /mnt/data2/198.78.65.136-col00261-DIR/etc&lt;br /&gt;
&lt;br /&gt;
and run:&lt;br /&gt;
&lt;br /&gt;
 vipw -d .&lt;br /&gt;
&lt;br /&gt;
Then paste in these two lines (there’s a paste with these):&lt;br /&gt;
&lt;br /&gt;
 root:$1$krszPxhk$xkCepSnz3mIikT3vCtJCt0:0:0::0:0:Charlie &amp;amp;:/root:/bin/csh&lt;br /&gt;
 user:$1$Mx9p5Npk$QdMU6c8YQqp2FW2M3irEh/:1001:1001::0:0:User &amp;amp;:/home/user:/bin/sh&lt;br /&gt;
&lt;br /&gt;
overwriting the lines they already have for &amp;quot;user&amp;quot; and &amp;quot;root&amp;quot; - then just tell them that both user and root have been reset to the default password of p455agfa.&lt;br /&gt;
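&lt;br /&gt;
The hashes in those lines are MD5-crypt ($1$salt$hash). If a fresh hash is ever needed for a different default password, one way to generate it is with openssl (the password and salt below are illustrative, not our real defaults):&lt;br /&gt;

```shell
# Generate an MD5-crypt hash in the same $1$... format as the password
# lines above (example password and salt, not the real ones):
hash=$(openssl passwd -1 -salt mysalt secret123)
echo "$hash"
```
&lt;br /&gt;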
&lt;br /&gt;
For linux, just passwd inside shell or &lt;br /&gt;
 vzctl set &amp;lt;veid&amp;gt; --userpasswd root:p455agfa --save&lt;br /&gt;
&lt;br /&gt;
Starting in 2009 we began giving out randomized passwords as the default password for FreeBSD and Linux. That password is stored with each system in Mgmt. In the event of a reset, you should look for and reset the password to that stored password, then refer the customer to the original password from their welcome email- this way we don’t have to send the password again via email (in clear text).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== sendmail can’t be contacted from ext ip (only locally) ==&lt;br /&gt;
&lt;br /&gt;
By default redhat puts this line in sendmail.mc:&lt;br /&gt;
&lt;br /&gt;
 DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
which makes it only answer on localhost.  Comment it out like:&lt;br /&gt;
&lt;br /&gt;
 dnl DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA&#039;)&lt;br /&gt;
&lt;br /&gt;
and then rebuild sendmail.cf with:&lt;br /&gt;
&lt;br /&gt;
 m4 /etc/mail/sendmail.mc &amp;gt; /etc/sendmail.cf&lt;br /&gt;
&lt;br /&gt;
== virt doesn’t properly let go of ve’s ip(s) when moved to another system ==&lt;br /&gt;
&lt;br /&gt;
On virtuozzo 2.6 systems, it&#039;s been observed that when moving ips from one virt to another that sometimes the routing table will not get updated to reflect the removal of the ip addresses.&lt;br /&gt;
&lt;br /&gt;
A recent example was a customer that was moving to a new ve on a new virt and the ip addresses were traded between the two ve&#039;s.  After the trade the two systems were not able to talk to each other.  When looking at the routing table for the old system all the ip addresses were still in the routing table as being local, like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;netstat -rn | grep 69.55.225.149&lt;br /&gt;
69.55.225.149   0.0.0.0         255.255.255.255 UH       40 0          0 venet0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This was preventing traffic to the other system from being routed properly.&lt;br /&gt;
The solution is to manually delete the route:&lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.225.149 gw 0.0.0.0&lt;br /&gt;
&lt;br /&gt;
Supposedly, this was fixed in 2.6.1&lt;br /&gt;
&lt;br /&gt;
== sshd on FreeBSD 6.2 segfaults ==&lt;br /&gt;
&lt;br /&gt;
First, try to reinstall ssh:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/secure&lt;br /&gt;
cd lib/libssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.sbin/sshd&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&lt;br /&gt;
cd ../../usr.bin/ssh&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Failing that, find the library that’s messed up:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;ldd /usr/sbin/sshd&lt;br /&gt;
         libssh.so.3 =&amp;gt; /usr/lib/libssh.so.3 (0x280a3000) &lt;br /&gt;
         libutil.so.5 =&amp;gt; /lib/libutil.so.5 (0x280d8000) &lt;br /&gt;
         libz.so.3 =&amp;gt; /lib/libz.so.3 (0x280e4000) &lt;br /&gt;
         libwrap.so.4 =&amp;gt; /usr/lib/libwrap.so.4 (0x280f5000) &lt;br /&gt;
         libpam.so.3 =&amp;gt; /usr/lib/libpam.so.3 (0x280fc000) &lt;br /&gt;
         libbsm.so.1 =&amp;gt; /usr/lib/libbsm.so.1 (0x28103000) &lt;br /&gt;
         libgssapi.so.8 =&amp;gt; /usr/lib/libgssapi.so.8 (0x28112000) &lt;br /&gt;
         libkrb5.so.8 =&amp;gt; /usr/lib/libkrb5.so.8 (0x28120000) &lt;br /&gt;
         libasn1.so.8 =&amp;gt; /usr/lib/libasn1.so.8 (0x28154000) &lt;br /&gt;
         libcom_err.so.3 =&amp;gt; /usr/lib/libcom_err.so.3 (0x28175000) &lt;br /&gt;
         libroken.so.8 =&amp;gt; /usr/lib/libroken.so.8 (0x28177000) &lt;br /&gt;
         libcrypto.so.4 =&amp;gt; /lib/libcrypto.so.4 (0x28183000) &lt;br /&gt;
         libcrypt.so.3 =&amp;gt; /lib/libcrypt.so.3 (0x28276000) &lt;br /&gt;
         libc.so.6 =&amp;gt; /lib/libc.so.6 (0x2828e000) &lt;br /&gt;
         libmd.so.3 =&amp;gt; /lib/libmd.so.3 (0x28373000)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
md5 each library and compare the checksums against the same files on other jail hosts, or on jails running on this host, to find the corrupt one.&lt;br /&gt;
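For example (the comparison host here is hypothetical - use any other jail host running the same FreeBSD version):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;md5 /lib/libcrypto.so.4&lt;br /&gt;
ssh otherjailhost md5 /lib/libcrypto.so.4&amp;lt;/pre&amp;gt;&lt;br /&gt;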
&lt;br /&gt;
for libcrypto reinstall:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/crypto&lt;br /&gt;
make depend &amp;amp;&amp;amp; make all install&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Moving a server to a different account in the management system ==&lt;br /&gt;
&lt;br /&gt;
This is a tough one. You gotta touch a bunch of tables in the mysql database.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  systems&lt;br /&gt;
  subs&lt;br /&gt;
  ipmap&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
basically, get the sysid and go off that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
update systems set cid=&#039;newcid&#039; where sysid=xxxxx;&lt;br /&gt;
update subs set cid=&#039;newcid&#039; where sysid=xxxxx;&lt;br /&gt;
update ipmap set cid=&#039;newcid&#039; where sysid=xxxxx;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You then need to rename the server disk space to change the old ID to the new one.&lt;br /&gt;
&lt;br /&gt;
For a jail -&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stopjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;     69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;&lt;br /&gt;
mv 69.55.2XX.XXX-col0&amp;lt;oldid&amp;gt;-DIR 69.55.2XX.XXX-col0&amp;lt;newid&amp;gt;-DIR&lt;br /&gt;
vi /usr/local/jail/bin/quad1       [Update the ID in the quad file]&lt;br /&gt;
startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a virt -&lt;br /&gt;
You will have to &lt;br /&gt;
* stop the virt &lt;br /&gt;
* rename the directories &lt;br /&gt;
* modify the config file /vzconf/&amp;lt;veid&amp;gt;.conf &lt;br /&gt;
* rename the virt conf file &lt;br /&gt;
* restart the new virt.&lt;br /&gt;
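Roughly, the sequence looks like this (the VEIDs and the private-area path here are illustrative - check the actual paths on the host):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vzctl stop &amp;lt;oldveid&amp;gt;&lt;br /&gt;
mv /vz/private/&amp;lt;oldveid&amp;gt; /vz/private/&amp;lt;newveid&amp;gt;&lt;br /&gt;
vi /vzconf/&amp;lt;oldveid&amp;gt;.conf        [update VE_PRIVATE/VE_ROOT to the new paths]&lt;br /&gt;
mv /vzconf/&amp;lt;oldveid&amp;gt;.conf /vzconf/&amp;lt;newveid&amp;gt;.conf&lt;br /&gt;
vzctl start &amp;lt;newveid&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;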
&lt;br /&gt;
That ought to do it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Re-enable a customer account in the management system ==&lt;br /&gt;
&lt;br /&gt;
For example to re-instate col03438, In mysql:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
show tables ;&lt;br /&gt;
select * from customers where cid=&#039;col03438&#039; ;&lt;br /&gt;
update customers set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from  systems where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set cancel_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update systems set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
select * from subs where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set stop_date=NULL where cid=&#039;col03438&#039; ;&lt;br /&gt;
update subs set status=1 where cid=&#039;col03438&#039; ;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ganeti VPS =&lt;br /&gt;
&lt;br /&gt;
This is how to create a VPS on the Lamphost ganeti cluster.&lt;br /&gt;
I have created scripts for creating a FreeBSD VPS and one for&lt;br /&gt;
creating a Linux VPS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To create a linux VPS use&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/new &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
( for example /root/new vps.example.com 30 2 69.55.238.10 centos-8 )&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To create a FreeBSD VPS, we have to do a full install.  The following creates the &lt;br /&gt;
new VPS:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /root/newfreebsd12 &amp;lt;name&amp;gt;  &amp;lt;disk size&amp;gt; &amp;lt;memory&amp;gt; &amp;lt;IP Addr&amp;gt; [&amp;lt;OS&amp;gt;]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will then have to do the install via VNC.&lt;br /&gt;
&lt;br /&gt;
You need to customize the install by adding a &amp;quot;jci&amp;quot; account and a customer&lt;br /&gt;
account (col0XXXX) as well.  &lt;br /&gt;
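One way to add the two accounts non-interactively is with pw (the customer account name here is a placeholder):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
pw useradd jci -m -G wheel&lt;br /&gt;
pw useradd col0XXXX -m -G wheel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;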
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
You will need to install sudo.&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
pkg install sudo    # install sudo package&lt;br /&gt;
visudo              # to add wheel account to sudoers&lt;br /&gt;
vi /etc/group       # add jci and user acct to wheel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
You will also have to set up a console in /etc/ttys.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vi /etc/ttys&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
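The serial console entry typically ends up looking something like this (the exact getty type and terminal may differ):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ttyu0 &amp;quot;/usr/libexec/getty 3wire&amp;quot; vt100 on secure&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;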
&lt;br /&gt;
= FreeBSD 10.1 VPS =&lt;br /&gt;
&lt;br /&gt;
This is information on how to manage the FreeBSD 10.1 VPS servers on Jail5.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 virtmake IP[,IP] &amp;lt;CID&amp;gt; hostname ipfw# email {FSP|FB|FS|FM|FP} [size in GB]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
For now, the last 2 digits of the CID must be unique among CIDs on the system.&lt;br /&gt;
&lt;br /&gt;
Must do a &amp;quot;vm start &amp;lt;CID&amp;gt;&amp;quot; after creating a VPS.&lt;br /&gt;
&lt;br /&gt;
== Starting VPS ==&lt;br /&gt;
To start all vms on the system&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To start a single VPS &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== List running VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm status&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Attach to a user VPS console ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm attach &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To detach from a console, type tilde control-d (~ ctl-d).&lt;br /&gt;
&lt;br /&gt;
== Stop a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Restart a VPS ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm restart &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Modify a VPS ==&lt;br /&gt;
&lt;br /&gt;
To modify a VPS config, stop the VPS&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm stop &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Mount the VPS data file&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm mount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
cd /data/vm/private/&amp;lt;CID&amp;gt;/mount&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Modify the user files as necessary.&lt;br /&gt;
&lt;br /&gt;
Unmount the data file and restart the VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vm umount &amp;lt;CID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
vm start &amp;lt;CID&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Notes on a new FBSD 10.x VPS (jail6) ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#! /bin/sh&lt;br /&gt;
#&lt;br /&gt;
#  new  &amp;lt;colo#&amp;gt;  &amp;lt;disk&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
memory=1024&lt;br /&gt;
&lt;br /&gt;
x=&amp;quot;$1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
disk=&amp;quot;$2&amp;quot;&lt;br /&gt;
if [ &amp;quot;y&amp;quot; = &amp;quot;y$disk&amp;quot; ] ; then&lt;br /&gt;
    disk=60&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
disk2=`expr $disk - 4`&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vm image provision 04dcdf96-d843-11e6-b131-001e67fcc64f col0$x&lt;br /&gt;
# vm configure col0$x&lt;br /&gt;
sed -e &amp;quot;s/memory=256M/memory=${memory}M/&amp;quot; /vms/col01114/col01114.conf &amp;gt;/vms/col0$x/col0$x.conf&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
# vm start col0$x&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
exit&lt;br /&gt;
 &lt;br /&gt;
vm console col0$x&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
# to increase the customer disk&lt;br /&gt;
#&lt;br /&gt;
zfs set quota=${disk}G   zroot/vms/col0$x&lt;br /&gt;
zfs set volsize=${disk}G zroot/vms/col0$x/disk0&lt;br /&gt;
&lt;br /&gt;
##  on the customer VPS&lt;br /&gt;
&lt;br /&gt;
gpart recover vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapoff /dev/vtbd0p3&lt;br /&gt;
gpart delete -i 3 vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
sysctl kern.geom.debugflags=16&lt;br /&gt;
gpart resize -i 2 -a 4k -s $disk2\G vtbd0&lt;br /&gt;
&lt;br /&gt;
growfs /dev/vtbd0p2&lt;br /&gt;
&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
gpart add -t freebsd-swap -a 4k vtbd0&lt;br /&gt;
gpart show vtbd0&lt;br /&gt;
swapon /dev/vtbd0p3&lt;br /&gt;
&lt;br /&gt;
df -h&lt;br /&gt;
reboot&lt;br /&gt;
&lt;br /&gt;
vi /etc/rc.conf&lt;br /&gt;
  hostname&lt;br /&gt;
  IP Addr&lt;br /&gt;
passwd root&lt;br /&gt;
passwd user&lt;br /&gt;
clear history&lt;br /&gt;
  cp /dev/null /var/log/utx.log&lt;br /&gt;
  cp /dev/null ~root/.history&lt;br /&gt;
  history -c&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= FreeBSD Jails =&lt;br /&gt;
&lt;br /&gt;
== Starting jails: Quad/Safe Files ==&lt;br /&gt;
&lt;br /&gt;
FreeBSD customer systems do not start up automatically at boot time.  When one of our freebsd machines boots up, it comes up and does nothing else. To start jails, we put the commands to start each jail into a shell script(s) and run the script(s). Jail startup is something that needs to be actively monitored, which is why we don’t just run the script automatically. More on monitoring later.&lt;br /&gt;
&lt;br /&gt;
NOTE: &amp;gt;=7.x we have moved to 1 quad file: &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. Startups are not done by running each quad, but rather [[#startalljails|startalljails]] which relies on the contents of &amp;lt;tt&amp;gt;quad1&amp;lt;/tt&amp;gt;. The specifics of this are lower in this article. What follows here applies for pre 7.x systems.&lt;br /&gt;
&lt;br /&gt;
There are eight files in &amp;lt;tt&amp;gt;/usr/local/jail/rc.d&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail3# ls /usr/local/jail/rc.d/&lt;br /&gt;
quad1   quad2   quad3   quad4   safe1   safe2   safe3   safe4&lt;br /&gt;
jail3#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That is, four quad files and four safe files.&lt;br /&gt;
&lt;br /&gt;
Each file contains a roughly equal number of system startup blocks (the total number of jails divided by 4).&lt;br /&gt;
 &lt;br /&gt;
The reason for this is that if we made one large script to start up all the systems at boot time, it would take too long - the first system in the script would start up right after system boot, which is great, but the last system might not start for another 20 minutes.&lt;br /&gt;
&lt;br /&gt;
Since there is no way to parallelize this during the startup procedure, we simply open four terminals (in screen window 9) and run each script, one in each terminal. This way they all run simultaneously, and the very last system in each startup script gets started in 1/4th the time it would if there were one large file.&lt;br /&gt;
&lt;br /&gt;
The files are generally organized so that quad/safe 1&amp;amp;2 have only jails from disk 1, and quad/safe 3&amp;amp;4 have jails from disk 2. This helps ensure that only 2 fscks on any disk are going on at once. Further, they are balanced so that all quad/safe’s finish executing around the same time. We do this by making sure each quad/safe has a similar number of jails  and represents a similar number of inodes (see js).&lt;br /&gt;
&lt;br /&gt;
The other, very important reason we do it this way, and this is the reason there are quad files and safe files, is that in the event of a system crash, every single vn-backed filesystem that was mounted at the time of system crash needs to be fsck&#039;d.  However, fsck&#039;ing takes time, so if we shut the system down gracefully, we don&#039;t want to fsck.&lt;br /&gt;
&lt;br /&gt;
Therefore, we have two sets of scripts - the four quad scripts are identical to the four safe scripts except for the fact that the quad scripts contain fsck commands for each filesystem.&lt;br /&gt;
&lt;br /&gt;
So, if you shut a system down gracefully, start four terminals and run safe1 in window one, and safe2 in window 2, and so on.&lt;br /&gt;
 &lt;br /&gt;
If you crash, start four terminals (or go to screen window 9) and run quad1 in window one, and quad2 in window 2, and so on.&lt;br /&gt;
&lt;br /&gt;
Here is a snip of (a 4.x version) quad2 from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
fsck -y /dev/vn16&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#fsck -y /dev/vn28&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
fsck -y /dev/vn22&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#fsck -y /dev/vn15&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, two of the systems specified are commented out - presumably those customers cancelled, or were moved to new servers.&lt;br /&gt;
&lt;br /&gt;
Note that the vnconfig line is the simpler command line, not the longer one that was used when the filesystem was first configured.  All that is done is: vnconfig the filesystem, then fsck it, then mount it. The fourth command is the `jail` command used to start the system - that will be covered later.&lt;br /&gt;
&lt;br /&gt;
Here is the safe2 file from jail17:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig /dev/vn16 /mnt/data2/69.55.228.7-col00820&lt;br /&gt;
mount /dev/vn16c /mnt/data2/69.55.228.7-col00820-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.7-col00820-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.7-col00820-DIR mail1.phimail.com 69.55.228.7 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# moved to data2 col00368&lt;br /&gt;
#vnconfig /dev/vn28 /mnt/data2/69.55.236.132-col00368&lt;br /&gt;
#mount /dev/vn28c /mnt/data2/69.55.236.132-col00368-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.236.132-col00368-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.236.132-col00368-DIR limehouse.org 69.55.236.132 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
vnconfig /dev/vn22 /mnt/data2/69.55.228.13-col01063&lt;br /&gt;
mount /dev/vn22c /mnt/data2/69.55.228.13-col01063-DIR&lt;br /&gt;
chmod 0666 /mnt/data2/69.55.228.13-col01063-DIR/dev/null&lt;br /&gt;
jail /mnt/data2/69.55.228.13-col01063-DIR www.widestream.com.au 69.55.228.13 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
# cancelled col00106&lt;br /&gt;
#vnconfig /dev/vn15 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
#mount /dev/vn15c /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
#chmod 0666 /mnt/data2/69.55.238.5-col00106-DIR/dev/null&lt;br /&gt;
#jail /mnt/data2/69.55.238.5-col00106-DIR mail.azebu.net 69.55.238.5 /bin/sh /etc/rc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, it is exactly the same, but it does not have the fsck lines.&lt;br /&gt;
&lt;br /&gt;
Take a look at the last entry - note that the file is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106&lt;br /&gt;
&lt;br /&gt;
and the mount point is named:&lt;br /&gt;
&lt;br /&gt;
 /mnt/data2/69.55.238.5-col00106-DIR&lt;br /&gt;
&lt;br /&gt;
This is the general format on all the FreeBSD systems.  The file is always named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber&lt;br /&gt;
&lt;br /&gt;
and the directory is named:&lt;br /&gt;
&lt;br /&gt;
 IP-custnumber-DIR&lt;br /&gt;
&lt;br /&gt;
If you run safe when you need a fsck, the mount will fail and jail will fail:&lt;br /&gt;
&lt;br /&gt;
 # mount /dev/vn1c /mnt/data2/jails/65.248.2.131-ns1.kozubik.com-DIR&lt;br /&gt;
 mount: /dev/vn1c: Operation not permitted&lt;br /&gt;
&lt;br /&gt;
No reboot is needed - just run the quad script.&lt;br /&gt;
&lt;br /&gt;
Starting with 6.x jails, we added block delimiters to the quad/safe files, the block looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: nuie.solaris.mu&#039;&lt;br /&gt;
fsck -y /dev/concat/v30v31a&lt;br /&gt;
mount /dev/concat/v30v31a /mnt/data1/69.55.228.218-col01441-DIR&lt;br /&gt;
mount_devfs devfs /mnt/data1/69.55.228.218-col01441-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.228.218-col01441-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.228.218-col01441-DIR nuie.solaris.mu 69.55.228.218 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: nuie.solaris.mu&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These are more than just informative when running quad/safe’s; the echo lines MUST be present for certain tools to work properly. So it’s important that any update to the hostname also be made on the 2 echo lines. For example, if you try to startjail a jail whose hostname is on the jail line but not the echo lines, the command will return with &#039;host not found&#039;.&lt;br /&gt;
&lt;br /&gt;
=== FreeBSD 7.x+ notes ===&lt;br /&gt;
&lt;br /&gt;
Starting with the release of FreeBSD 7.x, we are doing jail startups in a slightly different way. First, there is only 1 file: &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/quad1&amp;lt;/tt&amp;gt;&lt;br /&gt;
There are no other quads or corresponding safe files. The reason for this is twofold: 1. we can pass -C to fsck, which tells it to skip the fsck if the fs is clean (no more need for safe files); 2. we have a new startup script which can be launched multiple times, running in parallel to start jails, where quad1 is the master jail file. &lt;br /&gt;
Quad1 could still be run as a shell script, but it would take a very long time to run completely, so it’s not advisable; alternatively, break it down into smaller chunks (like quad1, quad2, quad3, etc.)&lt;br /&gt;
&lt;br /&gt;
Here is a snip of (a 7.x version) quad1 from jail2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;## begin ##: projects.tw.com&#039;&lt;br /&gt;
mdconfig -a -t vnode -f /mnt/data1/69.55.230.46-col01213 -u 50&lt;br /&gt;
fsck -Cy /dev/md50c&lt;br /&gt;
mount /dev/md50c /mnt/data1/69.55.230.46-col01213-DIR&lt;br /&gt;
mount -t devfs devfs /mnt/data1/69.55.230.46-col01213-DIR/dev&lt;br /&gt;
devfs -m /mnt/data1/69.55.230.46-col01213-DIR/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/69.55.230.46-col01213-DIR projects.tw.com 69.55.230.46 /bin/sh /etc/rc&lt;br /&gt;
echo &#039;## end ##: projects.tw.com&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cancelled jails are no longer commented out and stored in quad1, rather they’re moved to &amp;lt;tt&amp;gt;/usr/local/jail/rc.d/deprecated&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
To start these jails, start the 4 ssh sessions as you would for a normal crash, and then instead of running quad1-4, run startalljails in each window. IMPORTANT - before running startalljails, make sure you have run preboot once, as it clears out all the lockfiles and enables startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== Problems with the quad/safe files ==&lt;br /&gt;
&lt;br /&gt;
When you run the quad/safe files, there are two problems that can occur - either a particular system will hang during initialization, OR a system will spit out output to the screen, impeding your ability to do anything.  Or both.&lt;br /&gt;
&lt;br /&gt;
First off, when you start a jail, you see output like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;Skipping disk checks ...&lt;br /&gt;
adjkerntz[25285]: sysctl(put_wallclock): Operation not permitted&lt;br /&gt;
Doing initial network setup:.&lt;br /&gt;
ifconfig: ioctl (SIOCDIFADDR): permission denied&lt;br /&gt;
lo0: flags=8049&amp;lt;UP,LOOPBACK,RUNNING,MULTICAST&amp;gt; mtu 16384&lt;br /&gt;
Additional routing options: TCP keepalive=YESsysctl:&lt;br /&gt;
net.inet.tcp.always_keepalive: Operation not permitted.&lt;br /&gt;
Routing daemons:.&lt;br /&gt;
Additional daemons: syslogd.&lt;br /&gt;
Doing additional network setup:.&lt;br /&gt;
Starting final network daemons:.&lt;br /&gt;
ELF ldconfig path: /usr/lib /usr/lib/compat /usr/X11R6/lib /usr/local/lib&lt;br /&gt;
a.out ldconfig path: /usr/lib/aout /usr/lib/compat/aout /usr/X11R6/lib/aout&lt;br /&gt;
Starting standard daemons: inetd cron sshd sendmail sendmail-clientmqueue.&lt;br /&gt;
Initial rc.i386 initialization:.&lt;br /&gt;
Configuring syscons: blanktime.&lt;br /&gt;
Additional ABI support:.&lt;br /&gt;
Local package initialization:.&lt;br /&gt;
Additional TCP options:.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, let&#039;s look at this line, near the end:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization:.&lt;br /&gt;
&lt;br /&gt;
This is where a list of daemons that are set to start at boot time will show up.  You might see something like:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache sendmail sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
Or something like this:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: postgres postfix apache&lt;br /&gt;
&lt;br /&gt;
The problem is that many systems (about 4-5 per machine) will hang on that line.  Basically it will get part of the way through the daemons to be started:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache&lt;br /&gt;
&lt;br /&gt;
and will just sit there.  Forever.&lt;br /&gt;
&lt;br /&gt;
Fortunately, pressing ctrl-c will break out of it.  Not only will it break out of it, but it will also continue on that same line and start the other daemons:&lt;br /&gt;
&lt;br /&gt;
 Local package initialization: mysqld apache ^c sendmail-clientmqueue&lt;br /&gt;
&lt;br /&gt;
and then continue on to finish the startup, and then move to the next system to be started.&lt;br /&gt;
&lt;br /&gt;
So what does this mean?  It means that if a machine crashes, and you start four screen-windows to run four quads or four safes, you need to periodically cycle between them and see if any systems are stuck at that point, causing their quad/safe file to hang.  A good rule of thumb is: if you see a system at that point in the startup, give it another 100 seconds - if it is still at the exact same spot, hit ctrl-c. It’s also a good idea to go back into the quad file (just before the first command in the jail startup block) and note that this jail tends to need a control-c or more time, as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;### NOTE ### slow sendmail&#039;&lt;br /&gt;
echo &#039;### NOTE ###: ^C @ Starting sendmail.&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NEVER&#039;&#039;&#039; hit ctrl-c repeatedly if you don&#039;t get an immediate response - that will cause the following jail’s startup commands to be aborted.&lt;br /&gt;
&lt;br /&gt;
A second problem that can occur is that a jail - maybe the first one in that particular quad/safe, maybe the last one, or maybe one in the middle, will start spitting out status or error messages from one of its init scripts.  This is not a problem - basically, hit enter a few times and see if you get a prompt - if you do get a prompt, that means that the quad/safe script has already completed.  Therefore it is safe to log out (and log out of the user that you su&#039;d from) and then log back in (if necessary).&lt;br /&gt;
&lt;br /&gt;
The tricky thing is, if a system in the middle starts flooding with messages, and you hit enter a few times and don&#039;t get a prompt.  Are you not getting a prompt because some subsequent system is hanging at initialization, as discussed above?  Or are you not getting a prompt because that quad file is currently running an fsck?  Usually you can tell by scrolling back in screen’s history to see what it was doing before you started getting the messages.&lt;br /&gt;
&lt;br /&gt;
If you don’t get clues from history, you have to use your judgement - instead of giving it 100 seconds to respond, perhaps give it 2-3 mins ... if you still get no response (no prompt) when you hit enter, hit ctrl-c.  However, be aware that you might still be hitting ctrl-c in the middle of an fsck.  This means you will get an error like &amp;quot;filesystem still marked dirty&amp;quot; and then the vnconfig for it will fail and so will the jail command, and the next system in the quad file will then start starting up.&lt;br /&gt;
&lt;br /&gt;
If this happens, just wait until all the quad files have finished, and then start that system manually.&lt;br /&gt;
&lt;br /&gt;
If things really get weird - a screen flooded with errors, no prompt, and ctrl-c does nothing - then eventually (give it ten mins or so) just kill that window with ctrl-p, then k, then log in again, manually check which systems are now running and which aren&#039;t, and manually start up any that are not.&lt;br /&gt;
&lt;br /&gt;
Don&#039;t EVER risk running a particular quad/safe file a second time.&lt;br /&gt;
If the quad/safe script gets executed twice, reboot the machine immediately.&lt;br /&gt;
&lt;br /&gt;
So, for all the above reasons, anytime a machine crashes and you run all the quads or all the safes, &#039;&#039;&#039;always&#039;&#039;&#039; check every jail afterwards to make sure it is running - even if you have no hangs or complications at all.&lt;br /&gt;
Run this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailpsall|jailpsall]]&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Note: [[#postboot|postboot]] also populates ipfw counts, so it &#039;&#039;&#039;should not be run multiple times&#039;&#039;&#039;,  use &amp;lt;tt&amp;gt;jailpsall&amp;lt;/tt&amp;gt; for subsequent extensive ps’ing&lt;br /&gt;
&lt;br /&gt;
And make sure they all show as running.  If one does not show as running, check its /etc/rc.conf file to see if maybe it is using a different hostname before starting it manually.&lt;br /&gt;
&lt;br /&gt;
One thing we have implemented to alleviate these startup hangs and noisy jails, is to put jail start blocks that are slow or hangy at the bottom of the safe/quad file. Further, for each bad jail we note in each quad/safe just before the start block something like:&lt;br /&gt;
&lt;br /&gt;
 echo &#039;### NOTE ### ^C @ Local package initialization: pgsqlmesg: /dev/ttyp1: Operation not permitted&#039;&lt;br /&gt;
&lt;br /&gt;
That way we’ll be prepared to ^C when we see that message appear during the quad/safe startup process. If you observe a new, undocumented hang, &#039;&#039;&#039;after&#039;&#039;&#039; the quad/safe has finished, place a line similar to the above in the quad file, move the jail start block to the end of the file, then run [[#buildsafe|buildsafe]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 4.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which partition, IP to put it on, find an unused vn, and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vnX hostname shorthost quad/safe# ipfw# email [gb disk, default 4]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail14# /tmp/jailmake 69.55.226.152 /mnt/data2/69.55.226.152-col00182 vn23 negev.cerastes.org negev 4 182 cerastes@cerastes.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Making new customer jail 6.x ==&lt;br /&gt;
&lt;br /&gt;
1. run [[#js|js]] to figure out which gvinum vols are free (and which mnt (disk) each gvinum goes with/to), IP to put it on and choose which quad it should go in&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. copy over dir and ip to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;tt&amp;gt;jailmake IP filepath vN[,vN] hostname shorthost quad/safe# ipfw# email&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 jail19# jailmake 69.55.236.17 /mnt/data1/69.55.236.17-col01574 v10 uncle-leo.commadev.com uncle-leo 1 1574 lee@commadev.com,lee@gmail.com&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Changing an ip for freebsd VPS ==&lt;br /&gt;
&lt;br /&gt;
*stopjail (hostname)&lt;br /&gt;
*on systems using vnfile or mdfile: rename vnfile/mdfile, using new ip&lt;br /&gt;
*rename dir using new ip&lt;br /&gt;
*Edit quad (make sure to update all lines: directory, vn/mdfile)&lt;br /&gt;
*buildsafe (not necessary on systems having only quad1)&lt;br /&gt;
*if you&#039;re adding an ip not already on the host machine: &amp;lt;tt&amp;gt;ipadd x.x.x.x&amp;lt;/tt&amp;gt;&lt;br /&gt;
*startjail (hostname)&lt;br /&gt;
*if backups: &amp;lt;tt&amp;gt;mvbackups&amp;lt;/tt&amp;gt; ...&lt;br /&gt;
*Edit dir/ip in mgmt&lt;br /&gt;
*if any rules, update the firewall rules to use the new ip (be careful to make sure there aren&#039;t multiple rules per ipfw# - search by colo#)&lt;br /&gt;
*(if customer asks and has domains) update their domains on ns1c&lt;br /&gt;
*optional: update ipfw counters on the host server&lt;br /&gt;
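&lt;br /&gt;
Putting the steps together for a hypothetical 7.x jail (the hostname, IPs and paths here are made up):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
stopjail www.example.com&lt;br /&gt;
mv /mnt/data1/69.55.2XX.XXX-col0YYYY /mnt/data1/69.55.2ZZ.ZZZ-col0YYYY&lt;br /&gt;
mv /mnt/data1/69.55.2XX.XXX-col0YYYY-DIR /mnt/data1/69.55.2ZZ.ZZZ-col0YYYY-DIR&lt;br /&gt;
vi /usr/local/jail/rc.d/quad1      [update the mdfile, -DIR and jail lines to the new ip]&lt;br /&gt;
ipadd 69.55.2ZZ.ZZZ&lt;br /&gt;
startjail www.example.com&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;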
&lt;br /&gt;
&lt;br /&gt;
== Rename a gconcat vol ==&lt;br /&gt;
&lt;br /&gt;
*stop jail, unmount everything&lt;br /&gt;
*gconcat stop vol&lt;br /&gt;
*gconcat clear /dev/gvinum/vN&lt;br /&gt;
*gconcat clear /dev/gvinum/vN+1&lt;br /&gt;
*gconcat label -v newname /dev/gvinum/vN /dev/gvinum/vN+1&lt;br /&gt;
*bsdlabel /dev/concat/newname&lt;br /&gt;
(make sure no partition is fstype unused; if one is, change it to: 4.2BSD)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Remaking a VPS (on same jail) ==&lt;br /&gt;
&lt;br /&gt;
# take the jail down (stopjail)&lt;br /&gt;
# make note of host, vn/gvin/md, ip&lt;br /&gt;
# remove from quad/safe&lt;br /&gt;
# move vnfile (rename)&lt;br /&gt;
# remove dir (jailmake will make a new one)&lt;br /&gt;
# remove ipfw counts&lt;br /&gt;
# jailmake&lt;br /&gt;
# update backup script&lt;br /&gt;
# update db with new dir/ip (use the “move” link, mark old as stopped today)&lt;br /&gt;
# update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (FreeBSD) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all its jails back up as soon as possible. Note the time- you&#039;ll need it to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there are any kernel error messages. Copy any messages you see (or just a sample of repeating messages) into the notes section of the crash log. If there are no messages, the machine may just be really busy- wait a bit (5-10 min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s just very busy. Note: if you see messages about swap space being exhausted, the server is out of memory; however, it may recover briefly enough for you to get a jtop in to see who&#039;s launched a ton of procs (the most likely cause) and then issue a quick jailkill to get it back under control.&lt;br /&gt;
&lt;br /&gt;
If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. if you can’t ssh into the DRAC card as root, using the standard root pass, and issue&lt;br /&gt;
 racadm serveraction hardreset&lt;br /&gt;
on it), you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via the console:&lt;br /&gt;
 tip jailX&lt;br /&gt;
immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts, look out for the RAID card output- does everything look healthy? The output may be scrambled; look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple of times during boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly, you won’t get any console output; 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts an (extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
IMPORTANT NOTE: on some older FreeBSD systems, there will be no output to the video (KVM) console as the machine boots up. The console output is redirected to the serial port ... so if a jail crashes and you attach a KVM, the output during the bootup procedure will not be shown on the screen. However, when the bootup is done, you will get a login prompt on the screen and will be able to log in as normal. The serial console redirect lives in &amp;lt;tt&amp;gt;/boot/loader.conf&amp;lt;/tt&amp;gt;, so comment that line out if you want to catch output on the KVM.&lt;br /&gt;
On newer systems it sends most output to both locations. &lt;br /&gt;
&lt;br /&gt;
=== Assess the health of the server ===&lt;br /&gt;
Once the server boots up fully, you should be able to ssh in. Look around- make sure all the mounts are there and reporting the correct size/usage (i.e. /mnt/data1 /mnt/data2 /mnt/data3 - look in /etc/fstab to determine which mount points should be there), check to see if RAID mirrors are healthy. See [[RAID_Cards#Common_CLI_commands_.28megacli.29|megacli]], [[#aaccheck|aaccheck]]&lt;br /&gt;
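One quick sanity check is to compare the mount points /etc/fstab expects against what&#039;s actually mounted. A minimal sketch- the fstab contents and device names below are invented examples, not taken from a real host:&lt;br /&gt;

```shell
#!/bin/sh
# List the /mnt/data* mount points a (sample) fstab says should exist;
# compare this list against df output by hand.
fstab=/tmp/fstab.example
cat > "$fstab" <<'EOF'
/dev/mfid0s1a  /           ufs  rw  1  1
/dev/mfid0s1d  /mnt/data1  ufs  rw  2  2
/dev/mfid0s1e  /mnt/data2  ufs  rw  2  2
EOF
awk '$2 ~ /^\/mnt\/data/ { print $2 }' "$fstab"
```

Point the awk line at the real /etc/fstab on the server to get the actual list.&lt;br /&gt;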
&lt;br /&gt;
Before you start the jails, you need to run [[#preboot|preboot]]. This will do some assurance checks to make sure things are prepped to start the jails. Any issues that come out of preboot need to be addressed before starting jails.&lt;br /&gt;
&lt;br /&gt;
=== Start jails ===&lt;br /&gt;
[[#Starting_jails:_Quad.2FSafe_Files|More on starting jails]]&lt;br /&gt;
Customer jails (the VPSs) do not start up automatically at boot time. When a FreeBSD machine boots up, it boots up and does nothing else. To start jails, we put the commands to start each jail into a shell script (or scripts) and run them. Jail startup is something that needs to be actively monitored, which is why we don’t just run the script automatically. &lt;br /&gt;
&lt;br /&gt;
In order to start jails, we run the quad files: quad1 quad2 quad3 and quad4 (on new systems there is only quad1). If the machine was cleanly rebooted (which wouldn&#039;t be the case if this was a crash), you may run the safe files (safe1 safe2 safe3 safe4) in lieu of the quads. &lt;br /&gt;
&lt;br /&gt;
Open up 4 logins to the server (use the windows in [[Screen#Screen_Organization|a9]]).&lt;br /&gt;
In each of the 4 windows you will do one of the following:&lt;br /&gt;
&lt;br /&gt;
If there is a [[#startalljails|startalljails]] script (and only quad1), run that command in each of the 4 windows. It will parse the quad1 file and start each jail. Follow the instructions [[#Problems_with_the_quad.2Fsafe_files|here]] for monitoring startup. Note that you can be a little more lenient with jails that take a while to start- startalljails will work around the slow jails and start the rest. As long as there aren&#039;t 4 jails &amp;quot;hung&amp;quot; during startup, the rest will get started eventually.&lt;br /&gt;
	-or-&lt;br /&gt;
If there is no startalljails script, there will be multiple quad files. In each of the 4 windows, start one of the quads, i.e. quad1 in window 1, quad2 in window 2 and so on. DO NOT start any quad twice- it will crash the server. If you accidentally do this, just jailkill all the jails in that quad and run the quad again. Follow the instructions [[#Problems_with_the_quad.2Fsafe_files|here]] for monitoring quad startup.&lt;br /&gt;
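As a rough illustration, the sketch below pulls the jail hostnames out of a quad file, assuming each jail&#039;s block is bracketed by ## begin/## end marker lines as described later in this section (the sample quad contents are invented, and the real quad format may differ):&lt;br /&gt;

```shell
#!/bin/sh
# Print the hostname from every "## begin <hostname> ##" marker in a
# sample quad file (contents invented for illustration).
quad=/tmp/quad1.example
cat > "$quad" <<'EOF'
## begin uncle-leo ##
mdconfig -a -t vnode -f /mnt/data1/69.55.236.17-col01574 -u 10
## end uncle-leo ##
## begin web01 ##
mdconfig -a -t vnode -f /mnt/data2/69.55.236.18-col01600 -u 11
## end web01 ##
EOF
awk '/^## begin / { print $3 }' "$quad"
```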
&lt;br /&gt;
Note the time the last jail boots- this is what you will enter in the crash log.&lt;br /&gt;
&lt;br /&gt;
Save the crash log.&lt;br /&gt;
&lt;br /&gt;
=== Check to make sure all jails have started ===&lt;br /&gt;
There&#039;s a simple script which will make sure all jails have started, and enter the ipfw counter rules: [[#postboot|postboot]] &lt;br /&gt;
Run postboot, which will do a jailps on each jail it finds (excluding commented out jails) in the quad file(s). We&#039;re looking for 2 things:&lt;br /&gt;
# systems spawning out of control or running too many procs&lt;br /&gt;
# jails which haven&#039;t started&lt;br /&gt;
On 7.x and newer systems it will print out the problems (which jails haven&#039;t started) at the conclusion of postboot. &lt;br /&gt;
On older systems you will need to watch closely to see if/when there&#039;s a problem, namely:&lt;br /&gt;
 &lt;br /&gt;
 [hostname] doesnt exist on this server&lt;br /&gt;
&lt;br /&gt;
When you get this message, it means one of 2 things:&lt;br /&gt;
1. the jail really didn&#039;t start:&lt;br /&gt;
When a jail doesn&#039;t start it usually boils down to a problem in the quad file. Perhaps the path name is wrong (data1 vs data2) or the name of the vn/mdfile is wrong. Once this is corrected, you will need to run the commands from the quad file manually, or you may use &amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. the customer has changed their hostname (and not told us) so their jail &#039;&#039;is&#039;&#039; running, just under a different hostname:&lt;br /&gt;
On systems with jls, this is easy to rectify. First, get the customer info: &amp;lt;tt&amp;gt;g &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
Then look for the customer in jls: &amp;lt;tt&amp;gt;jls | grep &amp;lt;col0XXXX&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
From there you will see their new hostname- you should update that hostname in the quad file: don&#039;t forget to edit it on the &amp;lt;tt&amp;gt;## begin ##&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;## end ##&amp;lt;/tt&amp;gt; lines, and in mgmt. &lt;br /&gt;
On older systems without jls, this will be harder; you will need to look further to find their hostname- perhaps it&#039;s in their /etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once all jails are started, do some spot checks- try to ssh or browse to some customers, just to make sure things are really ok.&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 7.x/8.x jail ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (md) ==&lt;br /&gt;
&lt;br /&gt;
NOTE: this doesn’t apply to mx2, which uses gvinum. Use the same procedure as 6.x&lt;br /&gt;
NOTE: if you unmount before mdconfig, re-mdconfig (attach), then unmount, then mdconfig -u again &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
(parts to change/customize are &amp;lt;tt&amp;gt;&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;red&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it; send it to them (it explains the downtime, etc).&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from &amp;lt;tt&amp;gt;js&amp;lt;/tt&amp;gt;. Ideally, you want to put the customer’s new space on a different partition (and create the new md on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. Unmount it (including their devfs) but leave the md config’d (so if you use stopjail, you will have to re-mdconfig it)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mdfile and mount name, and to see the current md device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. When there&#039;s enough room to place the new system on an alternate, or the same, drive:&lt;br /&gt;
USE CAUTION not to overwrite (touch, mdconfig) existing md!!&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s 10g -f /mnt/data3/69.55.234.66-col01334 -u 97&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md97&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new space is on a different drive, move the mount point directory AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm the restored data size matches the original usage figure&lt;br /&gt;
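A tiny helper makes that comparison easier- run it against the old mount before the dump and the new mount after the restore (the helper name used_kb is made up, not an existing script):&lt;br /&gt;

```shell
#!/bin/sh
# Print the used-KB column from df -kP for a given mount point.
used_kb() {
    df -kP "$1" | awk 'NR == 2 { print $3 }'
}
# example: usage of the root filesystem
used_kb /
```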
&lt;br /&gt;
&lt;br /&gt;
md-unconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
archive old mdfile. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/old-col00241-mdfile-noarchive-20091211&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (vq1) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;97&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
(run &amp;lt;tt&amp;gt;df&amp;lt;/tt&amp;gt; to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, &amp;lt;tt&amp;gt;du&amp;lt;/tt&amp;gt; the dump file to confirm its size roughly matches the usage)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure and remove old mdfile&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mdconfig -d -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
rm /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.237.26-col00241&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(there should now be enough space to recreate your bigger system. If not, run sync a couple of times)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new system (ok to reuse old mdfile and md#):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;touch /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mdconfig -a -t vnode -s &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;10&amp;lt;/span&amp;gt;g -f /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334&amp;lt;/span&amp;gt; -u &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to /dev/md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
umount nfs:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbu&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If md# changed (or mount point), edit quad (&amp;lt;tt&amp;gt;vq1&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new md number (md&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. Optional: archive old mdfile&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;br&amp;gt;&lt;br /&gt;
gzip -c old-col01588-mdfile-noarchive-20120329 &amp;gt; /deprecated/old-col01588-mdfile-noarchive-20120329.gz&amp;lt;br&amp;gt;&lt;br /&gt;
mbu&amp;lt;br&amp;gt;&lt;br /&gt;
rm  old-col01588-mdfile-noarchive-20120329&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 6.x jail (gvinum/gconcat) ==&lt;br /&gt;
or&lt;br /&gt;
== Moving customer to a different drive (gvinum/gconcat) ==&lt;br /&gt;
&lt;br /&gt;
(parts to change are &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;highlighted&amp;lt;/span&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If someone wants more disk space, there’s a paste for it; send it to them (it explains the downtime, etc).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to put the customer’s new space on a different partition (and create the new volume on the new partition). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. make a mental note of how much space they&#039;re currently using&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;lt;tt&amp;gt;[[#stopjail|stopjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;[[#g|g]] &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info (IP/cust#) needed to feed the new mount name and existing volume/device. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5a. When there&#039;s enough room to place the new system on an alternate, or the same, drive (using only UNUSED gvinum volumes- a volume does not count as unused if it&#039;s in use by the system in question):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure the new device:&amp;lt;br&amp;gt;&lt;br /&gt;
A. for a 2G system (single gvinum volume):&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;bsdlabel -r -w /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
-or- &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
B. for a &amp;gt;2G system (create a gconcat volume):&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Other valid gconcat examples:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label v82-v84v109v112 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v109 /dev/gvinum/v112&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat label v82v83 /dev/gvinum/v82 /dev/gvinum/v83&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Note, long names will truncate: v144v145v148-v115 will truncate to v144v145v148-v1 (so you will refer to it as v144v145v148-v1 thereafter)&lt;br /&gt;
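Consistent with the example above, the name is simply cut at 15 characters, which you can reproduce in the shell:&lt;br /&gt;

```shell
# printf's precision mirrors the truncation: keep only the first 15 chars
printf '%.15s\n' v144v145v148-v115
# prints: v144v145v148-v1
```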
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; OR &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
A. &amp;lt;tt&amp;gt;mount /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
-or-&amp;lt;br&amp;gt;&lt;br /&gt;
B. &amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
do the dump and pipe directly to restore:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f - /dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt; | restore -r -f - &amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/gvinum/v&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;123&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;) , run &amp;lt;tt&amp;gt;buildsafe&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5b. When there&#039;s not enough room on an alternate partition, or on the same drive...but there is enough room if you were to remove the existing customer&#039;s space (i.e. if you want/need to reuse the existing gvinum volumes and add on more):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
mount backup nfs mounts:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mbm&amp;lt;/tt&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
(run df to confirm backup mounts are mounted)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
dump the customer to backup2 or backup1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;dump -0a -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; /dev/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;gconcat/v106-v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(when complete WITHOUT errors, du the dump file to confirm its size roughly matches the usage)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unconfigure the old gconcat volume&amp;lt;br&amp;gt;&lt;br /&gt;
list member gvinum volumes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat list &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Output will resemble:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;Geom name: v106v107&lt;br /&gt;
State: UP&lt;br /&gt;
Status: Total=2, Online=2&lt;br /&gt;
Type: AUTOMATIC&lt;br /&gt;
ID: 3530663882&lt;br /&gt;
Providers:&lt;br /&gt;
1. Name: concat/v106v107&lt;br /&gt;
   Mediasize: 4294966272 (4.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e2&lt;br /&gt;
Consumers:&lt;br /&gt;
1. Name: gvinum/sd/v106.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 0&lt;br /&gt;
   End: 2147483136&lt;br /&gt;
2. Name: gvinum/sd/v107.p0.s0&lt;br /&gt;
   Mediasize: 2147483648 (2.0G)&lt;br /&gt;
   Sectorsize: 512&lt;br /&gt;
   Mode: r1w1e3&lt;br /&gt;
   Start: 2147483136&lt;br /&gt;
   End: 4294966272&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
stop volume and clear members&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat stop &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
gconcat clear &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;gvinum/sd/v106.p0.s0 gvinum/sd/v107.p0.s0&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
create the new device- it&#039;s ok to reuse old/former members&amp;lt;br&amp;gt;&lt;br /&gt;
try to grab a contiguous block of gvinum volumes. gconcat volumes MAY NOT span drives (i.e. you cannot use a gvinum volume from data3 and a volume from data2 in the same gconcat volume). &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;gconcat label &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107 /dev/gvinum/v82 /dev/gvinum/v83 /dev/gvinum/v84 /dev/gvinum/v106 /dev/gvinum/v107&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
bsdlabel -r -w /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional- if new volume is on a different drive, move the mount point directory (get the drive from js output) AND use that directory in the mount and cd commands below:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mv /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
confirm you are mounted to the device (/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;) and space is correct:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;mount /dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd /mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3/69.55.234.66-col01334-DIR&amp;lt;/span&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
df .&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
do the restore from the dumpfile on the backup server:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;restore -r -f /backup&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;4/col00241.20120329.noarchive.dump&amp;lt;/span&amp;gt; .&amp;lt;br&amp;gt;&lt;br /&gt;
rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
when dump/restore completes successfully, use df to confirm the restored data size matches the original usage figure&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
edit quad (&amp;lt;tt&amp;gt;vq&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;1&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;) to point to new (/mnt/data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;) location AND new volume (&amp;lt;tt&amp;gt;/dev/concat/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;v82-v84v106v107&amp;lt;/span&amp;gt;a&amp;lt;/tt&amp;gt;), run buildsafe&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
restart the jail:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;startjail &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
TODO: clean up/clear old gvin/gconcat vol&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. update backup list AND move backups, if applicable&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;mvbackups &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;69.55.237.26-col00241&amp;lt;/span&amp;gt; jail&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;9&amp;lt;/span&amp;gt; data&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;3&amp;lt;/span&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - steps to tack a new gvinum volume onto an existing gconcat volume- this leads to a corrupted fs&lt;br /&gt;
bsdlabel -e /dev/concat/v82-v84&lt;br /&gt;
&lt;br /&gt;
To figure out new size of the c partition, multiply 4194304 by the # of 2G gvinum volumes and subtract the # of 2G volumes:&lt;br /&gt;
10G: 4194304 * 5 - 5 = 20971515&lt;br /&gt;
8G: 4194304 * 4 - 4 = 16777212&lt;br /&gt;
6G: 4194304 * 3 - 3 = 12582909&lt;br /&gt;
4G: 4194304 * 2 - 2 = 8388606&lt;br /&gt;
&lt;br /&gt;
To figure out the new size of the a partition, subtract 16 from the c partition:&lt;br /&gt;
10G: 20971515 - 16 = 20971499&lt;br /&gt;
8G: 16777212 - 16 = 16777196&lt;br /&gt;
6G: 12582909 - 16 = 12582893&lt;br /&gt;
4G: 8388606 - 16 = 8388590&lt;br /&gt;
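The two tables above follow directly from those formulas; a quick shell check (all sizes in 512-byte sectors):&lt;br /&gt;

```shell
#!/bin/sh
# c partition: 4194304 sectors per 2G gvinum member, minus 1 sector per member
# a partition: the c size minus the 16-sector offset
for n in 2 3 4 5; do
  c=$(( 4194304 * n - n ))
  a=$(( c - 16 ))
  echo "${n} vols: c=${c} a=${a}"
done
```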
&lt;br /&gt;
Orig:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a:  8388590       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c:  8388606        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
New:&lt;br /&gt;
8 partitions:&lt;br /&gt;
#        size   offset    fstype   [fsize bsize bps/cpg]&lt;br /&gt;
  a: 12582893       16    4.2BSD     2048 16384 28552&lt;br /&gt;
  c: 12582909        0    unused        0     0         # &amp;quot;raw&amp;quot; part, don&#039;t edit&lt;br /&gt;
&lt;br /&gt;
sync; sync&lt;br /&gt;
&lt;br /&gt;
growfs /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
fsck –fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
sync&lt;br /&gt;
&lt;br /&gt;
fsck –fy /dev/concat/v82-v84a&lt;br /&gt;
&lt;br /&gt;
(keep running fsck’s till NO errors)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Adding disk to a 4.x jail ==&lt;br /&gt;
&lt;br /&gt;
NOTE: if you unmount before vnconfig, re-vnconfig then unmount then vnunconfig&lt;br /&gt;
 &lt;br /&gt;
If someone wants more disk space, there’s a paste for it, send it to them.&lt;br /&gt;
&lt;br /&gt;
1. Figure out the space avail from [[#js|js]]. Ideally, you want to dump to a different partition and create the new vn on the current partition. If no space to make make dump, then move old vn’s to backup or worse, dump to backup.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; to get the info&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. make vnfile, newfs and mount. USE CAUTION not to overwrite existing vn. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;vnconfig –T -S 9g -s labels -c /dev/vn22 /mnt/data2/65.214.160.117-col00496&amp;lt;br&amp;gt;&lt;br /&gt;
disklabel -r -w vn30 auto&amp;lt;br&amp;gt;&lt;br /&gt;
newfs /dev/vn30c&amp;lt;br&amp;gt;&lt;br /&gt;
mkdir /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
mount /dev/vn30c /mnt/data2/69.55.237.26-col00241-DIR&amp;lt;br&amp;gt;&lt;br /&gt;
cd /mnt/data2/65.214.160.117-col00496-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;lt;tt&amp;gt;jailkill &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. &amp;lt;tt&amp;gt;dump -0a -f - /dev/vn1 | restore -r -f -&amp;lt;/tt&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. &amp;lt;tt&amp;gt;rm restoresymtable&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. unmount and unvnconfig old system:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;umount /dev/vnNNc &amp;lt;br&amp;gt;&lt;br /&gt;
vnconfig –u /dev/vnNN&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. move vnfile. Ex: &amp;lt;tt&amp;gt;mv /mnt/data1/69.55.237.26-col00241 /mnt/data1/old-col00241-vnfile-20110312-noarchive&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove old dir: &amp;lt;tt&amp;gt;rmdir /mnt/data1/69.55.237.26-col00241-DIR&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. edit quad/safe to point to new location, run &amp;lt;tt&amp;gt;[[#buildsafe|buildsafe]]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start jail: &amp;lt;tt&amp;gt;[[#startjail|startjail]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. update disk (and dir if applicable) in mgmt screen&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
13. mv backups if necessary&lt;br /&gt;
 &lt;br /&gt;
WARNING: if you restore to unmounted vn then you’re actually restoring to /mnt/data1&lt;br /&gt;
you can tar and do above to recover&lt;br /&gt;
 &lt;br /&gt;
NOTE: you can’t move a customer to a system that has a base sys that is diff from the one they came from (BSD 4.8 to 4.5 = won’t work)&lt;br /&gt;
 &lt;br /&gt;
NOTE: we don’t mount procfs&#039;s to anyone’s system by default.&lt;br /&gt;
&lt;br /&gt;
NOTE: mount_nulls: &lt;br /&gt;
also goofy mount_nulls seen in df -k on jail2&amp;lt;br&amp;gt;&lt;br /&gt;
also needs to be done in safe and quad&amp;lt;br&amp;gt;&lt;br /&gt;
if he wants another, use safe/quad as example and add it to file. also exec command at prompt to add mount (no reboot, remount or rejail necessary)&lt;br /&gt;
&lt;br /&gt;
== Moving customer to another jail machine ==&lt;br /&gt;
&lt;br /&gt;
Systems should only be moved between similarly-versioned jails. If you cannot get a match, you should go to freebsd.org and see what the differences were between the version you&#039;re moving from and the new version. Many of the updates will be to drivers and kernels - these don&#039;t affect the customer. Perhaps they updated traceroute or a library...you should copy the new/changed files into the VPS/jail on the new host. An alternate method for moving to a different version would be to buildworld to bring their full distribution up to date. Optional steps for this kind of upgrade are included below.&lt;br /&gt;
&lt;br /&gt;
1. &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. &amp;lt;tt&amp;gt;[[#jailkill|jailkill]] &amp;lt;hostname&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. create new device on target system&amp;lt;br&amp;gt;&lt;br /&gt;
4.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;vnconfig –T –S 4g –s labels –c /dev/vn1 /mnt/data1/69.55.22x.x-col00XXX&lt;br /&gt;
disklabel –r –w vn1 auto&lt;br /&gt;
newfs /dev/vn1c&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6.x:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;bsdlabel -r -w /dev/gvinum/v1&lt;br /&gt;
newfs /dev/gvinum/v1a&lt;br /&gt;
 or &lt;br /&gt;
gconcat label v1-v3 /dev/gvinum/v1 /dev/gvinum/v2 /dev/gvinum/v3&lt;br /&gt;
bsdlabel -r -w /dev/concat/v1-v3&lt;br /&gt;
newfs /dev/concat/v1-v3a&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7.x+:&amp;lt;br&amp;gt;&lt;br /&gt;
Run jailmakeempty then skip step 4 &amp;amp; 7 below&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. make and mount the dir on new system. &amp;lt;br&amp;gt;&lt;br /&gt;
Ex: &amp;lt;pre&amp;gt;mkdir /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
chmod 755 /mnt/data2/69.55.230.3-col00123-DIR&lt;br /&gt;
mount [device] /mnt/data2/69.55.230.3-col00123-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5. [[#stopjail|stopjail]] &amp;lt;hostname&amp;gt; 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. dump fs to new system. ex: &amp;lt;tt&amp;gt;[[#dumpremoterestore|dumpremoterestore]] /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&amp;lt;/tt&amp;gt; (make sure you can ssh as root on remote machine) &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6a. OPTIONAL BUILDWORLD:&amp;lt;br&amp;gt;&lt;br /&gt;
 cd /usr/src&lt;br /&gt;
 make world DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
 cd etc&lt;br /&gt;
 make distribution DESTDIR=/mnt/data2/69.55.xxx.xx-col0xxxx-DIR&lt;br /&gt;
&lt;br /&gt;
(you may have to rm an openssh file and re-make dist)&lt;br /&gt;
&lt;br /&gt;
 rm -rf /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/periodic/daily/400.status-disks&lt;br /&gt;
 vi /etc/periodic/security/100.chksetuid&lt;br /&gt;
replace: &amp;lt;tt&amp;gt;MP=`mount -t ufs | grep -v &amp;quot; nosuid&amp;quot; | awk &#039;{ print $3 }&#039; | sort`&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
with: &amp;lt;tt&amp;gt;MP=&#039;/&#039; (use single quotes)&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ask user if they want ports overwritten with current, if yes:&lt;br /&gt;
 cp -r /usr/ports /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/usr&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw –d .&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target: &lt;br /&gt;
 cd /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc; vipw –d . &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
on source: &lt;br /&gt;
 cat /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group&lt;br /&gt;
(copy in all info)&lt;br /&gt;
&lt;br /&gt;
on target:&lt;br /&gt;
 cat &amp;gt; /mnt/data2/69.55.xxx.xx-col0xxxx-DIR/etc/group &lt;br /&gt;
(paste all info)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. edit quad on source system copy over entries for the jail to the target system &lt;br /&gt;
take care that the vn/gvinum/gconcat/md devices on the target system are’t in use and that the /mnt/dataN path doesn’t need changing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8. run [[#buildsafe|buildsafe]] (if this is &amp;lt;=6.x) on target system (copies changes made to quad into safe file)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
9. remove ip from source system. Ex: &amp;lt;tt&amp;gt;ipdel 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. add ip to target system. Ex: &amp;lt;tt&amp;gt;ipadd 69.55.230.3&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
11. start new system, with [[#startjail|startjail]] or manually, pasting entries found from running &amp;lt;tt&amp;gt;g &amp;lt;customerID&amp;gt;&amp;lt;/tt&amp;gt; on the new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
12. run &amp;lt;tt&amp;gt;[[#canceljail|canceljail]] col0xxxx&amp;lt;/tt&amp;gt; on source system. This should prompt you to remove backups if any existed. Do not let it do this, rather:&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
13. if backups existed move them to the new host via the &amp;lt;tt&amp;gt;[[#mvbackups_.28freebsd.29|mvbackups]] script&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
14. edit quad on source system, edit comment to reflect a move rather than a cancel, ex: &amp;lt;tt&amp;gt;# moved to jail2 col00241&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
15. edit mgmt to reflect new host and dir for new system&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
16. optional add ipfw rules on new system. Ex:&lt;br /&gt;
&amp;lt;tt&amp;gt;ipfw add 01231 count ip from 69.55.230.2 to any&lt;br /&gt;
ipfw add 01232 count ip from any to 69.55.230.2&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Increasing inodes for a VPS ==&lt;br /&gt;
&lt;br /&gt;
when doing the newfs: &lt;br /&gt;
 newfs -i  4096 ...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Jail services unable to contact services within same jail ==&lt;br /&gt;
&lt;br /&gt;
This is due to a messed up routing table (only seen it on jail2 before). &lt;br /&gt;
&lt;br /&gt;
 route delete 69.55.228.65/32 route add -net 69.55.228.65/32 -iface fxp0 -nostatic -cloning route add 69.55.228.65 -iface lo0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== /dev/null permission resets == &lt;br /&gt;
&lt;br /&gt;
Applies to 4.x jails only&lt;br /&gt;
&lt;br /&gt;
For some reason that I do not understand at all, the /dev/null node in customer jails, after the system is restarted, often reverts permissions to 0600 ... which is bad, because non-root processes cannot redirect to /dev/null anymore.  In fact, a fair number of server daemons will complain and/or fail if this is the case.&lt;br /&gt;
&lt;br /&gt;
So, after a restart, after all systems have restarted, run postboot, which does something like:&lt;br /&gt;
 &lt;br /&gt;
 for f in `df -k | grep /dev/vn | awk &#039;{print $6}&#039;` ; do chmod 0666 $f/dev/null ; done&lt;br /&gt;
&lt;br /&gt;
Later, after the system is up, if anyone ever complains about their /dev/null permissions for any reason ... really if anyone mentions&lt;br /&gt;
/dev/null in any way, tell them to run:&lt;br /&gt;
&lt;br /&gt;
 chmod 0666 /dev/null&lt;br /&gt;
&lt;br /&gt;
We have ultimately fixed this by adding chmods to the quad/safe&#039;s&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== mknod in a jail == &lt;br /&gt;
&lt;br /&gt;
For a customer who wants to run a chroot name server (named) in a jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /mnt/data1/&amp;lt;user directory&amp;gt;/var/named&lt;br /&gt;
mknod -c 1 1 null&lt;br /&gt;
mknod -c 2 3 random&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Postfix Problems ==&lt;br /&gt;
&lt;br /&gt;
This hasn&#039;t happened in a long time, probably no longer valid with modern OS&#039;s&lt;br /&gt;
&lt;br /&gt;
Postfix is an alternate MTA - people replace sendmail with it in much the same way that many people also replace sendmail with qmail.  The problem is that if you install postfix inside of a jail, by default it will not work properly - the user will see error messages like this:&lt;br /&gt;
&lt;br /&gt;
incoming mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 mail_queue_enter: create file incoming/298266.55902: File too large&lt;br /&gt;
&lt;br /&gt;
and further, sending mail generates errors like this:&lt;br /&gt;
&lt;br /&gt;
 postdrop: warning: mail_queue_enter: create file maildrop/627930.56676: File too large&lt;br /&gt;
&lt;br /&gt;
This is very easy to solve.  In fact, the FreeBSD welcome email that is sent out by `jailmake` now contains this block of text:&lt;br /&gt;
&lt;br /&gt;
- if you plan on installing postfix, email us first and ask for a necessary patch.&lt;br /&gt;
&lt;br /&gt;
So, anytime you see a support email that says anything about postfix not working, paste that line from the welcome email into your response to them, and right away, forward the postfix patch to them in an email.&lt;br /&gt;
&lt;br /&gt;
Here are the instructions:&lt;br /&gt;
&lt;br /&gt;
Simply copy the patch into /usr/ports/mail/postfix/files, rename the file to ‘patch-file_limit.c’ and then cd to /usr/ports/mail/postfix and run `make install`.&lt;br /&gt;
&lt;br /&gt;
The postfix patch is the very first email in the support email box, and we never delete or save it because we always want it there to forward to people.  The instructions in the email are very clear, and it solves their problem 100% of the time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems with `find` ==&lt;br /&gt;
&lt;br /&gt;
There was a bug in the FreeBSD vn-filesystem code.  The bug causes the system to crash or hang when someone runs the `find` command inside their vn-backed filesystem (and even though the bug may have been fixed, due to the i/o overhead, we still don&#039;t like find).&lt;br /&gt;
&lt;br /&gt;
Now, this does not always happen - however, any jail machine with more than 20 systems on it will crash every night if every system on it runs the daily periodic script out of their crontab, and updates their locate database with the find command.&lt;br /&gt;
&lt;br /&gt;
(the locate database is a small, FreeBSD specific database that is populated nightly from a cron job, and is then used to provide fast answers to the `locate` command)&lt;br /&gt;
&lt;br /&gt;
In normal systems (32-40 jails on them), all that has to be done is make sure nobody runs the daily periodic.  This means that the file /etc/crontab, instead of looking like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# do daily/weekly/monthly maintenance&lt;br /&gt;
1      3       *       *       *       root    periodic daily&lt;br /&gt;
15     4       *       *       6       root    periodic weekly&lt;br /&gt;
30     5       1       *       *       root    periodic monthly&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Needs to look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# DO NOT UNCOMMENT THESE - contact support@johncompanies.com for details&lt;br /&gt;
#1      3       *       *       *       root    periodic daily&lt;br /&gt;
#15     4       *       *       6       root    periodic weekly&lt;br /&gt;
#30     5       1       *       *       root    periodic monthly&lt;br /&gt;
## DO NOT UNCOMMENT THESE ^^^^^^^^&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the problem will generally not occur at all.  New filesystem images are always altered in this way, so all systems are set like this.&lt;br /&gt;
&lt;br /&gt;
However, if you have enough systems running, even casual uses of find, outside of the periodic scripts, can cause the system to crash.  We are seeing this on jail13, which crashes about once per week.&lt;br /&gt;
&lt;br /&gt;
There is not yet any resolution to this problem.&lt;br /&gt;
&lt;br /&gt;
So, if you have a freebsd system crash, see if anyone has uncommented their daily periodic line - you can see them for everyone by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;for f in `df -k | grep vn | awk &#039;{print $6}&#039;` ; do echo `cat $f/etc/crontab | grep &amp;quot;periodic daily&amp;quot;` $f ; done | more&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(all one line, one command)&lt;br /&gt;
&lt;br /&gt;
And then edit those /etc/crontabs and comment them out again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Problems un-mounting - and with mount_null’s ==&lt;br /&gt;
&lt;br /&gt;
If you cannot unmount a filesystem, beacuse it says the filesystem is busy, it is because of three things:&lt;br /&gt;
&lt;br /&gt;
a) the jail is still running&lt;br /&gt;
&lt;br /&gt;
b) you are actually in that directory, even though the jail is stopped&lt;br /&gt;
&lt;br /&gt;
c) there are still dev, null_mount or linprocfs mount points mounted inside that directory.&lt;br /&gt;
&lt;br /&gt;
d) when trying to umount null_mounts that are really long and you get an error like “No such file or directory”, it’s an OS bug where the dir is truncated. No known fix&lt;br /&gt;
&lt;br /&gt;
e) there are still files open somewhere inside the dir. Use &amp;lt;tt&amp;gt;fstat | grep &amp;lt;cid&amp;gt;&amp;lt;/tt&amp;gt; to find the process that has files open&lt;br /&gt;
&lt;br /&gt;
f) Starting with 6.x, the jail mechanism does a poor job of keeping track of processes running in a jail and if it thinks there are still procs running, it will refuse to umount the disk. If this is happening you should see a low number in the #REF column when you run jls. In this case you &#039;&#039;can&#039;&#039; safely &amp;lt;tt&amp;gt;umount –f&amp;lt;/tt&amp;gt; the mount. &lt;br /&gt;
&lt;br /&gt;
Please note -if you forcibly unmount a (4.x) filesystem that has null_mounts&lt;br /&gt;
still mounted in it, the system &#039;&#039;&#039;will crash&#039;&#039;&#039; within 10-15 mins.&lt;br /&gt;
&lt;br /&gt;
== Misc jail Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space on jail2, jail8, jail9, a couple jails on jail17, jail4, jail12 and jail18.&lt;br /&gt;
Even though the vn file shows 4G size, it doesn’t actually occupy that amount of space on the disk. So be careful not to fill up drives where we’re overselling – use oversellcheck to confirm you’re not oversold by more than 10G.&lt;br /&gt;
There are other truncated jails, they are generally noted in a the file on the root system: /root/truncated&lt;br /&gt;
&lt;br /&gt;
The act of moving a truncated vn to another system un-does the truncating- the truncated vn is filled with 0’s and it occupies physical disk space for which it’s configured. So, you should use dumpremote to preserve the truncation.&lt;br /&gt;
&lt;br /&gt;
* if you are getting disk full messages for a BSD customer, it&#039;s fairly safe to clear out their /usr/ports/distfiles dir&lt;br /&gt;
* 4.x: ps and top an only be run by root in these jails.  Done on purpose:&lt;br /&gt;
As for `ps` and `top` - non-root users can run them, just not successfully because we have locked the permissions on /dev/mem and /dev/kmem to be root-readable only.  That is why non-root users cannot successfully run `ps` and `top`.&lt;br /&gt;
* user quotas do not work on freebsd jails - you cannot set up quotas at all, and that&#039;s that.&lt;br /&gt;
* You cannot inject a process into a 4.x jail, only HUP running processes. 6.x and onward you can with jexec&lt;br /&gt;
* jails see base machine’s uptime/load when running top/w&lt;br /&gt;
* if someone is unable to get in- cant ping, etc- see if they were blocked by castle (DoS), see if their ip is on the system (post reboot it was lost cause wasn’t in the rc.conf). preboot should catch that.&lt;br /&gt;
* in FreeBSD you cant su to root unless you belong to wheel group – so if you remove your acct and setup a new one, we have to add it to wheel group (add to /etc/group)&lt;br /&gt;
* Dmesg from underlying sys is seen in customers dmesg on jail&lt;br /&gt;
* Popper process going crazy @ 40% for 10min = someone who leaves mail on server&lt;br /&gt;
* Don’t force umounts on 4.x jails – it crashes the machine, generally ok on newer machines (running md)&lt;br /&gt;
* Good book for admin http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?userid=t824VyRAYz&amp;amp;isbn=0596005164&amp;amp;itm=2&lt;br /&gt;
* Self-sign ssl cert http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#selfcert&lt;br /&gt;
* conversation with Glenn about semaphores and pgsql&lt;br /&gt;
&amp;lt;pre&amp;gt;SDBoody: hey, these are valid amounts/figures/increments right:&lt;br /&gt;
 kern.ipc.semmni=1280&lt;br /&gt;
 kern.ipc.semmns=1280&lt;br /&gt;
 &lt;br /&gt;
gr8feen: probably... I always forget exactly what those are, so I usually have to look them up&lt;br /&gt;
SDBoody: semaphores&lt;br /&gt;
SDBoody: i took the current 1024 and added 256 to them&lt;br /&gt;
SDBoody: need more for pgsql&lt;br /&gt;
gr8feen: I meant the mni and mns parts...&lt;br /&gt;
gr8feen: some of those are not ones you just want to add to...   hang on a sec and I&#039;ll look them up..&lt;br /&gt;
gr8feen: what&#039;s semmsl set to?&lt;br /&gt;
SDBoody: kern.ipc.semmsl: 1024&lt;br /&gt;
 &lt;br /&gt;
SDBoody: kern.ipc.msgseg: 2048 &lt;br /&gt;
 kern.ipc.msgssz: 8 &lt;br /&gt;
 kern.ipc.msgtql: 40 &lt;br /&gt;
 kern.ipc.msgmnb: 2048 &lt;br /&gt;
 kern.ipc.msgmni: 40 &lt;br /&gt;
 kern.ipc.msgmax: 16384 &lt;br /&gt;
 kern.ipc.semaem: 16384 &lt;br /&gt;
 kern.ipc.semvmx: 65534 &lt;br /&gt;
 kern.ipc.semusz: 152 &lt;br /&gt;
 kern.ipc.semume: 10 &lt;br /&gt;
 kern.ipc.semopm: 100 &lt;br /&gt;
 kern.ipc.semmsl: 1024 &lt;br /&gt;
 kern.ipc.semmnu: 512 &lt;br /&gt;
 kern.ipc.semmns: 1024 &lt;br /&gt;
 kern.ipc.semmni: 1024 &lt;br /&gt;
 kern.ipc.semmap: 768 &lt;br /&gt;
 kern.ipc.shm_allow_removed: 0 &lt;br /&gt;
 kern.ipc.shm_use_phys: 1 &lt;br /&gt;
 kern.ipc.shmall: 262144 &lt;br /&gt;
 kern.ipc.shmseg: 256 &lt;br /&gt;
 kern.ipc.shmmni: 784 &lt;br /&gt;
 kern.ipc.shmmin: 1 &lt;br /&gt;
 kern.ipc.shmmax: 536870912 &lt;br /&gt;
 kern.ipc.maxsockets: 25600 &lt;br /&gt;
 &lt;br /&gt;
gr8feen: ok...msl is max per id, mni is max ids, mns is max number of semaphores... so you probably want something like mns = mni * msl&lt;br /&gt;
gr8feen: which one did you run out of?&lt;br /&gt;
SDBoody: not sure how to tell- ipcs shows the sems in use add up to 1024&lt;br /&gt;
SDBoody: there are 59 entries&lt;br /&gt;
gr8feen: I&#039;m assuming you tried to start postgres and it failed?&lt;br /&gt;
SDBoody: yes&lt;br /&gt;
gr8feen: it should have logged why, somewhere..&lt;br /&gt;
gr8feen: if I recall, it&#039;ll tell you which one it ran out of&lt;br /&gt;
SDBoody: &amp;gt; DETAIL:  Failed system call was semget(1, 17, 03600).&lt;br /&gt;
 &lt;br /&gt;
gr8feen: so it wanted an id with 17 semaphores...I&#039;d start by making mns = 17*mni and leave mni and mnl set to what they are now and see what it does&lt;br /&gt;
SDBoody: i think mni is plenty high&lt;br /&gt;
SDBoody: ok, more reasonable 17408&lt;br /&gt;
gr8feen: yeah...just change that one and see how it goes..&lt;br /&gt;
SDBoody: and leave mni alone at 1024?&lt;br /&gt;
gr8feen: yeah...mni id the max number of ids...but if it&#039;s trying to get something like 17 per id, your going to hit mns before you hit anything else&lt;br /&gt;
SDBoody: right, but doesn&#039;t hurt to have it that high (assuming)&lt;br /&gt;
&lt;br /&gt;
gr8feen: not really...  I think those get allocated out of ram that you cant page out, but it&#039;s still such a small amount that it really doesn&#039;t matter&lt;br /&gt;
SDBoody: looks like that worked, thx!&lt;br /&gt;
gr8feen: cool&lt;br /&gt;
&lt;br /&gt;
see semaphores: ipcs -a -s&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= FreeBSD VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
These files are located in /usr/local/jail/rc.d and /usr/local/jail/bin&lt;br /&gt;
&lt;br /&gt;
== jailmake ==&lt;br /&gt;
&lt;br /&gt;
Applies to 7.x+ &lt;br /&gt;
On older systems syntax differs, run jailmake once to see.&lt;br /&gt;
&lt;br /&gt;
Note: this procedure differs on mx2 which is 7.x but still uses gvinum&lt;br /&gt;
&lt;br /&gt;
#	run js to figure out which md’s are in use, which disk has enough space, IP to put it on&lt;br /&gt;
#	use col00xxx for both hostnames if they don’t give you a hostname&lt;br /&gt;
#	copy over dir, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;Usage: jailmake IP[,IP] CID disk[1|2|3] md# hostname shorthost ipfw# email [size in GB]&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ex: &lt;br /&gt;
&lt;br /&gt;
 Jail2# jailmake 69.55.234.66 col01334 3 97 vps.bsd.it vps 1334 fb@bsd.it&lt;br /&gt;
&lt;br /&gt;
== jailps ==&lt;br /&gt;
 jailps [hostname]&lt;br /&gt;
DEPRECATED FOR jps: displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname of the jail you wish to query. If you don’t &lt;br /&gt;
supply an argument, all processes on the machine are listed and grouped by jail. &lt;br /&gt;
&lt;br /&gt;
== jps ==&lt;br /&gt;
 jps [hostname]&lt;br /&gt;
displays processes belonging to/running inside a jail. The command&lt;br /&gt;
takes one (optional) argument – the hostname or ID of the jail you wish to query. &lt;br /&gt;
&lt;br /&gt;
== jailkill ==&lt;br /&gt;
 jailkill &amp;lt;hostname&amp;gt;&lt;br /&gt;
stops all process running in a jail.&lt;br /&gt;
&lt;br /&gt;
You can also run:&lt;br /&gt;
 jailkill &amp;lt;JID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== problems ===&lt;br /&gt;
Occasionally you will hit an issue where jail will not kill off:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailkill www.domain.com&lt;br /&gt;
www.domain.com .. killed: none&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
Because no processes are running under that hostname.  You cannot use jailps.pl either:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9# jailps www.domain.com&lt;br /&gt;
www.domain.com doesn’t exist on this server&lt;br /&gt;
jail9#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reasons for this are usually:&lt;br /&gt;
* the jail is no longer running&lt;br /&gt;
&lt;br /&gt;
* the jail&#039;s hostname has changed&lt;br /&gt;
In this case, &lt;br /&gt;
&lt;br /&gt;
&amp;gt;=6.x: run a &amp;lt;tt&amp;gt;jls|grep &amp;lt;jail&#039;s IP&amp;gt;&amp;lt;/tt&amp;gt; to find the correct hostname, then update the quad file, then kill the jail.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;6.x: the first step is to cat their /etc/rc.conf file to see if you can tell what they set the new hostname to.  This very often works.  For example:&lt;br /&gt;
&lt;br /&gt;
 cat /mnt/data2/198.78.65.136-col00261-DIR/etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
But maybe they set the hostname with the hostname command, and the original hostname is still in /etc/rc.conf.&lt;br /&gt;
&lt;br /&gt;
The welcome email clearly states that they should tell us if they change their hostname, so there is no problem in just emailing them and asking them what they set the new hostname to.&lt;br /&gt;
&lt;br /&gt;
Once you know the new hostname OR if a customer simply emails to inform you that they have set the hostname to something different, you need to edit the quad and safe files that their system is in to input the new hostname.&lt;br /&gt;
&lt;br /&gt;
However, if push comes to shove and you cannot find out the hostname from them or from their system, then you need to start doing some detective work.&lt;br /&gt;
&lt;br /&gt;
The easiest thing to do is run jailps looking for a hostname similar to their original hostname. Or you could get into the /bin/sh shell by running:&lt;br /&gt;
&lt;br /&gt;
 /bin/sh&lt;br /&gt;
&lt;br /&gt;
and then looking at every hostname of every process:&lt;br /&gt;
&lt;br /&gt;
 for f in `ls /proc` ; do cat /proc/$f/status ; done&lt;br /&gt;
&lt;br /&gt;
and scanning for a hostname that is either similar to their original hostname, or that you don&#039;t see in any of the quad safe files.&lt;br /&gt;
&lt;br /&gt;
This is very brute force though, and it is possible that catting every file in /proc is dangerous - I don&#039;t recommend it.  A better thing would be to identify any processes that you know belong to this system – perhaps the reason you are trying to find this system is because they are running something bad - and just catting the status from only that PID.&lt;br /&gt;
&lt;br /&gt;
Somewhere there’s a jail where there may be 2 systems named www.  Look at /etc/rc.conf and make sure they’re both really www. If they are, jailkill www, jailps www to make sure not running.  Then immediately restart the other one, as the fqdn (as found from a rev nslookup)&lt;br /&gt;
&lt;br /&gt;
* on &amp;gt;=6.x the hostname may not yet be hashed:&lt;br /&gt;
&amp;lt;pre&amp;gt;jail9 /# jls&lt;br /&gt;
 JID Hostname                    Path                                  IP Address(es)&lt;br /&gt;
   1 bitnet.dgate.org            /mnt/data1/69.55.232.50-col02094-DIR  69.55.232.50&lt;br /&gt;
   2 ns3.hctc.net                /mnt/data1/69.55.234.52-col01925-DIR  69.55.234.52&lt;br /&gt;
   3 bsd1                        /mnt/data1/69.55.232.44-col00155-DIR  69.55.232.44&lt;br /&gt;
   4 let2.bbag.org               /mnt/data1/69.55.230.92-col00202-DIR  69.55.230.92&lt;br /&gt;
   5 post.org                    /mnt/data2/69.55.232.51-col02095-DIR  69.55.232.51 ...&lt;br /&gt;
   6 ns2                         /mnt/data1/69.55.232.47-col01506-DIR  69.55.232.47 ...&lt;br /&gt;
   7 arlen.server.net            /mnt/data1/69.55.232.52-col01171-DIR  69.55.232.52&lt;br /&gt;
   8 deskfood.com                /mnt/data1/69.55.232.71-col00419-DIR  69.55.232.71&lt;br /&gt;
   9 mirage.confluentforms.com   /mnt/data1/69.55.232.54-col02105-DIR  69.55.232.54 ...&lt;br /&gt;
  10 beachmember.com             /mnt/data1/69.55.232.59-col02107-DIR  69.55.232.59&lt;br /&gt;
  11 www.agottem.com             /mnt/data1/69.55.232.60-col02109-DIR  69.55.232.60&lt;br /&gt;
  12 sdhobbit.myglance.org       /mnt/data1/69.55.236.82-col01708-DIR  69.55.236.82&lt;br /&gt;
  13 ns1.jnielsen.net            /mnt/data1/69.55.234.48-col00204-DIR  69.55.234.48 ...&lt;br /&gt;
  14 ymt.rollingegg.net          /mnt/data2/69.55.236.71-col01678-DIR  69.55.236.71&lt;br /&gt;
  15 verse.unixlore.net          /mnt/data1/69.55.232.58-col02131-DIR  69.55.232.58&lt;br /&gt;
  16 smcc-mail.org               /mnt/data2/69.55.232.68-col02144-DIR  69.55.232.68&lt;br /&gt;
  17 kasoutsuki.w4jdh.net        /mnt/data2/69.55.232.46-col02147-DIR  69.55.232.46&lt;br /&gt;
  18 dili.thium.net              /mnt/data2/69.55.232.80-col01901-DIR  69.55.232.80&lt;br /&gt;
  20 www.tekmarsis.com           /mnt/data2/69.55.232.66-col02155-DIR  69.55.232.66&lt;br /&gt;
  21 vps.yoxel.net               /mnt/data2/69.55.236.67-col01673-DIR  69.55.236.67&lt;br /&gt;
  22 smitty.twitalertz.com       /mnt/data2/69.55.232.84-col02153-DIR  69.55.232.84&lt;br /&gt;
  23 deliver4.klatha.com         /mnt/data2/69.55.232.67-col02160-DIR  69.55.232.67&lt;br /&gt;
  24 nideffer.com                /mnt/data2/69.55.232.65-col00412-DIR  69.55.232.65&lt;br /&gt;
  25 usa.hanyuan.com             /mnt/data2/69.55.232.57-col02163-DIR  69.55.232.57&lt;br /&gt;
  26 daifuku.ppbh.com            /mnt/data2/69.55.236.91-col01720-DIR  69.55.236.91&lt;br /&gt;
  27 collins.greencape.net       /mnt/data2/69.55.232.83-col01294-DIR  69.55.232.83&lt;br /&gt;
  28 ragebox.com                 /mnt/data2/69.55.230.104-col01278-DIR 69.55.230.104&lt;br /&gt;
  29 outside.mt.net              /mnt/data2/69.55.232.72-col02166-DIR  69.55.232.72&lt;br /&gt;
  30 vps.payneful.ca             /mnt/data2/69.55.234.98-col01999-DIR  69.55.234.98&lt;br /&gt;
  31 higgins                     /mnt/data2/69.55.232.87-col02165-DIR  69.55.232.87 ...&lt;br /&gt;
  32 ozymandius                  /mnt/data2/69.55.228.96-col01233-DIR  69.55.228.96&lt;br /&gt;
  33 trusted.realtors.org        /mnt/data2/69.55.238.72-col02170-DIR  69.55.238.72&lt;br /&gt;
  34 jc1.flanderous.com          /mnt/data2/69.55.239.22-col01504-DIR  69.55.239.22&lt;br /&gt;
  36 guppylog.com                /mnt/data2/69.55.238.73-col00036-DIR  69.55.238.73&lt;br /&gt;
  40 haliohost.com               /mnt/data2/69.55.234.41-col01916-DIR  69.55.234.41 ...&lt;br /&gt;
  41 satyr.jorge.cc              /mnt/data1/69.55.232.70-col01963-DIR  69.55.232.70&lt;br /&gt;
jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
ERROR: jail_: jail &amp;quot;satyr,jorge,cc&amp;quot; not found&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note how it&#039;s saying &amp;lt;tt&amp;gt;satyr,jorge,cc&amp;lt;/tt&amp;gt; is not found, and not &amp;lt;tt&amp;gt;satyr.jorge.cc&amp;lt;/tt&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The jail subsystem tracks jails by comma-delimited hostnames. That name mapping is regenerated every few hours by cron:&lt;br /&gt;
&lt;br /&gt;
 jail9 /# crontab -l&lt;br /&gt;
 0 0,6,12,18 * * * /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
So if we run this manually:&lt;br /&gt;
 jail9 /# /usr/local/jail/bin/sync_jail_names&lt;br /&gt;
&lt;br /&gt;
Then kill the jail:&lt;br /&gt;
 jail9 /# jailkill satyr.jorge.cc&lt;br /&gt;
 successfully killed: satyr,jorge,cc&lt;br /&gt;
&lt;br /&gt;
It worked.&lt;br /&gt;
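As the error above shows, the jail subsystem's internal name is just the hostname with the dots swapped for commas. A minimal sketch of the translation (the hostname is the one from the example above):&lt;br /&gt;

```shell
# The jail name tracked by the jail subsystem is the hostname with
# dots replaced by commas (matching the error message shown above).
host=satyr.jorge.cc
jailname=$(printf '%s' "$host" | tr '.' ',')
echo "$jailname"   # satyr,jorge,cc
```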
&lt;br /&gt;
&lt;br /&gt;
If you ever see this when trying to kill a jail:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# jailkill e-scribe.com&lt;br /&gt;
killing JID: 6 hostname: e-scribe.com&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
3 procs running&lt;br /&gt;
...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;[[#jailkill|jailkill]]&amp;lt;/tt&amp;gt; probably got lost trying to kill off the jail. Just ctrl-c the jailkill process, then run a jailps on the hostname, and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; any process which is still running. Keep running jailps and &amp;lt;tt&amp;gt;kill -9&amp;lt;/tt&amp;gt; until all processes are gone.&lt;br /&gt;
&lt;br /&gt;
== jailpsall ==&lt;br /&gt;
 jailpsall&lt;br /&gt;
will run a jailps on all jails configured in the quad files (this is different from&lt;br /&gt;
jailps with no arguments as it won’t help you find a “hidden” system)&lt;br /&gt;
&lt;br /&gt;
== jailpsw ==&lt;br /&gt;
 jailpsw&lt;br /&gt;
will run a jailps with an extra -w to provide wider output&lt;br /&gt;
&lt;br /&gt;
== jt (&amp;gt;=7.x) ==&lt;br /&gt;
 jt&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;gt;=7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
a wrapper for top displaying processes on the server and which jail owns them. Constantly updates, like top. &lt;br /&gt;
&lt;br /&gt;
== jtop (&amp;lt;7.x) ==&lt;br /&gt;
 jtop&lt;br /&gt;
displays the top 20 processes on the server (the top 20 processes from top) and &lt;br /&gt;
which jail owns them. This is very helpful for determining who is doing what when&lt;br /&gt;
the server is very busy.&lt;br /&gt;
&lt;br /&gt;
== stopjail ==&lt;br /&gt;
 stopjail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will jailkill, umount and vnconfig -u a jail. If passed an optional 2nd&lt;br /&gt;
argument, it will not exit before umounting and un-vnconfig’ing in the event&lt;br /&gt;
jailkill returns no processes killed. This is useful if you just want to umount&lt;br /&gt;
and vnconfig -u a jail you’ve already killed. It is intelligent in that it won’t &lt;br /&gt;
try to umount or vnconfig -u if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== startjail ==&lt;br /&gt;
 startjail &amp;lt;hostname&amp;gt;&lt;br /&gt;
this will start vnconfig, mount (including linprocfs and null-mounts), and start a jail.&lt;br /&gt;
Essentially, it reads the jail’s relevant block from the right quad file and executes it.&lt;br /&gt;
It is intelligent in that it won’t try to mount or vnconfig if it’s not necessary.&lt;br /&gt;
&lt;br /&gt;
== jpid ==&lt;br /&gt;
 jpid &amp;lt;pid&amp;gt;&lt;br /&gt;
displays information about a process – including which jail owns it.&lt;br /&gt;
It’s the equivalent of running cat /proc/&amp;lt;pid&amp;gt;/status&lt;br /&gt;
&lt;br /&gt;
== canceljail ==&lt;br /&gt;
 canceljail &amp;lt;hostname&amp;gt; [1]&lt;br /&gt;
this will stop a jail (the equivalent of stopjail), check for backups (offer to remove them &lt;br /&gt;
from the backup server and the backup.config), rename the vnfile, remove the dir, and &lt;br /&gt;
edit quad/safe. If passed an optional 2nd argument, it will not exit upon failing to kill&lt;br /&gt;
any processes owned by the jail. This is useful if you just want to cancel a jail which &lt;br /&gt;
is already stopped.&lt;br /&gt;
&lt;br /&gt;
== jls ==&lt;br /&gt;
 jls [-v]&lt;br /&gt;
Lists all jails running:&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF IP Address      Hostname                     Path&lt;br /&gt;
 101  135 69.55.224.148   mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#REF is the number of references or procs(?) running&lt;br /&gt;
&lt;br /&gt;
Running with -v will give you all IPs assigned to each jail (7.2 up)&lt;br /&gt;
&amp;lt;pre&amp;gt;JID #REF Hostname                     Path                                  IP Address(es)&lt;br /&gt;
 101  139 mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR 69.55.224.148 69.55.234.85&amp;lt;/pre&amp;gt;&lt;br /&gt;
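If you need to find which jail holds a given IP, the -v listing can be filtered with awk. A sketch, with a sample line standing in for live &amp;lt;tt&amp;gt;jls -v&amp;lt;/tt&amp;gt; output:&lt;br /&gt;

```shell
# Find the jail hostname bound to a given IP from jls -v style rows.
# $sample stands in for `jls -v` output on a real system; fields 5+
# are the IP addresses, field 3 is the hostname.
sample=' 101  139 mail.pc9.org                 /mnt/data2/69.55.224.148-col01034-DIR 69.55.224.148 69.55.234.85'
ip=69.55.234.85
hostname=$(printf '%s\n' "$sample" | awk -v ip="$ip" '
    { for (i = 5; i <= NF; i++) if ($i == ip) print $3 }')
echo "$hostname"   # mail.pc9.org
```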
&lt;br /&gt;
== startalljails ==&lt;br /&gt;
 startalljails&lt;br /&gt;
7.2+ only. This will parse through quad1 and start all jails. It utilizes lockfiles so it won’t try to start a jail more than once- therefore multiple instances can be running in parallel without fear of starting a jail twice. If a jail startup gets stuck, you can ^C without fear of killing the script. IMPORTANT- before running startalljails you should make sure you ran preboot once as it will clear out all the lockfiles and enable startalljails to work properly.&lt;br /&gt;
&lt;br /&gt;
== aaccheck.sh ==&lt;br /&gt;
 aaccheck.sh&lt;br /&gt;
displays the output of container list and task list from aaccli&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update jail scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== buildsafe ==&lt;br /&gt;
 buildsafe&lt;br /&gt;
creates safe files based on quads (automatically removing the fsck’s). This will destructively overwrite safe files&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be setup as a cronjob to watch processes on a jail when the load &lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== checkprio.pl ==&lt;br /&gt;
 checkprio.pl&lt;br /&gt;
will look for any process (other than the current shell’s csh, sh, sshd procs) with a non-normal priority and normalize it&lt;br /&gt;
&lt;br /&gt;
== diskusagemon == &lt;br /&gt;
 diskusagemon &amp;lt;mount point&amp;gt; &amp;lt;1k blocks&amp;gt;&lt;br /&gt;
watches a mount point’s disk use, when it reaches the level specified in the 2nd argument,&lt;br /&gt;
it exits. This is useful when doing a restore and you want to be paged as it’s nearing completion.&lt;br /&gt;
Best used as: &amp;lt;tt&amp;gt;diskusagemon /asd/asd 1234; pagexxx&amp;lt;/tt&amp;gt;&lt;br /&gt;
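A rough sketch of the polling loop diskusagemon presumably implements (the function name and the 60-second interval here are assumptions, not the script's actual internals):&lt;br /&gt;

```shell
# Poll a mount point's used 1K blocks via df and return once they
# reach the threshold. -P keeps df output on one line per filesystem.
watch_usage() {
    mnt=$1; limit=$2
    while :; do
        used=$(df -kP "$mnt" | awk 'NR==2 {print $3}')
        [ "$used" -ge "$limit" ] && return 0
        sleep 60
    done
}
```

Chained as in the tip above: &amp;lt;tt&amp;gt;watch_usage /asd/asd 1234; pagexxx&amp;lt;/tt&amp;gt;.&lt;br /&gt;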
&lt;br /&gt;
== dumprestore ==&lt;br /&gt;
 dumprestore &amp;lt;dumpfile&amp;gt;&lt;br /&gt;
this is a perl expect script which automatically enters ‘1’ and ‘y’. On large restores it&lt;br /&gt;
seems to cause restore to fail to set owner permissions.&lt;br /&gt;
&lt;br /&gt;
== g ==&lt;br /&gt;
 g &amp;lt;search&amp;gt;&lt;br /&gt;
greps the quad/safe files for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gather.pl ==&lt;br /&gt;
 gather.pl&lt;br /&gt;
gathers up data about jails configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== ipfwbackup ==&lt;br /&gt;
 ipfwbackup&lt;br /&gt;
writes ipfw traffic count data to a logfile&lt;br /&gt;
&lt;br /&gt;
== ipfwreset ==&lt;br /&gt;
 ipfwreset&lt;br /&gt;
writes ipfw traffic count data to a logfile and resets counters to 0&lt;br /&gt;
&lt;br /&gt;
== js ==&lt;br /&gt;
 js&lt;br /&gt;
output varies by OS version, but generally provides information about the base jail:&lt;br /&gt;
* which vn’s are in use&lt;br /&gt;
* disk usage&lt;br /&gt;
* info about the contents of quads&lt;br /&gt;
* the # of inodes represented by the jails contained in the group (133.2 in the example below), and how many jails per data mount, as well as subtotals&lt;br /&gt;
* ips bound to the base machine but not in use by a jail&lt;br /&gt;
* free gvinum volumes, or unused vn’s or used md’s&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/local/jail/rc.d/quad1:&lt;br /&gt;
        /mnt/data1 133.2 (1)&lt;br /&gt;
        /mnt/data2 1040.5 (7)&lt;br /&gt;
        total 1173.7 (8)&lt;br /&gt;
/usr/local/jail/rc.d/quad2:&lt;br /&gt;
        /mnt/data1 983.4 (6)&lt;br /&gt;
        total 983.4 (6)&lt;br /&gt;
/usr/local/jail/rc.d/quad3:&lt;br /&gt;
        /mnt/data1 693.4 (4)&lt;br /&gt;
        /mnt/data2 371.6 (3)&lt;br /&gt;
        total 1065 (7)&lt;br /&gt;
/usr/local/jail/rc.d/quad4:&lt;br /&gt;
        /mnt/data1 466.6 (3)&lt;br /&gt;
        /mnt/data2 882.2 (5)&lt;br /&gt;
        total 1348.8 (8)&lt;br /&gt;
/mnt/data1: 2276.6 (14)&lt;br /&gt;
/mnt/data2: 2294.3 (15)&lt;br /&gt;
&lt;br /&gt;
Available IPs:&lt;br /&gt;
69.55.230.11 69.55.230.13 69.55.228.200&lt;br /&gt;
&lt;br /&gt;
Available volumes:&lt;br /&gt;
v78 /mnt/data2 2G&lt;br /&gt;
v79 /mnt/data2 2G&lt;br /&gt;
v80 /mnt/data2 2G&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== makevirginjail ==&lt;br /&gt;
 makevirginjail&lt;br /&gt;
Only on some systems, makes an empty jail (doesn&#039;t do restore step)&lt;br /&gt;
&lt;br /&gt;
== mb == &lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== notify.sh ==&lt;br /&gt;
 notify.sh&lt;br /&gt;
emails reboot@johncompanies.com – intended to be called at boot time to alert us to a machine which panics and reboots and isn’t caught by bb or castle.&lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatch ==&lt;br /&gt;
 orphanedbackupwatch&lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to delete them&lt;br /&gt;
&lt;br /&gt;
== postboot ==&lt;br /&gt;
 postboot&lt;br /&gt;
to be run after a machine reboot and quad/safe’s are done executing. It will:&lt;br /&gt;
* do chmod 666 on each jail’s /dev/null&lt;br /&gt;
* add ipfw counts&lt;br /&gt;
* run jailpsall (so you can see if a configured jail isn’t running)&lt;br /&gt;
&lt;br /&gt;
== preboot ==&lt;br /&gt;
 preboot&lt;br /&gt;
to be run before running quad/safe – checks for misconfigurations: &lt;br /&gt;
* a jail configured in a quad but not a safe&lt;br /&gt;
* a jail is listed more than once in a quad&lt;br /&gt;
* the ip assigned to a jail isn’t configured on the machine&lt;br /&gt;
* alias numbering skips in the rc.conf (resulting in the above)&lt;br /&gt;
* orphaned vnfile&#039;s that aren&#039;t mentioned in a quad/safe&lt;br /&gt;
* ip mismatches between dir/vnfile name and the jail’s ip&lt;br /&gt;
* dir/vnfiles&#039;s in quad/safe that don’t exist &lt;br /&gt;
&lt;br /&gt;
== quadanalyze.pl ==&lt;br /&gt;
 quadanalyze.pl&lt;br /&gt;
called by js, produces the info (seen above with js explanation) about the contents of quad (inode count, # of jails, etc.)&lt;br /&gt;
&lt;br /&gt;
== rsync.backup ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== taskdone ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was executed as the subject&lt;br /&gt;
&lt;br /&gt;
== topten ==&lt;br /&gt;
 topten&lt;br /&gt;
summarizes the top 10 traffic users (called by ipfwreset)&lt;br /&gt;
&lt;br /&gt;
== trafficgather.pl ==&lt;br /&gt;
 trafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by jail to support@johncompanies.com and payments@johncompanies.com. Optional arguments are year and month (must be in the past). If not passed, assumes last month. Relies on traffic logs created by ipfwreset and ipfwbackup&lt;br /&gt;
&lt;br /&gt;
== trafficwatch.pl ==&lt;br /&gt;
 trafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a jail reaches the warning level (35G) and the limit (40G). We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== trafstats ==&lt;br /&gt;
 trafstats&lt;br /&gt;
writes ipfw traffic usage info by jail to a file called jc_traffic_dump in each jail’s / dir&lt;br /&gt;
&lt;br /&gt;
== truncate_jailmake ==&lt;br /&gt;
 truncate_jailmake&lt;br /&gt;
a version of jailmake which creates truncated vnfiles.&lt;br /&gt;
&lt;br /&gt;
== vb ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/bin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vs (freebsd) ==&lt;br /&gt;
 vs&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/safe&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vq (freebsd) ==&lt;br /&gt;
 vq&amp;lt;n&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/jail/rc.d/quad&amp;lt;n&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dumpremote ==&lt;br /&gt;
 dumpremote &amp;lt;user@machine&amp;gt; &amp;lt;/remote/location/file-dump&amp;gt; &amp;lt;vnX&amp;gt;&lt;br /&gt;
ex: dumpremote user@10.1.4.117 /mnt/data3/remote.echoditto.com-dump 7&lt;br /&gt;
this will dump a vn filesystem to a remote machine and location&lt;br /&gt;
&lt;br /&gt;
== oversellcheck ==&lt;br /&gt;
 oversellcheck&lt;br /&gt;
displays how much a disk is oversold or undersold taking into account truncated vn files. Only for use on 4.x systems&lt;br /&gt;
&lt;br /&gt;
== mvbackups (freebsd) ==&lt;br /&gt;
 mvbackups &amp;lt;dir&amp;gt; (1.1.1.1-col00001-DIR) &amp;lt;target_machine&amp;gt; (jail1) &amp;lt;target_dir&amp;gt; (data1)&lt;br /&gt;
moves backups from one location to another on the backup server, provides you with the option to remove entries from the current backup.config, and prints a simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== jailnice ==&lt;br /&gt;
 jailnice &amp;lt;hostname&amp;gt;&lt;br /&gt;
applies &amp;lt;tt&amp;gt;renice 19 [PID]&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;rtprio 31 -[PID]&amp;lt;/tt&amp;gt; to each process in the given jail&lt;br /&gt;
&lt;br /&gt;
== dumpremoterestore ==&lt;br /&gt;
 dumpremoterestore &amp;lt;device&amp;gt; &amp;lt;ip of target machine&amp;gt; &amp;lt;dir on target machine&amp;gt;&lt;br /&gt;
ex: dumpremoterestore /dev/vn51 10.1.4.118 /mnt/data2/69.55.239.45-col00688-DIR&lt;br /&gt;
dumps a device and restores it to a directory on a remote machine. Requires that you enable root ssh on the &lt;br /&gt;
remote machine.&lt;br /&gt;
&lt;br /&gt;
== psj ==&lt;br /&gt;
 psj&lt;br /&gt;
shows just the procs running on the base system – a ps auxw but without jail’d procs present&lt;br /&gt;
&lt;br /&gt;
== perc5iraidchk ==&lt;br /&gt;
 perc5iraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2950 systems with Perc5/6 controllers&lt;br /&gt;
&lt;br /&gt;
== perc4eraidchk ==&lt;br /&gt;
 perc4eraidchk&lt;br /&gt;
checks for degraded arrays on Dell 2850 systems with Perc4e/Di controllers&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS =&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vm) ==&lt;br /&gt;
&lt;br /&gt;
This applies only to new virts &amp;gt;= 4.x&lt;br /&gt;
&lt;br /&gt;
grab ip from ipmap (if opened from the pending cust screen it should take you to the right block). You can also run vzlist -a to see what block is in use, generally. Try to find an IP that&#039;s in the same block of class C IP&#039;s already on the box.&lt;br /&gt;
&lt;br /&gt;
1. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
  also, ping the IP address.&lt;br /&gt;
&lt;br /&gt;
2. put CT on whichever partition has more space&lt;br /&gt;
  df -h&lt;br /&gt;
&lt;br /&gt;
3.  vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; ) &lt;br /&gt;
 vm col00009 69.55.230.238 centos.testdave.com /vz1 dsmith@johncompanies.com centos-6-x86_64 LM&lt;br /&gt;
&lt;br /&gt;
4. copy veid, dir, ip and password to pending customer screen. activate customer&lt;br /&gt;
&lt;br /&gt;
== Making new customer VE (vemakexxx) ==&lt;br /&gt;
&lt;br /&gt;
This applies to older virts with old templates. This should probably not be used at all anymore.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. look thru hist for ip&lt;br /&gt;
&lt;br /&gt;
2. confirm ip you want to use isn’t in use via Mgmt. -&amp;gt; IP Map in management screens&lt;br /&gt;
&lt;br /&gt;
3. put ve on whichever partition has more space&lt;br /&gt;
 vemakerh9 &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; &amp;lt;hostname&amp;gt; &amp;lt;mount&amp;gt; &amp;lt;email&amp;gt; [gb disk]; &amp;lt;256|384|512&amp;gt; &amp;lt;veid&amp;gt;&lt;br /&gt;
 vemakerh9 866 69.55.226.109 ngentu.com /vz1 ayo@ngantu.com,asd@asd.com 5; 256 866&lt;br /&gt;
&lt;br /&gt;
4. copy (veid), dir, and ip to pending customer screen (pass set to p455agfa)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: We use VEID (Virtual Environment ID) and CTID (Container ID) interchangeably. Similarly, VE and CT. They mean the same thing.&lt;br /&gt;
VZPP = VirtuoZzo Power Panel (the control panel for each CT)&lt;br /&gt;
&lt;br /&gt;
All linux systems exist in /vz, /vz1 or /vz2 - since each linux machine holds roughly 60-90 customers, there will be roughly 30-45 in each partition.&lt;br /&gt;
&lt;br /&gt;
The actual filesystem of the system in question is in:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/private/(VEID)&lt;br /&gt;
&lt;br /&gt;
Where VEID is the identifier for that system - an all-numeric string larger than 100.&lt;br /&gt;
&lt;br /&gt;
The actual mounted and running systems are in the corresponding:&lt;br /&gt;
&lt;br /&gt;
 /vz(1-2)/root/(VEID)&lt;br /&gt;
&lt;br /&gt;
But we rarely interact with any system from this mount point.&lt;br /&gt;
&lt;br /&gt;
You should never need to touch the root portion of their system – however you can traverse their filesystem by going to &amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/root&amp;lt;/tt&amp;gt; (&amp;lt;tt&amp;gt;/vz(1-2)/private/(VEID)/fs/root&amp;lt;/tt&amp;gt; on 4.x systems); the root of their filesystem is in that directory, and their entire system is underneath that.&lt;br /&gt;
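The 4.x layout difference can be handled mechanically when scripting against a VE's private area. A sketch (VEID 999 and /vz1 are placeholders):&lt;br /&gt;

```shell
# Resolve a VE's on-disk filesystem root, preferring the 4.x
# fs/root layout when present (999 and /vz1 are placeholders).
veid=999
base=/vz1/private/$veid
if [ -d "$base/fs/root" ]; then
    root=$base/fs/root      # 4.x layout
else
    root=$base/root         # pre-4.x layout
fi
echo "$root"
```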
&lt;br /&gt;
Every VE has a startup script in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts&amp;lt;/tt&amp;gt;  (which is symlinked as &amp;lt;tt&amp;gt;/vzconf&amp;lt;/tt&amp;gt; on all systems) - the VE startup script is simply named &amp;lt;tt&amp;gt;(VEID).conf&amp;lt;/tt&amp;gt; - it contains all the system parameters for that VE:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Configuration file generated by vzsplit for 60 VE&lt;br /&gt;
# on HN with total amount of physical mem 2011 Mb&lt;br /&gt;
&lt;br /&gt;
VERSION=&amp;quot;2&amp;quot;&lt;br /&gt;
CLASSID=&amp;quot;2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ONBOOT=&amp;quot;yes&amp;quot;&lt;br /&gt;
&lt;br /&gt;
KMEMSIZE=&amp;quot;8100000:8200000&amp;quot;&lt;br /&gt;
LOCKEDPAGES=&amp;quot;322:322&amp;quot;&lt;br /&gt;
PRIVVMPAGES=&amp;quot;610000:615000&amp;quot;&lt;br /&gt;
SHMPAGES=&amp;quot;33000:34500&amp;quot;&lt;br /&gt;
NUMPROC=&amp;quot;410:415&amp;quot;&lt;br /&gt;
PHYSPAGES=&amp;quot;0:2147483647&amp;quot;&lt;br /&gt;
VMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
OOMGUARPAGES=&amp;quot;13019:2147483647&amp;quot;&lt;br /&gt;
NUMTCPSOCK=&amp;quot;1210:1215&amp;quot;&lt;br /&gt;
NUMFLOCK=&amp;quot;107:117&amp;quot;&lt;br /&gt;
NUMPTY=&amp;quot;19:19&amp;quot;&lt;br /&gt;
NUMSIGINFO=&amp;quot;274:274&amp;quot;&lt;br /&gt;
TCPSNDBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
TCPRCVBUF=&amp;quot;1800000:1900000&amp;quot;&lt;br /&gt;
OTHERSOCKBUF=&amp;quot;900000:950000&amp;quot;&lt;br /&gt;
DGRAMRCVBUF=&amp;quot;200000:200000&amp;quot;&lt;br /&gt;
NUMOTHERSOCK=&amp;quot;650:660&amp;quot;&lt;br /&gt;
DCACHE=&amp;quot;786432:818029&amp;quot;&lt;br /&gt;
NUMFILE=&amp;quot;7500:7600&amp;quot;&lt;br /&gt;
AVNUMPROC=&amp;quot;51:51&amp;quot;&lt;br /&gt;
IPTENTRIES=&amp;quot;155:155&amp;quot;&lt;br /&gt;
DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
DISKINODES=&amp;quot;400000:420000&amp;quot;&lt;br /&gt;
CPUUNITS=&amp;quot;1412&amp;quot;&lt;br /&gt;
QUOTAUGIDLIMIT=&amp;quot;2000&amp;quot;&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/636&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/636&amp;quot;&lt;br /&gt;
NAMESERVER=&amp;quot;69.55.225.225 69.55.230.3&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;vzredhat-7.3/20030305&amp;quot;&lt;br /&gt;
VE_TYPE=&amp;quot;regular&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.225.229&amp;quot;&lt;br /&gt;
HOSTNAME=&amp;quot;textengine.net&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As you can see, the hostname is set here, the disk space is set here, the number of inodes, the number of files that can be open, the number of tcp sockets, etc. - all are set here.&lt;br /&gt;
&lt;br /&gt;
In fact, everything that can be set on this customer system is set in this conf file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All interaction with the customer system is done with the VEID.  You start the system by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl start 999&lt;br /&gt;
&lt;br /&gt;
You stop it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl stop 999&lt;br /&gt;
&lt;br /&gt;
You execute commands in it by running:&lt;br /&gt;
&lt;br /&gt;
 vzctl exec 999 df -k&lt;br /&gt;
&lt;br /&gt;
You enter into it, via a root-shell backdoor with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and you set parameters for the system, while it is still running, with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; is the most commonly used command - we have aliased &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; since we use it so often. We’ll continue to use &amp;lt;tt&amp;gt;vzctl&amp;lt;/tt&amp;gt; in our examples, but feel free to use just &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say the user wants more diskspace.  You can cat their conf file and see:&lt;br /&gt;
&lt;br /&gt;
 DISKSPACE=&amp;quot;4194304:4613734&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So right now they have 4gigs of space.  You can then change it to 6 with:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskspace 6100000:6200000 --save&lt;br /&gt;
&lt;br /&gt;
IMPORTANT:  all issuances of the vzctl set command need to end with &amp;lt;tt&amp;gt;--save&amp;lt;/tt&amp;gt; - if they don&#039;t, the setting will be set, but it will not be saved to the conf file, and they will not have those settings next time they boot.&lt;br /&gt;
&lt;br /&gt;
All of the tunables in the conf file can be set with the vzctl set command.  Note that in the conf file, and on the vzctl set command line, we always issue two numbers separated by a colon - that is because we are setting the hard and soft limits.  Always set the hard limit slightly above the soft limit, as you see it is in the conf file for all those settings.&lt;br /&gt;
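The soft:hard arithmetic is easy to script. A sketch, assuming DISKSPACE values are 1K blocks and using roughly 5% of headroom between soft and hard (the conf-file examples vary in exact headroom, so that ratio is an assumption):&lt;br /&gt;

```shell
# Compute DISKSPACE soft:hard values for a desired size in GB.
# Values are 1K blocks; the ~5% headroom mirrors the conf-file
# examples but is not a fixed rule. 999 is a placeholder VEID.
GB=6
SOFT=$((GB * 1024 * 1024))    # 6 GB -> 6291456 1K blocks
HARD=$((SOFT + SOFT / 20))    # ~5% above soft
echo "vzctl set 999 --diskspace ${SOFT}:${HARD} --save"
```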
&lt;br /&gt;
There are also things you can set with `&amp;lt;tt&amp;gt;vzctl set&amp;lt;/tt&amp;gt;` that are not in the conf file as settings, per se.  For instance, you can add IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --save&lt;br /&gt;
&lt;br /&gt;
or multiple IPs:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --ipadd 10.10.10.10 --ipadd 10.10.20.30 --save&lt;br /&gt;
&lt;br /&gt;
or change the hostname:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --hostname www.example.com --save&lt;br /&gt;
&lt;br /&gt;
You can even set the nameservers:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --nameserver 198.78.66.4 --nameserver 198.78.70.180 --save&lt;br /&gt;
&lt;br /&gt;
Although you probably will never do that.&lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --disabled yes --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from being started (by VZPP or reboot) (&amp;lt;=3.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --onboot=no --save &lt;br /&gt;
&lt;br /&gt;
You can disable a VPS from using his control panel:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --offline_management=no --save &lt;br /&gt;
&lt;br /&gt;
You can suspend a VPS, so it can be resumed in the same state it was in when it was stopped (4.x):&lt;br /&gt;
&lt;br /&gt;
 vzctl suspend 999&lt;br /&gt;
&lt;br /&gt;
and to resume it:&lt;br /&gt;
&lt;br /&gt;
 vzctl resume 999&lt;br /&gt;
&lt;br /&gt;
to see who owns a process:&lt;br /&gt;
 vzpid &amp;lt;PID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to mount up an unmounted ve:&lt;br /&gt;
 vzctl mount 827&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see network stats for CT&#039;s:&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
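The per-CT columns are easy to total with awk. A sketch, with sample rows standing in for live vznetstat output:&lt;br /&gt;

```shell
# Sum the Output(bytes) column of vznetstat-style rows. awk's
# numeric coercion drops the trailing M, so the total is in MB.
sample='24218     1            484M             39M
24245     1            463M            143M'
total_mb=$(printf '%s\n' "$sample" | awk '{sum += $3} END {print sum}')
echo "${total_mb}M"   # 947M
```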
&lt;br /&gt;
&lt;br /&gt;
One thing that sometimes comes up on older systems that we created with smaller defaults is that the system would run out of inodes.  The user will email and say they cannot create any more files or grow any files larger, but they will also say that they are not out of diskspace ... they are running:&lt;br /&gt;
&lt;br /&gt;
 df -k&lt;br /&gt;
&lt;br /&gt;
and seeing how much space is free - and they are not out of space.  They are most likely out of inodes - which they would see by running:&lt;br /&gt;
&lt;br /&gt;
 df -i&lt;br /&gt;
&lt;br /&gt;
So, the first thing you should do is enter their system with:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 999&lt;br /&gt;
&lt;br /&gt;
and run:  &amp;lt;tt&amp;gt;df -i&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
to confirm your theory.  Then exit their system.  Then simply cat their conf file and see what their inodes are set to (probably 200000:200000, since that was the old default on the older systems) and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --diskinodes 400000:400000 --save&lt;br /&gt;
&lt;br /&gt;
If they are not out of inodes, then a good possibility is that they have maxed out their numfile configuration variable, which controls how many files they can have in their system.  The current default is 7500 (which nobody has ever hit), but the old default was as low as 2000, so you would run something like:&lt;br /&gt;
&lt;br /&gt;
 vzctl set 999 --numfile 7500:7500 --save&lt;br /&gt;
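The df -i check can be scripted for triage. A sketch using sample output in place of running &amp;lt;tt&amp;gt;df -i&amp;lt;/tt&amp;gt; inside the VE (the /dev/simfs row and VEID 999 are placeholders):&lt;br /&gt;

```shell
# Pull the inode-use percentage from `df -i` style output; at 100%
# the fix is raising --diskinodes, otherwise suspect --numfile.
sample='Filesystem     Inodes  IUsed IFree IUse% Mounted on
/dev/simfs     200000 200000     0  100% /'
pct=$(printf '%s\n' "$sample" | awk 'NR==2 {sub(/%/, "", $5); print $5}')
if [ "$pct" -ge 100 ]; then
    echo "out of inodes: vzctl set 999 --diskinodes 400000:400000 --save"
fi
```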
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You cannot start or stop a VE if your pwd is its private (/vz/private/999) or root (/vz/root/999) directories, or anywhere below them.&lt;br /&gt;
&lt;br /&gt;
== Recovering from a crash (linux) ==&lt;br /&gt;
&lt;br /&gt;
=== Diagnose whether you have a crash ===&lt;br /&gt;
The most important thing is to get the machine and all ve’s back up as soon as possible. Note the time, you’ll need to create a crash log entry (Mgmt. -&amp;gt; Reference -&amp;gt; CrashLog). The first thing to do is head over to the [[Screen#Screen_Organization|serial console screen]] and see if there’s any kernel error message output. Try to copy any messages (or just a sample of repeating messages) you see into the notes section of the crash log – these will also likely need to be sent to virtuozzo for interpretation. If the messages are spewing too fast, hit ^O + H to start a screen log dump which you can observe after the machine is rebooted. Additionally, if the machine is responsive, you can get a trace to send to virtuozzo by hooking up a kvm and entering these 3 sequences:&lt;br /&gt;
&amp;lt;pre&amp;gt;alt+print screen+m&lt;br /&gt;
alt+print screen+p&lt;br /&gt;
alt+print screen+t&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If there are no messages, the machine may just be really busy- wait a bit (5-10min) to see if it comes back. If it&#039;s still pinging, odds are it&#039;s very busy. If it doesn&#039;t come back, or the messages indicate a fatal error, you will need to proceed with a power cycle (ctrl+alt+del will not work).&lt;br /&gt;
&lt;br /&gt;
=== Power cycle the server ===&lt;br /&gt;
If this machine is not a Dell 2950 with a [[DRAC/RMM#DRAC|DRAC card]] (i.e. if you can’t ssh into the DRAC card and issue racadm serveraction hardreset), then you will need someone at the data center to power the machine off, wait 30 sec, then turn it back on.  Make sure to re-attach via console (&amp;lt;tt&amp;gt;tip virtxx&amp;lt;/tt&amp;gt;) immediately after power down. &lt;br /&gt;
&lt;br /&gt;
=== (Re)attach to the console ===&lt;br /&gt;
Stay on the console the entire time during boot. As the BIOS posts- look out for the RAID card output- does everything look healthy? The output may be scrambled, look for &amp;quot;DEGRADED&amp;quot; or &amp;quot;FAILED&amp;quot;. Once the OS starts booting you will be disconnected (dropped back to the shell on the console server) a couple times during the boot up. The reason you want to quickly re-attach is two-fold: 1. If you don’t reattach quickly then you won’t get any console output, 2. you want to be attached before the server &#039;&#039;potentially&#039;&#039; starts (an extensive) fsck. If you attach after the fsck begins, you’ll have seen no indication it started an fsck and the server will appear frozen during startup- no output, no response. &lt;br /&gt;
&lt;br /&gt;
=== Start containers/VE&#039;s/VPSs ===&lt;br /&gt;
When the machine begins to start VE’s, it’s safe to leave the console and login via ssh. All virts should be set to auto start all the VEs after a crash. Further, most (newer) virts are set to “fastboot” their VE’s (to find out, do:&lt;br /&gt;
 grep -i fast /etc/sysconfig/vz &lt;br /&gt;
and look for &amp;lt;tt&amp;gt;VZFASTBOOT=yes&amp;lt;/tt&amp;gt;). If this was set prior to the machine’s crash (setting it after the machine boots will not have any effect until the vz service is restarted), it will start each ve as fast as possible, in serial, then go through each VE (serially), shutting it down, running a vzquota (disk usage) check, then bringing it back up. The benefit is that all VE’s are brought up quickly (within 15min or so depending on the #), the downside is that a customer watching closely will notice 2 outages – 1st the machine crash, 2nd their quota check (which will be a much shorter downtime- on the order of a few minutes). &lt;br /&gt;
&lt;br /&gt;
Where “fastboot” is not set to yes (i.e. on quar1), vz will start them consecutively, checking the quotas one at a time, and the 60th VE may not start until an hour or two later - this is not acceptable.&lt;br /&gt;
&lt;br /&gt;
The good news is, if you run vzctl start for a VE that is already started, you will simply get an error: &amp;lt;tt&amp;gt;VE is already started&amp;lt;/tt&amp;gt;.  Likewise, if you attempt to vzctl start a VE that is in the process of being started, you will get an error: &amp;lt;tt&amp;gt;unable to lock VE&amp;lt;/tt&amp;gt;.  So there is no danger in running scripts to start smaller sets of VEs.  If the system is not autostarting, there is no issue; even if it is, whenever the two conflict, one process (yours or the autostart) will lose and simply move on to the next VE.&lt;br /&gt;
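Because vzctl start tolerates already-running and locked VEs this way, a plain loop over VEIDs is safe. A minimal sketch of the continue-on-error pattern (using a stub vzctl shell function in place of the real binary, purely for illustration):&lt;br /&gt;

```shell
#!/bin/sh
# Stub standing in for the real vzctl binary (illustration only):
# pretend VE 200 is already started.
vzctl() {
  if [ "$2" = "200" ]; then
    echo "VE is already started"
    return 1
  fi
  echo "Starting VE $2 ..."
}

# Try to start each VE; failures ("VE is already started" /
# "unable to lock VE") are harmless, so just move on to the next one.
started=0
for veid in 100 200 300; do
  if vzctl start "$veid"; then
    started=$((started + 1))
  fi
done
echo "started $started of 3"
```

The real startvirt.pl does essentially this, several VEs at a time.&lt;br /&gt;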
&lt;br /&gt;
A script has been written to assist with VE starts: [[#startvirt.pl|startvirt.pl]], which starts 6 VEs at a time until there are none left.  If startvirt.pl is used on a system where “fastboot” was on, it will circumvent the fastboot for the VEs it starts (they will go through the complete quota check before starting), so it is not advisable when a system has crashed. When a system was booted cleanly and there&#039;s no need for vzquota checks, startvirt.pl is safe and advisable to run.&lt;br /&gt;
&lt;br /&gt;
=== Make sure all containers are running ===&lt;br /&gt;
You can quickly get a feel for how many VEs have started by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt4 log]# vs&lt;br /&gt;
VEID 16066 exist mounted running&lt;br /&gt;
VEID 16067 exist mounted running&lt;br /&gt;
VEID 4102 exist mounted running&lt;br /&gt;
VEID 4112 exist mounted running&lt;br /&gt;
VEID 4116 exist mounted running&lt;br /&gt;
VEID 4122 exist mounted running&lt;br /&gt;
VEID 4123 exist mounted running&lt;br /&gt;
VEID 4124 exist mounted running&lt;br /&gt;
VEID 4132 exist mounted running&lt;br /&gt;
VEID 4148 exist mounted running&lt;br /&gt;
VEID 4151 exist mounted running&lt;br /&gt;
VEID 4155 exist mounted running&lt;br /&gt;
VEID 42 exist mounted running&lt;br /&gt;
VEID 432 exist mounted running&lt;br /&gt;
VEID 434 exist mounted running&lt;br /&gt;
VEID 442 exist mounted running&lt;br /&gt;
VEID 450 exist mounted running&lt;br /&gt;
VEID 452 exist mounted running&lt;br /&gt;
VEID 453 exist mounted running&lt;br /&gt;
VEID 454 exist mounted running&lt;br /&gt;
VEID 462 exist mounted running&lt;br /&gt;
VEID 463 exist mounted running&lt;br /&gt;
VEID 464 exist mounted running&lt;br /&gt;
VEID 465 exist mounted running&lt;br /&gt;
VEID 477 exist mounted running&lt;br /&gt;
VEID 484 exist mounted running&lt;br /&gt;
VEID 486 exist mounted running&lt;br /&gt;
VEID 490 exist mounted running&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So to see how many ve’s have started:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep running | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And to see how many haven’t:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | grep down | wc -l&lt;br /&gt;
     0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And how many we should have running:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 root]# vs | wc -l&lt;br /&gt;
     39&amp;lt;/pre&amp;gt;&lt;br /&gt;
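The three counts above can also be collected in a single pass with awk; here it is run against a canned snippet of vs output, since vs itself only exists on the virts:&lt;br /&gt;

```shell
# Summarize running/down/total in one pass over (sample) `vs` output.
printf 'VEID 42 exist mounted running\nVEID 432 exist mounted running\nVEID 434 exist mounted down\n' |
awk '/running/ { r++ } /down/ { d++ } { t++ }
     END { printf "running=%d down=%d total=%d\n", r, d, t }' | tee vs_summary.txt
```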
&lt;br /&gt;
Another tool you can use to see which VEs have started, among other things, is [[#vzstat|vzstat]]. It gives you CPU, memory, and other stats for each VE and the overall system. It’s a good thing to watch as VEs are starting (note the VENum parameter, which tells you how many have started):&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;4:37pm, up 3 days,  5:31,  1 user, load average: 1.57, 1.68, 1.79&lt;br /&gt;
VENum 40, procs 1705: running 2, sleeping 1694, unint 0, zombie 9, stopped 0&lt;br /&gt;
CPU [ OK ]: VEs  57%, VE0   0%, user   8%, sys   7%, idle  85%, lat(ms) 412/2&lt;br /&gt;
Mem [ OK ]: total 6057MB, free 9MB/54MB (low/high), lat(ms) 0/0&lt;br /&gt;
Swap [ OK ]: tot 6142MB, free 4953MB, in 0.000MB/s, out 0.000MB/s&lt;br /&gt;
Net [ OK ]: tot: in  0.043MB/s  402pkt/s, out  0.382MB/s 4116pkt/s&lt;br /&gt;
Disks [ OK ]: in 0.002MB/s, out 0.000MB/s&lt;br /&gt;
&lt;br /&gt;
  VEID ST    %VM     %KM         PROC    CPU     SOCK FCNT MLAT IP&lt;br /&gt;
     1 OK 1.0/17  0.0/0.4    0/32/256 0.0/0.5 39/1256    0    9 69.55.227.152&lt;br /&gt;
    21 OK 1.3/39  0.1/0.2    0/46/410 0.2/2.8 23/1860    0    6 69.55.239.60&lt;br /&gt;
   133 OK 3.1/39  0.1/0.3    1/34/410 6.3/2.8 98/1860    0    0 69.55.227.147&lt;br /&gt;
   263 OK 2.3/39  0.1/0.2    0/56/410 0.3/2.8 34/1860    0    1 69.55.237.74&lt;br /&gt;
   456 OK  17/39  0.1/0.2   0/100/410 0.1/2.8 48/1860    0   11 69.55.236.65&lt;br /&gt;
   476 OK 0.6/39  0.0/0.2    0/33/410 0.1/2.8 96/1860    0   10 69.55.227.151&lt;br /&gt;
   524 OK 1.8/39  0.1/0.2    0/33/410 0.0/2.8 28/1860    0    0 69.55.227.153&lt;br /&gt;
   594 OK 3.1/39  0.1/0.2    0/45/410 0.0/2.8 87/1860    0    1 69.55.239.40&lt;br /&gt;
   670 OK 7.7/39  0.2/0.3    0/98/410 0.0/2.8 64/1860    0  216 69.55.225.136&lt;br /&gt;
   691 OK 2.0/39  0.1/0.2    0/31/410 0.0/0.7 25/1860    0    1 69.55.234.96&lt;br /&gt;
   744 OK 0.1/17  0.0/0.5    0/10/410 0.0/0.7  7/1860    0    6 69.55.224.253&lt;br /&gt;
   755 OK 1.1/39  0.0/0.2    0/27/410 0.0/2.8 33/1860    0    0 192.168.1.4&lt;br /&gt;
   835 OK 1.1/39  0.0/0.2    0/19/410 0.0/2.8  5/1860    0    0 69.55.227.134&lt;br /&gt;
   856 OK 0.3/39  0.0/0.2    0/13/410 0.0/2.8 16/1860    0    0 69.55.227.137&lt;br /&gt;
   936 OK 3.2/52  0.2/0.4    0/75/410 0.2/0.7 69/1910    0    8 69.55.224.181&lt;br /&gt;
  1020 OK 3.9/39  0.1/0.2    0/60/410 0.1/0.7 55/1860    0    8 69.55.227.52&lt;br /&gt;
  1027 OK 0.3/39  0.0/0.2    0/14/410 0.0/2.8 17/1860    0    0 69.55.227.83&lt;br /&gt;
  1029 OK 1.9/39  0.1/0.2    0/48/410 0.2/2.8 25/1860    0    5 69.55.227.85&lt;br /&gt;
  1032 OK  12/39  0.1/0.4    0/80/410 0.0/2.8 41/1860    0    8 69.55.227.90&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you are all done, run vs one more time to make sure all the VEs really did get started.&lt;br /&gt;
&lt;br /&gt;
Note the time all VEs are back up, enter it into the crash log entry, and save it.&lt;br /&gt;
&lt;br /&gt;
Occasionally, a VE will not start automatically. The most common reason is that the VE was at its disk limit before the crash and will not start because it’s over the limit. To overcome this, set the disk space to the current usage level (the system reports this when the start fails), start the VE, then set the disk space back to the prior level. Lastly, contact the customer to let them know they’re out of disk (or allocate more disk if they&#039;re entitled to more).&lt;br /&gt;
&lt;br /&gt;
== Hitting performance barriers and fixing them ==&lt;br /&gt;
&lt;br /&gt;
There are multiple modes Virtuozzo offers for allocating resources to a VE. We use two: SLM and UBC parameters.&lt;br /&gt;
On our 4.x systems we use SLM everywhere; it’s simpler to manage and understand. There are a few systems on virt19/18 that may also use SLM. Everything else uses UBC. &lt;br /&gt;
You can tell an SLM VE by:&lt;br /&gt;
&lt;br /&gt;
 SLMMODE=&amp;quot;all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in their conf file. &lt;br /&gt;
&lt;br /&gt;
TODO: detail SLM modes and parameters.&lt;br /&gt;
&lt;br /&gt;
If someone is in SLM mode and they hit memory resource limits, they simply need to upgrade to more memory.&lt;br /&gt;
&lt;br /&gt;
The following applies to everyone else (UBC).&lt;br /&gt;
&lt;br /&gt;
Customers will often email to say they are getting out-of-memory errors - a common one is &amp;quot;cannot fork&amp;quot;. Basically, any time you see something odd like this, it means they are hitting one of the limits set in their conf file.&lt;br /&gt;
&lt;br /&gt;
The conf file, however, only shows their limits - how do we know what they are currently at?&lt;br /&gt;
&lt;br /&gt;
The answer is /proc/user_beancounters - this file contains the current status (and peaks) of their performance settings, and also counts how many times they have hit each barrier.  The output looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;764: kmemsize         384113     898185    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          0        322        322          0&lt;br /&gt;
     privvmpages        1292       7108     610000     615000          0&lt;br /&gt;
     shmpages            270        528      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc               8         23        410        415          0&lt;br /&gt;
     physpages            48       5624          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages        641       6389      13019 2147483647          0&lt;br /&gt;
     numtcpsock            3         21       1210       1215          0&lt;br /&gt;
     numflock              1          3        107        117          0&lt;br /&gt;
     numpty                0          2         19         19          0&lt;br /&gt;
     numsiginfo            0          4        274        274          0&lt;br /&gt;
     tcpsndbuf             0      80928    1800000    1900000          0 &lt;br /&gt;
     tcprcvbuf             0     108976    1800000    1900000          0&lt;br /&gt;
     othersockbuf       2224      37568     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4272     200000     200000          0&lt;br /&gt;
     numothersock          3          9        650        660          0&lt;br /&gt;
     dcachesize        53922     100320     786432     818029          0&lt;br /&gt;
     numfile             161        382       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first column is the name of the counter in question - the same names we saw in the system’s conf file.  The second column is the _current_ value of that counter, the third column is the maximum that counter has ever risen to, the fourth column is the soft limit, and the fifth column is the hard limit (the same numbers as in the system’s conf file).&lt;br /&gt;
&lt;br /&gt;
The sixth number is the failcount - how many times the current usage has risen to hit the barrier.  It increases as soon as the current usage hits the soft limit.&lt;br /&gt;
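Rows with a nonzero failcount are the ones to look at; filtering on that sixth column is a one-liner (shown against a canned two-row sample; on a virt you would feed it the VE&#039;s beancounters block instead):&lt;br /&gt;

```shell
# Print the counter name and failcnt for any row whose last field is nonzero.
printf '     numproc         8    23   410   415    0\n     numtcpsock      3    21  1210  1215   12\n' |
awk '$NF + 0 != 0 { print $1, "failcnt=" $NF }' | tee failed_counters.txt
```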
&lt;br /&gt;
The problem with /proc/user_beancounters is that it actually contains that set of data for every running VE - so you can&#039;t just cat /proc/user_beancounters - it is too long and you get info for every other running system.&lt;br /&gt;
&lt;br /&gt;
You can vzctl enter the system and run:&lt;br /&gt;
&lt;br /&gt;
 vzctl enter 9999&lt;br /&gt;
 cat /proc/user_beancounters&lt;br /&gt;
&lt;br /&gt;
inside their system, and you will see the stats for just their system, but entering a system every time you want to look is cumbersome.&lt;br /&gt;
&lt;br /&gt;
So, I wrote a simple script called &amp;quot;vzs&amp;quot; which greps for the VEID and prints the next 20 or so lines (however many lines the block has, I forget) after it.  For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzs 765:&lt;br /&gt;
765: kmemsize        2007936    2562780    8100000    8200000          0&lt;br /&gt;
     lockedpages           0          8        322        322          0&lt;br /&gt;
     privvmpages       26925      71126     610000     615000          0&lt;br /&gt;
     shmpages          16654      16750      33000      34500          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numproc              41         57        410        415          0&lt;br /&gt;
     physpages          1794      49160          0 2147483647          0&lt;br /&gt;
     vmguarpages           0          0      13019 2147483647          0&lt;br /&gt;
     oomguarpages       4780      51270      13019 2147483647          0&lt;br /&gt;
     numtcpsock           23         37       1210       1215          0&lt;br /&gt;
     numflock             17         39        107        117          0&lt;br /&gt;
     numpty                1          3         19         19          0&lt;br /&gt;
     numsiginfo            0          6        274        274          0&lt;br /&gt;
     tcpsndbuf         22240     333600    1800000    1900000          0&lt;br /&gt;
     tcprcvbuf             0     222656    1800000    1900000          0&lt;br /&gt;
     othersockbuf     104528     414944     900000     950000          0&lt;br /&gt;
     dgramrcvbuf           0       4448     200000     200000          0&lt;br /&gt;
     numothersock         73        105        650        660          0&lt;br /&gt;
     dcachesize       247038     309111     786432     818029          0&lt;br /&gt;
     numfile             904       1231       7500       7600          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     dummy                 0          0          0          0          0&lt;br /&gt;
     numiptent             4          4        155        155          0&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That showed us just the portion of /proc/user_beancounters for system 765.&lt;br /&gt;
&lt;br /&gt;
When you run the vzs command, always add a : after the VEID.&lt;br /&gt;
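If vzs is ever missing, an equivalent is easy to reconstruct with awk: match the VEID line and print through the final numiptent row, which closes each beancounters block. A sketch against a canned two-VE excerpt (on a virt you would read /proc/user_beancounters instead of the printf):&lt;br /&gt;

```shell
# Print just the beancounters block for VE 765 from sample data.
printf '764: kmemsize 1 2 3 4 0\n     numiptent 4 4 155 155 0\n765: kmemsize 9 9 9 9 0\n     numfile 1 2 3 4 0\n     numiptent 4 4 155 155 0\n' |
awk '$1 == "765:" { show = 1 }
     show { print; if ($1 == "numiptent") exit }' | tee vzs_765.txt
```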
&lt;br /&gt;
So, if a customer complains about out-of-memory errors, no more files, no more ptys, or just has an unspecific complaint about processes dying, the very first thing to do is check their beancounters with vzs.  Usually you will spot an item that has a high failcount and needs to be raised.&lt;br /&gt;
&lt;br /&gt;
At that point you could simply up the counter with `vzctl set`.  Generally pick a number 10-20% higher than the old one, and make the hard limit slightly larger than the soft limit. However, our systems now come in several levels, and those levels have different memory allocations. If someone is complaining about something other than a memory limit (pty, numiptent, numflock), it’s generally safe to increase it, at least to the level in the /vzconf/4unlimited file on the newest virt. If someone is hitting a memory limit, first make sure they are given what they deserve:&lt;br /&gt;
&lt;br /&gt;
(refer to mgmt -&amp;gt; payments -&amp;gt; packages)&lt;br /&gt;
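A quick way to apply the 10-20% rule consistently is to compute the new soft/hard pair and print the vzctl set invocation before running it. A sketch (the 15% factor, the veid, and the starting limit are arbitrary examples, not values from any real system):&lt;br /&gt;

```shell
veid=765
soft=1800000                        # current tcpsndbuf soft limit (barrier)
new_soft=$((soft * 115 / 100))      # bump by 15%
new_hard=$((new_soft + 100000))     # hard limit slightly above the soft limit
# Print the command for review instead of executing it:
echo "vzctl set $veid --tcpsndbuf ${new_soft}:${new_hard} --save"
```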
&lt;br /&gt;
To set those levels, you use the [[#setmem|setmem]] command. &lt;br /&gt;
&lt;br /&gt;
The alternate (DEPRECATED) method would be to use one of these commands:&lt;br /&gt;
256 &amp;lt;veid&amp;gt;&lt;br /&gt;
300 &amp;lt;veid&amp;gt;&lt;br /&gt;
384 &amp;lt;veid&amp;gt;&lt;br /&gt;
512 &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the levels were not right (run vzs &amp;lt;veid&amp;gt; before and after to see the effect), tell the customer they’ve been adjusted and be done with it. If the levels were right, tell the customer they must upgrade to a higher package, tell them how to see their level (control panel), and that they can reboot their system to escape this lockup condition.&lt;br /&gt;
&lt;br /&gt;
Customers can also complain that their site is totally unreachable or down ... if the underlying machine is up and all seems well, you may notice in the beancounters that network-specific counters are failing - such as numtcpsock, tcpsndbuf or tcprcvbuf.  This keeps them from talking on the network and makes it seem like their system is down.  Again, just raise the limits and things should be fine.&lt;br /&gt;
&lt;br /&gt;
On virts 1-4, you should first look at the default settings for that item on a later virt, such as virt 8 - we have increased the defaults a lot since the early machines.  So, if you are going to up a counter on virt2, instead of upping it by 10-20%, instead up it to the new default that you see on virt8.&lt;br /&gt;
&lt;br /&gt;
== Moving a VE to another virt (migrate/migrateonline) ==&lt;br /&gt;
&lt;br /&gt;
This will take a while to complete - and it is best to do this at night when the load is light on both machines.&lt;br /&gt;
&lt;br /&gt;
There are different methods for this, depending on which version of Virtuozzo is installed on the src and dst virts. &lt;br /&gt;
To check which version is running: &lt;br /&gt;
 [root@virt12 private]# cat /etc/virtuozzo-release&lt;br /&gt;
 Virtuozzo release 2.6.0&lt;br /&gt;
&lt;br /&gt;
Ok, let&#039;s say that the VE is 1212, and vital stats are:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1212&lt;br /&gt;
VE_ROOT=&amp;quot;/vz1/root/1212&amp;quot;&lt;br /&gt;
VE_PRIVATE=&amp;quot;/vz1/private/1212&amp;quot;&lt;br /&gt;
OSTEMPLATE=&amp;quot;fedora-core-2/20040903&amp;quot;&lt;br /&gt;
IP_ADDRESS=&amp;quot;69.55.229.84&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;devel-fc2/20040903 php-fc2/20040813 mysql-fc2/20040812 postgresql-fc2/20040813 mod_perl-fc2/20040812 mod_ssl-fc2/20040811 jre-fc2/20040823 jdk-fc2/20040823 mailman-fc2/20040823 analog-fc2/20040824 proftpd-fc2/20040818 tomcat-fc2/20040823 usermin-fc2/20040909 webmin-fc2/20040909 uw-imap-fc2/20040830 phpBB-fc2/20040831 spamassassin-fc2/20040910 PostNuke-fc2/20040824 sl-webalizer-fc2/20040818&amp;quot;&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# vzctl exec 1212 df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  405M  3.7G  10% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From this you can see that he’s using (and will minimally need free on the dst server) ~400MB, and that he’s running on a Fedora Core 2 template, version 20040903, with a bunch of other templates installed. It is &#039;&#039;&#039;vital&#039;&#039;&#039; that &#039;&#039;&#039;all&#039;&#039;&#039; of these templates exist on the dst system. To confirm, run on the dst system:&lt;br /&gt;
&lt;br /&gt;
For &amp;lt; 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt14 private]# vzpkgls | grep fc2&lt;br /&gt;
devel-fc2 20040903&lt;br /&gt;
PostNuke-fc2 20040824&lt;br /&gt;
analog-fc2 20040824&lt;br /&gt;
awstats-fc2 20040824&lt;br /&gt;
bbClone-fc2 20040824&lt;br /&gt;
jdk-fc2 20040823&lt;br /&gt;
jre-fc2 20040823&lt;br /&gt;
mailman-fc2 20040823&lt;br /&gt;
mod_frontpage-fc2 20040816&lt;br /&gt;
mod_perl-fc2 20040812&lt;br /&gt;
mod_ssl-fc2 20040811&lt;br /&gt;
mysql-fc2 20040812&lt;br /&gt;
openwebmail-fc2 20040817&lt;br /&gt;
php-fc2 20040813&lt;br /&gt;
phpBB-fc2 20040831&lt;br /&gt;
postgresql-fc2 20040813&lt;br /&gt;
proftpd-fc2 20040818&lt;br /&gt;
sl-webalizer-fc2 20040818&lt;br /&gt;
spamassassin-fc2 20040910&lt;br /&gt;
tomcat-fc2 20040823&lt;br /&gt;
usermin-fc2 20040909&lt;br /&gt;
uw-imap-fc2 20040830&lt;br /&gt;
webmin-fc2 20040909&lt;br /&gt;
[root@virt14 private]# vzpkgls | grep fedora&lt;br /&gt;
fedora-core-1 20040121 20040818&lt;br /&gt;
fedora-core-devel-1 20040121 20040818&lt;br /&gt;
fedora-core-2 20040903&lt;br /&gt;
[root@virt14 private]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For these older systems, you can simply match up the date on the template. &lt;br /&gt;
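Rather than matching the dates up by eye, you can capture the vzpkgls output from both sides and compare with comm (which needs sorted input). A sketch using two canned lists standing in for the src and dst output:&lt;br /&gt;

```shell
# Canned "vzpkgls | grep fc2" output from a hypothetical src and dst.
printf 'devel-fc2 20040903\nmysql-fc2 20040812\nphp-fc2 20040813\n' | sort -o src_templates.txt
printf 'devel-fc2 20040903\nphp-fc2 20040813\n' | sort -o dst_templates.txt
# Templates (name+date) present on the src but missing on the dst:
comm -23 src_templates.txt dst_templates.txt | tee missing_templates.txt
```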
&lt;br /&gt;
For &amp;gt;= 3.0:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt19 /vz2/private]# vzpkg list&lt;br /&gt;
centos-5-x86                    2008-01-07 22:05:57&lt;br /&gt;
centos-5-x86    devel&lt;br /&gt;
centos-5-x86    jre&lt;br /&gt;
centos-5-x86    jsdk&lt;br /&gt;
centos-5-x86    mod_perl&lt;br /&gt;
centos-5-x86    mod_ssl&lt;br /&gt;
centos-5-x86    mysql&lt;br /&gt;
centos-5-x86    php&lt;br /&gt;
centos-5-x86    plesk9&lt;br /&gt;
centos-5-x86    plesk9-antivirus&lt;br /&gt;
centos-5-x86    plesk9-api&lt;br /&gt;
centos-5-x86    plesk9-atmail&lt;br /&gt;
centos-5-x86    plesk9-backup&lt;br /&gt;
centos-5-x86    plesk9-horde&lt;br /&gt;
centos-5-x86    plesk9-mailman&lt;br /&gt;
centos-5-x86    plesk9-mod-bw&lt;br /&gt;
centos-5-x86    plesk9-postfix&lt;br /&gt;
centos-5-x86    plesk9-ppwse&lt;br /&gt;
centos-5-x86    plesk9-psa-firewall&lt;br /&gt;
centos-5-x86    plesk9-psa-vpn&lt;br /&gt;
centos-5-x86    plesk9-psa-fileserver&lt;br /&gt;
centos-5-x86    plesk9-qmail&lt;br /&gt;
centos-5-x86    plesk9-sb-publish&lt;br /&gt;
centos-5-x86    plesk9-vault&lt;br /&gt;
centos-5-x86    plesk9-vault-most-popular&lt;br /&gt;
centos-5-x86    plesk9-watchdog&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On these newer systems, it&#039;s difficult to tell whether the template on the dst exactly matches the src. Just because a centos-5-x86 is listed on both servers doesn&#039;t mean all the same packages are on the dst. To know for sure, you must perform a sample rsync:&lt;br /&gt;
&lt;br /&gt;
 rsync -avn /vz/template/centos/5/x86/ root@10.1.4.61:/vz/template/centos/5/x86/&lt;br /&gt;
&lt;br /&gt;
If you see a ton of output from the dry run, then clearly there are some differences. You may opt to let the rsync complete (without dry-run mode); the only downside is that you&#039;ve now used more space on the dst, and the centos template will be a mix of old and new data - difficult if not impossible to undo (if someday we wanted to reclaim the space).&lt;br /&gt;
&lt;br /&gt;
If you choose to merge templates, you should closely inspect the dry run output. You should also take care to exclude anything in the /config directory. For example:&lt;br /&gt;
&lt;br /&gt;
 rsync -av -e ssh --stats --exclude=x86/config  /vz/template/ubuntu/10.04/ root@10.1.4.62:/vz/template/ubuntu/10.04/&lt;br /&gt;
&lt;br /&gt;
Which will avoid this directory and contents:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt11 /vz2/private]# ls /vz/template/ubuntu/10.04/x86/config*&lt;br /&gt;
app  os&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is important to avoid, since the config may differ on the destination and we are really only interested in making sure the packages are there, not in overwriting a newer config with an older one.&lt;br /&gt;
&lt;br /&gt;
If the dst system was missing a template, you have 2 choices: &lt;br /&gt;
# put the missing template on the dst system. 2 choices here: &lt;br /&gt;
## Install the template from rpm (found under backup2: /mnt/data4/vzrpms/distro/) or &lt;br /&gt;
## rsync over the template (found under /vz/template) - see above&lt;br /&gt;
# put the ve on a system which has all the proper templates&lt;br /&gt;
&lt;br /&gt;
=== pre-seeding a migration ===&lt;br /&gt;
&lt;br /&gt;
When migrating a customer (or many), depending on how much data there is to transfer, it can take some time. Further, it can be difficult to gauge when a migration will complete or how long it will take. To speed up the process and get a better idea of how long it will take, you can pre-transfer a customer&#039;s data to the destination server. If done correctly, vzmigrate will see the pre-transferred data and pick up where you left off, with much less to transfer (just changed/new files). &lt;br /&gt;
&lt;br /&gt;
We believe vzmigrate uses rsync for its transfer. Therefore not only can you use rsync to do a pre-seed, you can also run rsync to see what is causing a repeatedly-failing vzmigrate to fail. &lt;br /&gt;
&lt;br /&gt;
There&#039;s no magic to a pre-seed; you just need to make sure it&#039;s named correctly.&lt;br /&gt;
&lt;br /&gt;
Given:&lt;br /&gt;
&lt;br /&gt;
source: /vz1/private/1234&lt;br /&gt;
&lt;br /&gt;
and you want to migrate to /vz2 on the target system, your rsync would look like:&lt;br /&gt;
&lt;br /&gt;
 rsync -av /vz1/private/1234/ root@x.x.x.x:/vz2/private/1234.migrated/&lt;br /&gt;
&lt;br /&gt;
After that successful rsync, the ensuing migrateonline (or migrate) will take much less time to complete, depending on the number of files to be analyzed and the number of changed files. In any case, it&#039;ll be much faster than starting the migration from scratch.&lt;br /&gt;
&lt;br /&gt;
Further, as we discuss elsewhere in this topic, a failed migration can be moved from &amp;lt;tt&amp;gt;/vz/private/1234&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;/vz/private/1234.migrated&amp;lt;/tt&amp;gt; on the destination if you want to restart it. This should &#039;&#039;&#039;only&#039;&#039;&#039; be done if the migration failed and the CT is not running on the destination HN.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline instructions: src &amp;gt;=3.x -&amp;gt; dst&amp;gt;=3.x ===&lt;br /&gt;
&lt;br /&gt;
A script called [[#migrateonline|migrateonline]] was written to handle this kind of move. It is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; - vzmigrate is a util to seamlessly (no reboot of the VE necessary) move a VE from one host to another. This wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the VE’s IP(s) on the src system were not properly removed from the arp/route tables, causing problems when the VE was started up on the dst system; migrateonline mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrateonline emails VE owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the VE will be moved to the same private/root location as on the src virt. Note: &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; is equivalent to &amp;lt;tt&amp;gt;migrateonline&amp;lt;/tt&amp;gt;, but will migrate a VE AND restart it in the process.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrateonline&lt;br /&gt;
usage: /usr/local/sbin/migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrateonline 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine.&lt;br /&gt;
&lt;br /&gt;
If they had backups, use the mvbackups command to move their backups to the new server:&lt;br /&gt;
&lt;br /&gt;
 mvbackups 1212 virt14 vz&lt;br /&gt;
&lt;br /&gt;
Rename the ve&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/migrated-1212&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/old-1212-migrated-20120404-noarchive&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems in mgmt to reflect the new path and server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If migrateonline does not work, you can try again using migrate - this will result in a brief reboot for the VE.&lt;br /&gt;
Before you try again, make sure of a few things:&lt;br /&gt;
&lt;br /&gt;
Depending on where the migration died, there may be partial data on the dst system in one of two places:&lt;br /&gt;
(given the example above)&lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212&lt;br /&gt;
&lt;br /&gt;
or &lt;br /&gt;
&lt;br /&gt;
 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
Before you run migrate again, you&#039;ll want to rename so that all the data is in 1212.migrated:&lt;br /&gt;
&lt;br /&gt;
 mv /vz/private/1212 /vz/private/1212.migrated&lt;br /&gt;
&lt;br /&gt;
this way, it will pick up where it left off and transfer only new files.&lt;br /&gt;
&lt;br /&gt;
Likewise, if you want to speed up a migration, you can pre-seed the dst as follows:&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# rsync -avSH /vz/private/1212/ root@10.1.4.64:/vz/private/1212.migrated/&lt;br /&gt;
&lt;br /&gt;
then when you run migrate or migrateonline, it will only need to move the changed files - the migration will complete quickly.&lt;br /&gt;
&lt;br /&gt;
=== migrateonline/migrate failures (migrate manually) ===&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say for whatever reason the migration fails. If it fails with [[#migrateonline|migrateonline]], you should try [[#migrate|migrate]] (which will reboot the customer&#039;s VE, so notify them ahead of time).&lt;br /&gt;
&lt;br /&gt;
You may want to run a [[#pre-seeding_a_migration|pre-seed]] rsync to see if you can find the problem. On older virts, we&#039;ve seen this problem caused by a large logfile (which you can find and encourage the customer to remove/compress):&lt;br /&gt;
 for f in `find / -size +1048576k`; do ls -lh $f; done&lt;br /&gt;
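The same hunt can be done with find&#039;s -exec, which avoids the word-splitting problems of the backtick loop above. Demonstrated here in a scratch directory with a sparse file just over 1GiB, so no real filesystem is scanned:&lt;br /&gt;

```shell
# Scratch directory with one file over the 1GiB (+1048576k) threshold.
d=$(mktemp -d)
truncate -s 1073741825 "$d/big.log"    # sparse: 1 GiB + 1 byte, no disk used
truncate -s 1024 "$d/small.log"
# List files larger than 1048576 KiB (1 GiB):
find "$d" -type f -size +1048576k -exec ls -lh {} \; | tee big_files.txt
```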
&lt;br /&gt;
You may also see migration failing due to quota issues.&lt;br /&gt;
&lt;br /&gt;
You can try to resolve this by copying any quota file into the file you need:&lt;br /&gt;
&lt;br /&gt;
 cp /var/vzquota/quota.1 /var/vzquota/quota.xxx&lt;br /&gt;
&lt;br /&gt;
If it complains about quota running, you should then be able to stop it:&lt;br /&gt;
&lt;br /&gt;
 vzquota off xxxx&lt;br /&gt;
&lt;br /&gt;
If all else fails, migrate to a new VEID - i.e. 1234 becomes 12341.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the rsync or [[#migrate|migrate]] fails, you can always move someone manually:&lt;br /&gt;
&lt;br /&gt;
1. stop ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v stop 1234&lt;br /&gt;
&lt;br /&gt;
2. copy over data&amp;lt;br&amp;gt;&lt;br /&gt;
 rsync -avSH /vz/private/1234/ root@1.1.1.1:/vzX/private/1234/&lt;br /&gt;
&lt;br /&gt;
NOTE: if you&#039;ve previously seeded the data (run rsync while the VE was up/running) and this is a subsequent rsync, make sure the last rsync you do (while the VE is not running) has the --delete option.&lt;br /&gt;
&lt;br /&gt;
3. copy over conf&amp;lt;br&amp;gt;&lt;br /&gt;
 scp /vzconf/1234.conf root@1.1.1.1:/vzconf&lt;br /&gt;
&lt;br /&gt;
4. on dst, edit the conf to reflect the right vzX dir&amp;lt;br&amp;gt;&lt;br /&gt;
 vi /vzconf/1234.conf&lt;br /&gt;
&lt;br /&gt;
5. on src remove the IPs&amp;lt;br&amp;gt;&lt;br /&gt;
 ipdel 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
6. on dst add IPs &amp;lt;br&amp;gt;&lt;br /&gt;
 ipadd 1234 2.2.2.2 3.3.3.3&lt;br /&gt;
&lt;br /&gt;
7. on dst, start ve: &amp;lt;br&amp;gt;&lt;br /&gt;
 v start 1234&lt;br /&gt;
&lt;br /&gt;
8. cancel, then archive the VE on the src per the instructions above.&lt;br /&gt;
&lt;br /&gt;
=== migrate src=2.6.0 -&amp;gt; dst&amp;gt;=2.6.0, or mass-migration with customer notify ===&lt;br /&gt;
&lt;br /&gt;
A script called &amp;lt;tt&amp;gt;migrate&amp;lt;/tt&amp;gt; was written to handle this kind of move. It is basically a wrapper for vzmigrate, a utility that seamlessly moves a ve from one host to another. The wrapper was originally written because Virtuozzo version 2.6.0 has a bug where the ve’s IP(s) were not properly removed from the ARP/route tables on the src system, causing problems when the ve was started up on the dst system; migrate mitigates that. Since it makes multiple ssh connections to the dst virt, it’s a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrate emails ve owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the ve will be moved to the same private/root location it had on the src virt. Note: migrateonline is equivalent to migrate, but will migrate a ve from one 2.6 &#039;&#039;&#039;kernel&#039;&#039;&#039; machine to another 2.6 kernel machine without restarting the ve.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# migrate&lt;br /&gt;
usage: /usr/local/sbin/migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
[root@virt12 sbin]# migrate 10.1.4.64 1212 vz&lt;br /&gt;
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005&lt;br /&gt;
Turning off offline_management&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
migrating with no start on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
clearing the arp cache&lt;br /&gt;
now going to 10.1.4.64 and clear cache and starting&lt;br /&gt;
starting it&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
finished migrating 1212 at Sat Mar 26 22:52:01 PST 2005&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which migrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
 [root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
&lt;br /&gt;
On 2.6.1 you’ll also have to move the private area:&lt;br /&gt;
 [root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, [[#cancelve|cancelve]] would offer to remove them. You want to say &#039;&#039;&#039;no&#039;&#039;&#039; to this option; letting it remove them would mean the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (noting the path changes, as in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems, as in the example above, will create actual files and consume far more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
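The hardlink point can be checked with link counts: a same-filesystem mv is just a rename, so the inode and its data stay shared. A quick local illustration (temp dirs stand in for the dated backup trees):

```shell
# Two backup generations hardlink the same file; moving one name within the
# same filesystem keeps the shared inode, so no extra space is consumed.
d=$(mktemp -d)
mkdir "$d/day0" "$d/day1" "$d/newhome"
echo payload > "$d/day0/file"
ln "$d/day0/file" "$d/day1/file"      # second name, same inode, no extra data
mv "$d/day1/file" "$d/newhome/file"   # same fs: link count stays at 2
```

A cross-filesystem mv, by contrast, must copy the bytes, which is why the per-directory loop is only safe on the same drive.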
&lt;br /&gt;
To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, then update the system, the template (it should be pre-selected as the same), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
=== vzmigrate: src=2.6.1 -&amp;gt; dst&amp;gt;=2.6.0 ===&lt;br /&gt;
&lt;br /&gt;
This version of vzmigrate works properly with regard to handling IPs. It will not notify ve owners of moves as in the above example; other than that it’s essentially the same.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]#  vzmigrate 10.1.4.64 -r no 1212:1212:/vz/private/1212:/vz/root/1212&lt;br /&gt;
migrating on 10.1.4.64&lt;br /&gt;
Connection to destination HN (10.1.4.64) is successfully established&lt;br /&gt;
Moving/copying VE#1212 -&amp;gt; VE#1212, [/vz/private/1212], [/vz/root/1212] ...&lt;br /&gt;
Syncing private area &#039;/vz1/private/1212&#039;&lt;br /&gt;
- 100% |*************************************************|&lt;br /&gt;
done&lt;br /&gt;
Successfully completed&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es) to pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding port redirection to VE(1): 4643 8443&lt;br /&gt;
Adding IP address(es): 69.55.229.84&lt;br /&gt;
Hostname for VE set: fourmajor.com&lt;br /&gt;
File resolv.conf was modified&lt;br /&gt;
VE start in progress...&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail).&lt;br /&gt;
Cancel the ve (first we have to rename things which vzmigrate changed so cancelve will find them):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf&lt;br /&gt;
[root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212&lt;br /&gt;
&lt;br /&gt;
[root@virt12 sbin]# cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, &amp;lt;tt&amp;gt;cancelve&amp;lt;/tt&amp;gt; would offer to remove them. You want to say no to this option; letting it remove them would mean the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (noting the path changes, as in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems, as in the example above, will create actual files and consume far more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, then update the system, the template (it should be pre-selected as the same), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
=== src=2.5.x ===&lt;br /&gt;
&lt;br /&gt;
First, go to the private dir:&lt;br /&gt;
&lt;br /&gt;
 cd /vz1/private/&lt;br /&gt;
&lt;br /&gt;
Stop the VE - make sure it stops totally cleanly.&lt;br /&gt;
 &lt;br /&gt;
 vzctl stop 1212&lt;br /&gt;
&lt;br /&gt;
Then you’d use vemove, a script written to copy over the config, create tarballs of the ve’s data on the destination virt, and cancel the ve on the source system (in this example we’re going to put a ve that was in /vz1/private on the src virt into /vz/private on the dst virt):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vemove&lt;br /&gt;
ERROR: Usage: vemove veid target_ip target_path_dir&lt;br /&gt;
[root@virt12 sbin]# vemove 1212 10.1.4.64 /vz/private/1212&lt;br /&gt;
tar cfpP - 1212 --ignore-failed-read | (ssh -2 -c arcfour 10.1.4.64 &amp;quot;split - -b 1024m /vz/private/1212.tar&amp;quot; )&lt;br /&gt;
scp /vzconf/1212.conf 10.1.4.64:/vzconf&lt;br /&gt;
cancelve 1212&lt;br /&gt;
v stop 1212&lt;br /&gt;
v set 1212 --offline_management=no --save&lt;br /&gt;
Delete port redirection&lt;br /&gt;
Deleting IP address(es) from pool: 69.55.229.84&lt;br /&gt;
Saved parameters for VE 1212&lt;br /&gt;
mv /vzconf/1212.conf /vzconf/deprecated-1212&lt;br /&gt;
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414&lt;br /&gt;
don&#039;t forget to remove firewall rules and domains!&lt;br /&gt;
[root@virt12 sbin]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: if the system had backups, cancelve would offer to remove them. You want to say no to this option; letting it remove them would mean the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (noting the path changes, as in this example) from backup.config on the src virt to backup.config on the dst virt.&lt;br /&gt;
Then go to backup2 and move the dirs. So you’d do something like this:&lt;br /&gt;
 mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/&lt;br /&gt;
&lt;br /&gt;
We don’t bother with the other dirs since there’s no harm in leaving them and eventually they’ll drop out. Besides, moving hardlinked files across filesystems, as in the example above, will create actual files and consume far more space on the target drive. &lt;br /&gt;
If moving to the same drive, you can safely preserve hardlinks and move all files with:&lt;br /&gt;
 sh&lt;br /&gt;
 for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212  /mnt/data1/virt14/$f/vz/private/; done&lt;br /&gt;
&lt;br /&gt;
When you are done, go to /vz/private on the dst virt; you will have files like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;1212.taraa&lt;br /&gt;
1212.tarab&lt;br /&gt;
1212.tarac&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each one 1024m (or less, for the last one) in size.&lt;br /&gt;
&lt;br /&gt;
Then, from /vz/private on the dst server, run:&lt;br /&gt;
&lt;br /&gt;
 cat 1212.tar?? | tar xpPBf -&lt;br /&gt;
&lt;br /&gt;
and after 20 mins or so it will be totally untarred.  Now since the conf&lt;br /&gt;
file is already there, you can go ahead and start the system.&lt;br /&gt;
&lt;br /&gt;
 vzctl start 1212&lt;br /&gt;
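The split-tarball round trip that vemove and the `cat ... | tar x` step rely on can be exercised locally with standard tar and tiny chunks (real VE data must use the Virtuozzo tar, per the NOTE in this section; the paths here are throwaway):

```shell
# Pack a directory into fixed-size pieces (1k here, 1024m in production),
# then reassemble with cat and extract, mirroring vemove's pipeline.
work=$(mktemp -d)
mkdir -p "$work/1212/etc"
echo hello > "$work/1212/etc/motd"
cd "$work"
tar cf - 1212 | split -b 1k - 1212.tar   # yields 1212.taraa (.tarab, ... for bigger trees)
mkdir restore && cd restore
cat ../1212.tar?? | tar xf -             # pieces concatenate back into one valid tar
```

Because split writes suffixes in sorted order (aa, ab, ac...), the `1212.tar??` glob feeds cat the pieces in exactly the order split produced them.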
&lt;br /&gt;
Update the customer’s systems by clicking the “move” link on the moved system, then update the system, the template (it should be pre-selected as the same), and the shutdown date.&lt;br /&gt;
&lt;br /&gt;
NOTE: you MUST tar the system up using the virtuozzo version of tar that&lt;br /&gt;
is on all the virt systems, and further you MUST untar the tarball with&lt;br /&gt;
the virtuozzo tar, using these options:  `&amp;lt;tt&amp;gt;tar xpPBf -&amp;lt;/tt&amp;gt;`&lt;br /&gt;
&lt;br /&gt;
If you tar up an entire VE and move it to a non-virtuozzo machine, that is&lt;br /&gt;
ok, and you can untar it there with normal tar commands, but do not untar&lt;br /&gt;
it and then repack it with a normal tar and expect it to work - you need&lt;br /&gt;
to use virtuozzo tar commands on virtuozzo tarballs to make it work.&lt;br /&gt;
&lt;br /&gt;
The backups are sort of an exception, since we are just (usually)&lt;br /&gt;
restoring user data that was created after we gave them the system, and&lt;br /&gt;
therefore has nothing to do with magic symlinks or vz-rpms, etc.&lt;br /&gt;
&lt;br /&gt;
== Moving a VE on the same virt ==&lt;br /&gt;
&lt;br /&gt;
Easy way:&amp;lt;br&amp;gt;&lt;br /&gt;
Scenario 1: ve 123 is to be renamed 1231 and moved from vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:1231:/vz/private/1231:/vz/root/1231&lt;br /&gt;
&lt;br /&gt;
Scenario 2: ve 123 is to be moved from vz1 to vz&lt;br /&gt;
&lt;br /&gt;
 vzmlocal 123:123:/vz/private/123:/vz/root/123&lt;br /&gt;
&lt;br /&gt;
vzmlocal will reboot the ve at the end of the move&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; don&#039;t forget to fix /usr/local/sbin/backup.config for new disk location&lt;br /&gt;
&lt;br /&gt;
Manual/old way:&lt;br /&gt;
&lt;br /&gt;
1) &amp;lt;tt&amp;gt;vzctl stop 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
2) &amp;lt;tt&amp;gt;mv /vz1/private/123 /vz/private/.&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
(or cp -a if you want to copy)&lt;br /&gt;
3) in &amp;lt;tt&amp;gt;/etc/sysconfig/vz-scripts/123.conf&amp;lt;/tt&amp;gt; change value&amp;lt;br&amp;gt;&lt;br /&gt;
of &#039;&amp;lt;tt&amp;gt;VE_PRIVATE&amp;lt;/tt&amp;gt;&#039; variable to point to a new private area location&lt;br /&gt;
4) &amp;lt;tt&amp;gt;vzctl start 123&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
5) update backups if needed: &amp;lt;tt&amp;gt;mvbackups 123 virtX virt1 vz&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
6. update management screens&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notes: a) absolute path to private area is stored in quota file &amp;lt;tt&amp;gt;/var/vzquota/quota.123&amp;lt;/tt&amp;gt; - so during first startup quota will be recalculated.&amp;lt;br&amp;gt;&lt;br /&gt;
b) if you&#039;re going to write a script to do the job, you MUST be sure that $VEID won&#039;t be expanded to &#039;&#039; in the ve config file - i.e. you need to escape the &#039;$&#039;. Otherwise you might end up with:&lt;br /&gt;
&lt;br /&gt;
 VE_PRIVATE=&amp;quot;/vz/private/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
in config, and &#039;vzctl destroy&#039; for this VE ID &#039;&#039;&#039;will remove everything under /vz/private/ directory&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Adding a veth device to a VE ==&lt;br /&gt;
&lt;br /&gt;
Not totally sure what this is, but a customer asked for it and here&#039;s what we did (as instructed by vz support):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;v set 99 --netif_add eth99  --save&lt;br /&gt;
ipdel 99 69.55.230.58&lt;br /&gt;
v set 99 --ifname eth99 --ipadd 69.55.230.58 --save&lt;br /&gt;
v set 99 --ifname eth99 --gateway 69.55.230.1 --save&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active      eth0              veth77.77,veth99.99&lt;br /&gt;
veth_net          active&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
br99             bridge     net99&lt;br /&gt;
veth99.99        veth       net99&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        net99            69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br0 eth0&lt;br /&gt;
&lt;br /&gt;
(will remove 99 from orig net and move to veth_net)&lt;br /&gt;
vznetcfg net addif veth_net veth99.99&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
net99             active&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
(delete the old crap)&lt;br /&gt;
vznetcfg net del net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then, to add another device in&lt;br /&gt;
&lt;br /&gt;
v set 77 --netif_add eth77  --save&lt;br /&gt;
ipdel 77 69.55.230.78&lt;br /&gt;
v set 77 --ifname eth77 --ipadd 69.55.230.78 --save&lt;br /&gt;
v set 77 --ifname eth77 --gateway 69.55.230.1 --save&lt;br /&gt;
v set 77 --save --ifname eth77 --network veth_net&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif veth_net veth77.77&lt;br /&gt;
&lt;br /&gt;
# vznetcfg if list&lt;br /&gt;
Name             Type       Network ID       Addresses&lt;br /&gt;
veth77.77        veth       veth_net&lt;br /&gt;
br0              bridge     veth_net&lt;br /&gt;
veth99.99        veth       veth_net&lt;br /&gt;
eth1             nic                         10.1.4.62/24&lt;br /&gt;
eth0             nic        veth_net         69.55.227.70/24&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net list&lt;br /&gt;
Network ID        Status      Master Interface  Slave Interfaces&lt;br /&gt;
veth_net          active      eth0              veth77.77,veth99.99&lt;br /&gt;
&lt;br /&gt;
another example&lt;br /&gt;
&lt;br /&gt;
v set 1182 --netif_add eth1182  --save&lt;br /&gt;
ipdel 1182 69.55.236.217&lt;br /&gt;
v set 1182 --ifname eth1182 --ipadd 69.55.236.217 --save&lt;br /&gt;
v set 1182 --ifname eth1182 --gateway 69.55.236.1 --save&lt;br /&gt;
vznetcfg net addif veth_net veth1182.1182&lt;br /&gt;
v set 1182 --save --ifname eth1182 --network veth_net&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unused/not working commands:&lt;br /&gt;
ifconfig veth99.0 0&lt;br /&gt;
vznetcfg net list&lt;br /&gt;
vznetcfg br new br99 net99&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg br show&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
---------&lt;br /&gt;
&lt;br /&gt;
vznetcfg br new br1182 net1182&lt;br /&gt;
&lt;br /&gt;
vznetcfg br attach br99 eth0&lt;br /&gt;
vznetcfg if list&lt;br /&gt;
vznetcfg net addif net99 veth99.99&lt;br /&gt;
vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vznetcfg net addif eth0 net1182&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&lt;br /&gt;
--- 8&amp;lt; ---&lt;br /&gt;
&lt;br /&gt;
vznetcfg net new net&lt;br /&gt;
# vznetcfg net addif eth0 net99&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# vzctl set 99 --save --netif_add eth0 (at this stage veth99.0 interface have to appear&lt;br /&gt;
on node)&lt;br /&gt;
# vzctl set 99 --save --ifname eth0 --ipadd 69.55.230.58 (and probably few more arguments&lt;br /&gt;
here - see &#039;man vzctl&#039;)&lt;br /&gt;
# vznetcfg net addif veth99.0 net99&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Assigning/removing an IP from a VE ==&lt;br /&gt;
&lt;br /&gt;
1. Add or remove IPs:&lt;br /&gt;
 ipdel 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
 ipadd 1234 1.1.1.1 2.2.2.2&lt;br /&gt;
&lt;br /&gt;
2. update Mgmt screens&lt;br /&gt;
&lt;br /&gt;
3. offer to update any DNS we do for them&lt;br /&gt;
&lt;br /&gt;
4. check to see if we had rules for the old IP in the firewall&lt;br /&gt;
&lt;br /&gt;
== Enabling tun device for a ve ==&lt;br /&gt;
Note, there’s a command for this: [[#addtun|addtun]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
Make sure the tun.o module is already loaded before Virtuozzo is started: &lt;br /&gt;
 lsmod &lt;br /&gt;
Allow the VPS to use the TUN/TAP device: &lt;br /&gt;
 vzctl set 101 --devices c:10:200:rw --save &lt;br /&gt;
Create the corresponding device inside the VPS and set the proper permissions: &lt;br /&gt;
 vzctl exec 101 mkdir -p /dev/net &lt;br /&gt;
 vzctl exec 101 mknod /dev/net/tun c 10 200 &lt;br /&gt;
 vzctl exec 101 chmod 600 /dev/net/tun&lt;br /&gt;
&lt;br /&gt;
== Remaking a system (on same virt) ==&lt;br /&gt;
&lt;br /&gt;
1. [[#cancelve|cancelve]] (or v destroy x - ONLY if you&#039;re POSITIVE no data needs to be saved)&lt;br /&gt;
&lt;br /&gt;
2. [[#vemake|vemake]] using same veid&lt;br /&gt;
&lt;br /&gt;
3. [[#mvbackups|mvbackups]] or [[#vb|vb]] (if new mount point)&lt;br /&gt;
&lt;br /&gt;
4. update mgmt with new dir/ip &lt;br /&gt;
&lt;br /&gt;
5. update firewall if changed ip&lt;br /&gt;
&lt;br /&gt;
== Re-initialize quota for a VE ==&lt;br /&gt;
&lt;br /&gt;
There’s a command for this now: [[#clearquota|clearquota]]&lt;br /&gt;
&lt;br /&gt;
For posterity:&lt;br /&gt;
&lt;br /&gt;
 vzctl stop 1&lt;br /&gt;
 vzquota drop 1&lt;br /&gt;
 vzctl start 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Traffic accounting on linux ==&lt;br /&gt;
&lt;br /&gt;
DEPRECATED - all tracking is done via bwdb now. This is how we used to track traffic.&lt;br /&gt;
&lt;br /&gt;
TODO: update for diff versions of vz&lt;br /&gt;
&lt;br /&gt;
Unlike FreeBSD, where we have to add firewall count rules to count the traffic, Virtuozzo counts the traffic for us.  You can see the current traffic stats by running `vznetstat`:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vznetstat&lt;br /&gt;
VEID    Net.Class  Output(bytes)   Input(bytes)&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
24218     1            484M             39M&lt;br /&gt;
24245     1            463M            143M&lt;br /&gt;
2451      1           2224M            265M&lt;br /&gt;
2454      1           2616M            385M&lt;br /&gt;
4149      1            125M             68M&lt;br /&gt;
418       1           1560M             34M&lt;br /&gt;
472       1           1219M            315M&lt;br /&gt;
726       1            628M            317M&lt;br /&gt;
763       1            223M             82M&lt;br /&gt;
771       1           4234M            437M&lt;br /&gt;
-----------------------------------------------&lt;br /&gt;
Total:               13780M           2090M&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, each VEID is on a line with its output and input byte counts.  So, we simply run a cron job:&lt;br /&gt;
&lt;br /&gt;
 4,9,14,19,24,29,34,39,44,49,55,59 * * * * /root/vztrafdump.sh&lt;br /&gt;
&lt;br /&gt;
Just like we do on FreeBSD, this one goes through all the VEs in /vz/private and greps the line from vznetstat that matches each of them, dumping it into /jc_traffic_dump on their system.  Then it does the same for all the VEs in /vz1/private.  It is important to note that vznetstat is run only once, and the grepping is done from a temporary file containing that output - running vznetstat once for each VE read out of /vz/private and /vz1/private would take far too long and be too intensive.&lt;br /&gt;
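The cache-then-grep idea might be sketched like this (dump_traffic and the exact file layout are illustrative; the real vztrafdump.sh may differ):

```shell
# Capture `vznetstat` output once into a cache file, then grep each VE's
# line out of that cache -- never re-running vznetstat per VE.
dump_traffic() {
  statfile=$1   # cached vznetstat output (e.g. from: vznetstat > /tmp/vzn)
  privdir=$2    # /vz/private or /vz1/private
  for d in "$privdir"/*/; do
    veid=$(basename "$d")
    # vznetstat data lines start with the VEID, so anchor the match there
    grep "^$veid " "$statfile" >> "$d/jc_traffic_dump"
  done
}
```

Anchoring on `^$veid ` keeps VE 24 from also matching VE 24218's line.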
&lt;br /&gt;
You do not need to do anything to facilitate this other than make sure that the cron job is running - the vznetstat counters are always running, and any new VEs added to the system will be accounted for automatically.&lt;br /&gt;
&lt;br /&gt;
Traffic resetting no longer works with vz 2.6, so we disable the vztrafdump.sh on those virts.&lt;br /&gt;
&lt;br /&gt;
== Watchdog script ==&lt;br /&gt;
&lt;br /&gt;
On some of the older virts, we have a watchdog running that kills procs that are deemed bad per the following:&lt;br /&gt;
&lt;br /&gt;
/root/watchdog from quar1&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;pre&amp;gt;if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [5-9]...&lt;br /&gt;
  then&lt;br /&gt;
# 50-90%&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;...:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;....m&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# running for &amp;gt; 1000min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
  if echo $line | awk &#039;{print $(NF-3)}&#039; | grep [1-9]...&lt;br /&gt;
  then&lt;br /&gt;
# running for 10-90 percent&lt;br /&gt;
    if echo $line | awk &#039;{print $NF}&#039; | egrep &#039;cfusion|counter|vchkpw&#039;&lt;br /&gt;
    then&lt;br /&gt;
&lt;br /&gt;
      if echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[2-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# between 2-9min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      elif echo $line | awk &#039;{print $(NF-1)}&#039; | grep &amp;quot;[0-9][0-9]:..&amp;quot;&lt;br /&gt;
      then&lt;br /&gt;
# up to 99min&lt;br /&gt;
        echo $line &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        /usr/sbin/vzpid $pid | tail -1 &amp;gt;&amp;gt; /root/wd.log&lt;br /&gt;
        kill -9 $pid&lt;br /&gt;
&lt;br /&gt;
      fi&lt;br /&gt;
    fi&lt;br /&gt;
  fi&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Misc Linux Items ==&lt;br /&gt;
&lt;br /&gt;
We are overselling hard drive space: when you configure a linux system with a certain amount of disk space (the default is 4 gigs), you do not actually use up 4 gigs of space on the system.  The diskspace setting for a user is simply a cap; they only use up as much space on the actual disk drive as they are actually using.&lt;br /&gt;
&lt;br /&gt;
When you create a new linux system, even though there are some 300 RPMs or so installed, if you run `df -k` you will see that the entire 4gig partition is empty - no space is being used.  This is because the files in their system are &amp;quot;magic symlinks&amp;quot; to the template for their OS that is in /vz/template - however, any changes to any of those files will &amp;quot;disconnect&amp;quot; them and they will immediately begin using space in their system.  Further, any new files uploaded (even if those new files overwrite existing files) will take up space on the partition.&lt;br /&gt;
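The accounting can be illustrated with ordinary symlinks as a rough analogy (vz "magic symlinks" are a template mechanism, not plain symlinks; the names below are throwaway):

```shell
# A link to template data costs almost nothing; a private copy of the same
# file consumes real blocks -- the "disconnect" described above.
d=$(mktemp -d)
mkdir "$d/template" "$d/ve"
dd if=/dev/zero of="$d/template/libfoo" bs=1024 count=512 2>/dev/null
ln -s "$d/template/libfoo" "$d/ve/libfoo"   # shared with the "template"
before=$(du -sk "$d/ve" | awk '{print $1}') # a few KB: just the dir + link
cp --remove-destination "$d/template/libfoo" "$d/ve/libfoo"  # "disconnect"
after=$(du -sk "$d/ve" | awk '{print $1}')  # now carries the full 512 KB
```

Hundreds of VEs can therefore share one template tree until they start modifying files.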
&lt;br /&gt;
=== Can&#039;t change file /etc/sysconfig/network ===&lt;br /&gt;
if you see this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt8 root]# vzctl stop 160 ; vzctl start 160&lt;br /&gt;
VE is not running&lt;br /&gt;
Starting VE ...&lt;br /&gt;
VE is unmounted&lt;br /&gt;
VE is mounted&lt;br /&gt;
Adding IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
bash ERROR: Can&#039;t change file /etc/sysconfig/network&lt;br /&gt;
Deleting IP address(es): 69.55.226.83 69.55.226.84&lt;br /&gt;
VE is unmounted&lt;br /&gt;
[root@virt8 root]#&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
it probably means they no longer have /bin/bash - copy one in for them&lt;br /&gt;
 &lt;br /&gt;
ALSO: another possibility is that they have removed the `ed` RPM from their system - it needs to be reinstalled into their system.  But since their system is down, this is tricky ...&lt;br /&gt;
&lt;br /&gt;
The VE startup scripts used by &#039;vzctl&#039; expect the &#039;ed&#039; package to be available inside the VE. So if the &#039;ed&#039; package is enabled in the OS template config and in the OS template itself that the VE (#827 in this example) is based on, this error should be fixed.&lt;br /&gt;
&lt;br /&gt;
Yes, it is possible to add an RPM to a VE while it is not running.&lt;br /&gt;
Try the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# cd /vz/template/&amp;lt;OS_template_with_ed_package&amp;gt;/&lt;br /&gt;
# vzctl mount 827&lt;br /&gt;
# rpm -Uvh --root /vz/root/827 --veid 827 ed-0.2-25.i386.vz.rpm&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usually there’s an error, but it’s OK.&lt;br /&gt;
&lt;br /&gt;
Note: replace &#039;ed-0.2-25.i386.vz.rpm&#039; in the last command with the actual&lt;br /&gt;
version of the &#039;ed&#039; package you have.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== What template is user using ===&lt;br /&gt;
So how do you know what template the user has? Cat their conf file and it is listed in there.  For example, if the conf file has:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt12 sbin]# vc 1103&lt;br /&gt;
…snip…&lt;br /&gt;
OSTEMPLATE=&amp;quot;debian-3.0/20030822&amp;quot;&lt;br /&gt;
TEMPLATES=&amp;quot;mod_perl-deb30/20030707 mod_ssl-deb30/20030703 mysql-deb30/20030707 proftpd-deb30/20030703 webmin-deb30/20030823 &amp;quot;&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
then they are on debian 3.0, all of their system RPMs are in /vz/template/debian-3.0, and they are using version 20030822 of that debian 3.0 template. They’ve also got additional packages installed (mod_perl, mod_ssl, etc).  Those are also found under /vz/template.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Edits needed to run java ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When we first created the VEs, the default setting for privvmpages was 93000:94000 ... which was high enough that most people never had problems ... however, you can’t run java or jdk or tomcat or anything java related with that setting.  We have found that setting privvmpages to 610000:615000 lets java run just fine. That is now the default setting. It is exceedingly rare that anyone needs it higher than that, although we have seen it once or twice.&lt;br /&gt;
&lt;br /&gt;
Any problems with java at all - the first thing you need to do is see whether the failcnt has risen for privvmpages.&lt;br /&gt;
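One way to eyeball that counter (failcnt is the last column in /proc/user_beancounters; the helper and the sample in the test are illustrative, not a JC tool):

```shell
# Print the failcnt (last column) for a named resource from a
# /proc/user_beancounters-style dump. The first resource row of a real dump
# is prefixed with "<veid>:", so we match on the resource-name field itself.
failcnt() {
  awk -v r="$2" '$1 == r { print $NF }' "$1"
}
```

For example, `failcnt /proc/user_beancounters privvmpages` inside the VE; a nonzero result means java has been hitting the limit.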
&lt;br /&gt;
&amp;lt;pre&amp;gt;# vzctl start 160&lt;br /&gt;
Starting VE ...&lt;br /&gt;
vzquota : (error) Quota on syscall for 160: Device or resource busy&lt;br /&gt;
Running vzquota on failed for VE 160 [3]&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is because your pwd is _in_ their private directory - you can&#039;t start the VE until you move out.&lt;br /&gt;
&lt;br /&gt;
People seem to have trouble with PHP if they are clueless newbies.  Here are two common problems/solutions:&lt;br /&gt;
&lt;br /&gt;
no... but i figured it out myself. problem was the php.ini file that came&lt;br /&gt;
vanilla with the account was not configured to work with apache (the&lt;br /&gt;
ENGINE directive was set to off).&lt;br /&gt;
&lt;br /&gt;
everything else seems fine now.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
the problem was in the php.ini file.  I noticed that is wasnt showing&lt;br /&gt;
the code when it was in an html file so I looked at the php.ini file&lt;br /&gt;
and had to change it so it recognized &amp;lt;? tags aswell as &amp;lt;?php tags.&lt;br /&gt;
&lt;br /&gt;
Also, make sure this is added to httpd.conf:&lt;br /&gt;
    AddType application/x-httpd-php .php&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Set time zone ===&lt;br /&gt;
&lt;br /&gt;
You can change the timezone by doing this:&lt;br /&gt;
&lt;br /&gt;
 ln -sf /usr/share/zoneinfo/&amp;lt;zone&amp;gt; /etc/localtime&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;zone&amp;gt; is the zone you want in the /usr/share/zoneinfo/ directory.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Failing shm_open calls ===&lt;br /&gt;
&lt;br /&gt;
If shm_open calls are failing, first check whether /dev/shm is mounted inside the VE.&lt;br /&gt;
&lt;br /&gt;
&#039;cat /proc/mounts&#039; should show something like this:&lt;br /&gt;
 tmpfs /dev/shm tmpfs rw 0 0&lt;br /&gt;
&lt;br /&gt;
If /dev/shm is not mounted, there are two ways to solve the issue:&lt;br /&gt;
1. execute the following command inside the VE (doesn&#039;t require a VE reboot):&lt;br /&gt;
 mount -t tmpfs none /dev/shm&lt;br /&gt;
2. add the following line to /etc/fstab inside the VE and reboot it:&lt;br /&gt;
 tmpfs         /dev/shm        tmpfs           defaults        0 0&lt;br /&gt;
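The mount check itself can be scripted; a sketch (the mounts file path is a parameter so a sample file can be fed in - by default it reads the real /proc/mounts):&lt;br /&gt;

```shell
# Sketch: exit 0 if /dev/shm appears as a mount point, nonzero otherwise.
shm_mounted() {
  awk '$2 == "/dev/shm" { found = 1 } END { exit !found }' "${1:-/proc/mounts}"
}

# Usage inside the VE:
#   shm_mounted || mount -t tmpfs none /dev/shm
```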
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
You can have a VE mounted but not running.&lt;br /&gt;
Just:&lt;br /&gt;
 vzctl mount &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Debian can&#039;t get on the network ===&lt;br /&gt;
When a Debian system can’t get on the network and you try:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set 1046 --ipadd 69.55.227.117&lt;br /&gt;
Adding IP address(es): 69.55.227.117&lt;br /&gt;
Failed to bring up lo.&lt;br /&gt;
Failed to bring up venet0.&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They probably removed the iproute package, which must be the swsoft version. To restore it:&lt;br /&gt;
&amp;lt;pre&amp;gt;# dpkg -i --veid=1046 --admindir=/vz1/private/1046/root/var/lib/dpkg --instdir=/vz1/private/1046/root/ /vz/template/debian-3.0/iproute_20010824-8_i386.vz.deb&lt;br /&gt;
(Reading database ... 16007 files and directories currently installed.)&lt;br /&gt;
Preparing to replace iproute 20010824-8 (using .../iproute_20010824-8_i386.vz.deb) ...&lt;br /&gt;
Unpacking replacement iproute ...&lt;br /&gt;
Setting up iproute (20010824-8) ...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then restart their ve&lt;br /&gt;
&lt;br /&gt;
=== CentOS can&#039;t get on the network ===&lt;br /&gt;
&lt;br /&gt;
When a CentOS system is running but can&#039;t get on the network, they probably updated iproute via yum.&lt;br /&gt;
&lt;br /&gt;
Copy an older version of the iproute rpm to their VPS.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cp -p /root/iproute/iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm /vz2/private/&amp;lt;1527&amp;gt;/fs/root&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enter their ve, install rpm, restart network, and prevent yum from updating iproute.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ve &amp;lt;1527&amp;gt;&lt;br /&gt;
rpm -i --force iproute-2.6.32-23.el6.&amp;lt;i686&amp;gt;.rpm&lt;br /&gt;
rpm -q iproute&lt;br /&gt;
rpm -e &amp;lt;new iproute rpm&amp;gt;&lt;br /&gt;
service network restart&lt;br /&gt;
echo &amp;quot;exclude=iproute&amp;quot; &amp;gt;&amp;gt;/etc/yum/ip-exclude&lt;br /&gt;
echo &amp;quot;include=/etc/yum/ip-exclude&amp;quot; &amp;gt;&amp;gt;/etc/yum.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
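Note that the two echo lines above append unconditionally, so re-running the fix duplicates them. A hedged sketch of an idempotent version (the paths are parameters here purely for illustration and testing):&lt;br /&gt;

```shell
# Sketch: append a line to a config file only if it isn't already there,
# so the yum exclude fix can be re-run safely.
append_once() {
  conf=$1 line=$2
  grep -qxF "$line" "$conf" 2>/dev/null || printf '%s\n' "$line" >> "$conf"
}

# e.g. append_once /etc/yum.conf "include=/etc/yum/ip-exclude"
```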
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
in a ve i do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /&lt;br /&gt;
du -h .&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and get: 483M    .&lt;br /&gt;
&lt;br /&gt;
i do:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;bash-2.05a# df -h&lt;br /&gt;
Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
/dev/vzfs             4.0G  2.3G  1.7G  56% /&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
how can this be?&lt;br /&gt;
&lt;br /&gt;
The quota file may have been corrupted somehow. Please try:&lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl stop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota drop &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzquota init &amp;lt;VEID&amp;gt;&lt;br /&gt;
vzctl start &amp;lt;VEID&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
How to stop vz from starting after reboot:&lt;br /&gt;
&lt;br /&gt;
 VIRTUOZZO=no &lt;br /&gt;
in &lt;br /&gt;
 /etc/sysconfig/vz&lt;br /&gt;
&lt;br /&gt;
To start: &lt;br /&gt;
 service vz start&lt;br /&gt;
(after setting VIRTUOZZO=yes in /etc/sysconfig/vz)&lt;br /&gt;
&lt;br /&gt;
service vz restart will do some kind of &#039;soft reboot&#039; -- restart all&lt;br /&gt;
VPSes and reload modules without rebooting the node&lt;br /&gt;
&lt;br /&gt;
if you need to shut down all VPSes very quickly, run killall -9 init&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Postfix tip:&lt;br /&gt;
&lt;br /&gt;
You may want to tweak this setting: default_process_limit=10&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* The uptime/load that vz users see isn’t what we see - it&#039;s their own load&lt;br /&gt;
* Vz overwrites resolv.conf on every boot with what’s in the ve config file. It also overwrites the hostname&lt;br /&gt;
* Quotas in vz: set Gid_limit in the ve config file and restart the ve. The errors leading up to this will be &#039;out of files&#039;, but with no inode or space issues. Set it to 2000 in the conf and restart the ve&lt;br /&gt;
&lt;br /&gt;
= Virtuozzo VPS Management Tools =&lt;br /&gt;
&lt;br /&gt;
== vm ==&lt;br /&gt;
&lt;br /&gt;
To create a new Linux VPS&lt;br /&gt;
&lt;br /&gt;
* get an IP address from mgmt screen&lt;br /&gt;
* run df -h to figure out which disk has enough space&lt;br /&gt;
* run vm with no arguments to get syntax and exact template name&lt;br /&gt;
* use col0xxxx for hostname if they don’t give you a hostname&lt;br /&gt;
* copy over dir, veid, ip and password to pending customer screen&lt;br /&gt;
&lt;br /&gt;
 vm CID IP hostname /vz[1|2] email[,email] template ( &amp;lt;LB|LBP|LS|LM|LMP|LP&amp;gt; [disk] | &amp;lt;gb_disk/mb_ram/gb_burst&amp;gt; )&lt;br /&gt;
&lt;br /&gt;
== disable ==&lt;br /&gt;
To disable a customer&#039;s Virtuozzo VPS server.&lt;br /&gt;
&lt;br /&gt;
* stop the server&lt;br /&gt;
  vzctl stop &amp;lt;veid&amp;gt;&lt;br /&gt;
* prevent restart on server reboot and user restart via Control Panel&lt;br /&gt;
  vzctl set &amp;lt;veid&amp;gt; --disabled=yes --offline_management=no --save&lt;br /&gt;
&lt;br /&gt;
== cancelve ==&lt;br /&gt;
When a customer cancels all service.&lt;br /&gt;
&lt;br /&gt;
 cancelve &amp;lt;veid&amp;gt;&lt;br /&gt;
this will:&lt;br /&gt;
* stop a ve&lt;br /&gt;
* check for backups (offer to remove them from the backup server &lt;br /&gt;
and the backup.config)&lt;br /&gt;
* rename the private dir&lt;br /&gt;
* check for PTR, provide the commands to reset to default&lt;br /&gt;
* rename the ve’s config&lt;br /&gt;
* remind you to remove firewall rules&lt;br /&gt;
* remind you to remove DNS entries&lt;br /&gt;
&lt;br /&gt;
== ipadd ==&lt;br /&gt;
 ipadd  &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
adds ip(s) to a ve&lt;br /&gt;
&lt;br /&gt;
== ipdel ==&lt;br /&gt;
 ipdel &amp;lt;veid&amp;gt; &amp;lt;ip&amp;gt; [ip] [ip]&lt;br /&gt;
removes ip(s) from a ve&lt;br /&gt;
&lt;br /&gt;
== vc ==&lt;br /&gt;
 vc &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
display the contents of the vz config file for the &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;cat /vzconf/&amp;lt;veid&amp;gt;.conf&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vl ==&lt;br /&gt;
 vl&lt;br /&gt;
&lt;br /&gt;
List all virtual environments running on this machine.&lt;br /&gt;
&lt;br /&gt;
displays a list of ve #’s, one per line (handy for use in a for loop)&lt;br /&gt;
&lt;br /&gt;
== vp ==&lt;br /&gt;
 vp &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
show processes running under this &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzps auxww -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vpe ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vpe &amp;lt;veid&amp;gt; &lt;br /&gt;
this allows you to do a vp when a ve is running out of control. It is the equivalent of (deprecated since vp operates outside the VPS): &lt;br /&gt;
&amp;lt;pre&amp;gt;vzctl set &amp;lt;veid&amp;gt; --kmemsize 2100000:2200000&lt;br /&gt;
vzctl exec &amp;lt;veid&amp;gt; ps auxw&lt;br /&gt;
vzctl set &amp;lt;veid&amp;gt; --kmemsize (ve’s orig lvalue):(ve’s orig hvalue)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vt ==&lt;br /&gt;
 vt &amp;lt;veid&amp;gt;&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vztop -E &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vr ==&lt;br /&gt;
 vr &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Restart a &amp;lt;veid&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl stop &amp;lt;veid&amp;gt;; vzctl start &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
You can run this even if the ve is down - the stop command will just fail&lt;br /&gt;
&lt;br /&gt;
== vs ==&lt;br /&gt;
 vs [veid]&lt;br /&gt;
displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve configured on the system (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;)&lt;br /&gt;
If passed an argument, gives the status for just that ve. &lt;br /&gt;
A running system looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 16066 exist mounted running&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running (but does exist) looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 9990 exist unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A ve which is not running and doesn’t exist looks like:&lt;br /&gt;
&amp;lt;tt&amp;gt;VEID 421 deleted unmounted down&amp;lt;/tt&amp;gt;&lt;br /&gt;
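Since the output is fixed-format (&amp;lt;tt&amp;gt;VEID &amp;lt;id&amp;gt; &amp;lt;exist|deleted&amp;gt; &amp;lt;mounted|unmounted&amp;gt; &amp;lt;running|down&amp;gt;&amp;lt;/tt&amp;gt;), it is easy to filter. A sketch (a hypothetical helper, not part of the vs script) that picks out ve’s which exist but are down:&lt;br /&gt;

```shell
# Sketch: read vs-style status lines on stdin and print the VEIDs of
# containers that exist but are not running.
down_ves() {
  awk '$1 == "VEID" && $3 == "exist" && $5 == "down" { print $2 }'
}

# e.g. vs | down_ves
```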
&lt;br /&gt;
== vs2 ==&lt;br /&gt;
 vs2 [veid]&lt;br /&gt;
this is similar to vs in that it displays status (output of &amp;lt;tt&amp;gt;vzctl status &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;) on each ve,&lt;br /&gt;
but the difference is that its list comes from doing an ls on the data dirs. This was meant to catch&lt;br /&gt;
the rare case where a ve exists but is not configured.&lt;br /&gt;
&lt;br /&gt;
== vw ==&lt;br /&gt;
 vw [veid]&lt;br /&gt;
displays the output of ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;’ (the equivalent of &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; w&amp;lt;/tt&amp;gt;) for each configured ve (as determined by &amp;lt;tt&amp;gt;/vzconf/*.conf&amp;lt;/tt&amp;gt;). Useful for determining which ve is contributing to a heavily-loaded system.&lt;br /&gt;
If passed an argument, gives ‘&amp;lt;tt&amp;gt;w&amp;lt;/tt&amp;gt;‘ output for just that ve. &lt;br /&gt;
Ex:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@virt2 etc]# vw&lt;br /&gt;
134&lt;br /&gt;
 10:52pm  up 79 days,  6:14,  0 users,  load average: 0.02, 0.02, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16027&lt;br /&gt;
  2:52pm  up 7 days, 19:54,  0 users,  load average: 0.00, 0.00, 0.00&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&lt;br /&gt;
16055&lt;br /&gt;
  2:52pm  up 79 days,  6:38,  0 users,  load average: 0.00, 0.04, 0.07&lt;br /&gt;
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vwe ==&lt;br /&gt;
 vwe [constraint]&lt;br /&gt;
just like &amp;lt;tt&amp;gt;vw&amp;lt;/tt&amp;gt;, but takes a constraint as an argument and only shows ve’s with loads &amp;gt;= the constraint provided. If no constraint is provided, 1 is used by default.&lt;br /&gt;
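A sketch of that filtering logic (hypothetical, not the actual vwe script): given vw-style output - a bare VEID line followed by an uptime/load line - print the VEIDs whose 1-minute load average meets the constraint:&lt;br /&gt;

```shell
# Sketch: filter vw-style output for VEs with 1-min load >= min (default 1).
high_load_ves() {
  awk -v min="${1:-1}" '
    /load average:/ {
      # the field after "average:" is the 1-minute load; strip its comma
      for (i = 1; i < NF; i++)
        if ($i == "average:") { load = $(i + 1); sub(/,$/, "", load) }
      if (load + 0 >= min) print veid
      next
    }
    NF == 1 { veid = $1 }  # a line with just a VEID on it
  '
}

# e.g. vw | high_load_ves 0.5
```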
&lt;br /&gt;
== vzs ==&lt;br /&gt;
 vzs [veid]&lt;br /&gt;
displays the beancounter status for all ve’s, or a particular ve if an argument is passed&lt;br /&gt;
&lt;br /&gt;
== ve ==&lt;br /&gt;
 ve &amp;lt;veid&amp;gt;&lt;br /&gt;
&lt;br /&gt;
enter a &amp;lt;veid&amp;gt; as root.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl enter &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vx ==&lt;br /&gt;
 vx &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&lt;br /&gt;
&lt;br /&gt;
execute a single command in the &amp;lt;veid&amp;gt; environment.&lt;br /&gt;
&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;/usr/sbin/vzctl exec &amp;lt;veid&amp;gt; &amp;lt;command&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs [count]&lt;br /&gt;
a script which outputs a continuous report (or a certain number of reports if an option is passed) of processes stuck in the D state and which VPS’s those procs belong to.&lt;br /&gt;
&lt;br /&gt;
== setmem ==&lt;br /&gt;
 setmem VEID &amp;lt;256|512|768|1024|1280|1536|2048&amp;gt; [burst GB]&lt;br /&gt;
adjusts the memory resources for the VE. If this is on VZ ver &amp;gt;4, you will also be prompted to provide a burst ram amount. Sample usage:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# setmem 21671 1024 4&lt;br /&gt;
/usr/sbin/vzctl set 21671 --ram 1024m --swap 3072m --save&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
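The sample output implies the arithmetic: on VZ &amp;gt;4, burst is expressed as ram + swap, so the script appears to pass swap = burst_GB x 1024 - ram_MB (1024 MB ram + 3072 MB swap = 4 GB burst). A sketch of that calculation (an inference from the sample, not the script’s actual code):&lt;br /&gt;

```shell
# Sketch: compute the --swap value (in MB) from ram (MB) and burst (GB),
# assuming burst = ram + swap as the sample setmem output suggests.
swap_mb() {
  ram_mb=$1 burst_gb=$2
  echo $(( burst_gb * 1024 - ram_mb ))
}

# swap_mb 1024 4  ->  3072, matching "--ram 1024m --swap 3072m" above
```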
&lt;br /&gt;
== afacheck.sh ==&lt;br /&gt;
 afacheck.sh&lt;br /&gt;
displays the health/status of containers and mirrors on an adaptec card (currently quar1, tempvirt1-2, virt9, virt10) - all others are LSI&lt;br /&gt;
&lt;br /&gt;
== backup ==&lt;br /&gt;
 backup&lt;br /&gt;
backup script called nightly to update virt scripts and do customer backups&lt;br /&gt;
&lt;br /&gt;
== checkload.pl ==&lt;br /&gt;
 checkload.pl&lt;br /&gt;
this was intended to be set up as a cronjob to watch processes on a virt when the load&lt;br /&gt;
rises above a certain level. Not currently in use.&lt;br /&gt;
&lt;br /&gt;
== findbackuppigs.pl ==&lt;br /&gt;
 findbackuppigs.pl&lt;br /&gt;
looks for files larger than 50MB which customers have asked us to backup. Emails matches&lt;br /&gt;
to linux@johncompanies.com&lt;br /&gt;
&lt;br /&gt;
== gatherlinux.pl ==&lt;br /&gt;
 gatherlinux.pl&lt;br /&gt;
gathers up data about ve’s configured and writes to a file. Used for audits against the db&lt;br /&gt;
&lt;br /&gt;
== gb ==&lt;br /&gt;
 gb &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter&lt;br /&gt;
&lt;br /&gt;
== gbg ==&lt;br /&gt;
 gbg &amp;lt;search&amp;gt;&lt;br /&gt;
greps backup.config for the given search parameter and presents just the directories (for clean pasting)&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficgather.pl ==&lt;br /&gt;
 linuxtrafficgather.pl [yy-mm]&lt;br /&gt;
sends a traffic usage summary by ve to support@johncompanies.com and payments@johncompanies.com.&lt;br /&gt;
Optional arguments are year and month (must be in the past). If not passed, assumes last month. Relies on &lt;br /&gt;
traffic logs created by netstatreset and netstatbackup&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch.pl ==&lt;br /&gt;
 linuxtrafficwatch.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo versions &amp;lt;= 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== linuxtrafficwatch2.pl ==&lt;br /&gt;
 linuxtrafficwatch2.pl&lt;br /&gt;
checks traffic usage and emails support@johncompanies.com when a ve reaches the warning level (35G) and&lt;br /&gt;
the limit (40G). Works on virtuozzo version 2.6.x. We really aren’t using this anymore now that we have netflow.&lt;br /&gt;
&lt;br /&gt;
== load.pl ==&lt;br /&gt;
 load.pl&lt;br /&gt;
feeds info to load mrtg  - executed by inetd.&lt;br /&gt;
&lt;br /&gt;
== mb (linux) ==&lt;br /&gt;
 mb &amp;lt;mount|umount&amp;gt;&lt;br /&gt;
(nfs) mounts and umounts dirs to backup2. Shortcuts are mbm and mbu to mount and unmount. &lt;br /&gt;
&lt;br /&gt;
== migrate ==&lt;br /&gt;
 migrate &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is basically a wrapper for &amp;lt;tt&amp;gt;vzmigrate&amp;lt;/tt&amp;gt; - a util to seamlessly move a ve from one host to another. The wrapper was written because virtuozzo version 2.6 had a bug where the ve’s ip(s) were not properly removed from the arp/route tables on the src system; this script mitigates that. Since it makes multiple ssh connections to the target host, it’s a good idea to put the pub key for the src system in the authorized_keys file on the target host. In addition, it emails ve owners when their migration starts and stops (if they place email addresses in a file on their system: /migrate_notify). To move everyone off a system, you’d do:&lt;br /&gt;
 for f in `vl`; do migrate &amp;lt;ip&amp;gt; $f; done&lt;br /&gt;
&lt;br /&gt;
== migrateonline ==&lt;br /&gt;
 migrateonline &amp;lt;ip of node migrating to&amp;gt; &amp;lt;veid&amp;gt; &amp;lt;target_veid&amp;gt; [target dir: vz | vz1 | vz2]&lt;br /&gt;
this is the same as migrate but will migrate a ve in &amp;lt;tt&amp;gt;--online&amp;lt;/tt&amp;gt; mode, which means it won’t be shut down at the end of the migration. This only works when migrating ve’s between 2 machines running a 2.6 kernel (currently tempvirt1-2, virt16-19, virt12). If you get an error that the machine you’re migrating to has a different CPU or features, etc., edit the file and add the -f switch to the vzmigrate line - you can basically ignore that kind of warning (but never ignore a warning about missing templates on the destination node). NOTE: this edit (if made to migrateonline) will be overwritten by the base script during each night’s backup.&lt;br /&gt;
&lt;br /&gt;
== netstatbackup ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup &lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== netstatbackup2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatbackup2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== netstatreset ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset&lt;br /&gt;
writes traffic count data to a logfile and resets counters to 0. Works on virtuozzo versions 2.5.x &lt;br /&gt;
&lt;br /&gt;
== netstatreset2 ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 netstatreset2&lt;br /&gt;
writes traffic count data to a logfile. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== orphanedbackupwatchlinux ==&lt;br /&gt;
 orphanedbackupwatchlinux &lt;br /&gt;
looks for directories on backup2 which aren’t configured in backup.config and offers to &lt;br /&gt;
delete them&lt;br /&gt;
&lt;br /&gt;
== rsync.backup (linux) ==&lt;br /&gt;
 rsync.backup&lt;br /&gt;
does customer backups (relies on backup.config)&lt;br /&gt;
&lt;br /&gt;
== startvirt.pl ==&lt;br /&gt;
 startvirt.pl&lt;br /&gt;
forks off start ve commands, keeping 6 running at a time. This is not to be used on systems where fastboot is enabled, as it circumvents the benefit of the fastboot. The script will occasionally not exit gracefully and will continue to use up CPU, so it should be watched; on some systems you have to ^C out of it. Don’t exit from the script till you’re sure all ve’s are started - if you do, you need to start them manually and may have to free up locks. Be careful when ^C’ing out: doing so can leave some VE’s in an odd bootup state and you may need to ‘vr’ them manually, so check which ve’s aren’t running and/or confirm all have started.&lt;br /&gt;
&lt;br /&gt;
== taskdone (linux) ==&lt;br /&gt;
 taskdone&lt;br /&gt;
when called will email support@johncompanies.com with the hostname of the machine from which it was &lt;br /&gt;
executed as the subject&lt;br /&gt;
&lt;br /&gt;
== vb (linux) ==&lt;br /&gt;
 vb&lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vi /usr/local/sbin/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vemakeXX ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemakerh9 &lt;br /&gt;
ve create script for RH9 (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian30 &lt;br /&gt;
ve create script for debian 3.0 (Woody) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian31 &lt;br /&gt;
ve create script for debian 3.1 (Sarge) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakedebian40 &lt;br /&gt;
ve create script for debian 4.0 (Etch) (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakefedora, vemakefedora2, vemakefedora4, vemakefedora5, vemakefedora6, vemakefedora7&lt;br /&gt;
ve create script for fedora core 1, 2, 4, 5, 6, 7 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakecentos3, vemakecentos4&lt;br /&gt;
ve create script for centos 3, 4 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakesuse, vemakesuse93, vemakesuse100&lt;br /&gt;
ve create script for suse 9.2, 9.3, 10.0 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
 vemakeubuntu5, vemakeubuntu606, vemakeubuntu610, vemakeubuntu704&lt;br /&gt;
ve create script for ubuntu 5.10, 6.06, 6.10, 7.04 respectively (see vemake)&lt;br /&gt;
&lt;br /&gt;
== vemove ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vemove &amp;lt;veid&amp;gt; &amp;lt;target_ip&amp;gt; &amp;lt;/vz/private/123&amp;gt;&lt;br /&gt;
this script simplifies the old way of moving ve’s from one system to another - in short, moving a ve to or from a virt running virtuozzo &amp;lt; 2.6.x&lt;br /&gt;
It’s the equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;tar cfpP - &amp;lt;veid&amp;gt; --ignore-failed-read | (ssh -2 -c arcfour &amp;lt;target_ip&amp;gt; &amp;quot;split - -b 1024m &amp;lt;/vz/private/123&amp;gt;.tar&amp;quot; )&amp;lt;/tt&amp;gt;&lt;br /&gt;
This should only be used if migrate/vzmigrate can’t be used.&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog ==&lt;br /&gt;
 vim.watchdog &lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu. Works on virtuozzo versions 2.5.x&lt;br /&gt;
&lt;br /&gt;
== vim.watchdog2 ==&lt;br /&gt;
 vim.watchdog2&lt;br /&gt;
looks for and kills procs matching “vi|vim|nano|pine|elm” that are running for a long time &amp;amp; consuming lots of cpu.&lt;br /&gt;
Works on virtuozzo versions 2.6.x.&lt;br /&gt;
&lt;br /&gt;
== vzmigrate ==&lt;br /&gt;
 vzmigrate &amp;lt;target_ip&amp;gt; -r no &amp;lt;veid&amp;gt;:[dst veid]:[dst /vzX/private/veid]:[dst /vzX/root/veid]&lt;br /&gt;
(this is the raw command “wrapped” by migrate/migrateonline.) This will seamlessly move a ve from one host to another. The ve runs for the duration of the migration until the very end, when it’s shut down, its ip moved, and the ve started up on the target system. The filesystem on the src will remain. This should be watched - occasionally the move will time out and leave the system shut down. If target private and root aren’t specified it just puts it in /vz. Only works when both systems are running virtuozzo 2.6.x&lt;br /&gt;
&lt;br /&gt;
== vztrafdump.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions &amp;lt;= 2.5.x. &lt;br /&gt;
&lt;br /&gt;
== vztrafdump2.sh ==&lt;br /&gt;
DEPRECATED&lt;br /&gt;
 vztrafdump2.sh&lt;br /&gt;
writes traffic usage info by ve to a file called jc_traffic_dump in each ve’s / dir. Works on virtuozzo versions 2.6.x. &lt;br /&gt;
&lt;br /&gt;
== addtun ==&lt;br /&gt;
 addtun &amp;lt;veid&amp;gt;&lt;br /&gt;
Adds a tun device to a ve.&lt;br /&gt;
&lt;br /&gt;
== bwcap ==&lt;br /&gt;
 bwcap &amp;lt;veid&amp;gt; &amp;lt;kbps&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;bwcap 1234 512&amp;lt;/tt&amp;gt;&lt;br /&gt;
Caps a VE’s bandwidth to the amount given&lt;br /&gt;
&lt;br /&gt;
== setdisk ==&lt;br /&gt;
 setdisk &amp;lt;veid&amp;gt; &amp;lt;diskspace in GB&amp;gt;&lt;br /&gt;
Ex: &amp;lt;tt&amp;gt;setdisk 1234 5&amp;lt;/tt&amp;gt;&lt;br /&gt;
Gives a VE a given amount of disk space&lt;br /&gt;
&lt;br /&gt;
== vdf ==&lt;br /&gt;
 vdf &amp;lt;veid&amp;gt; &lt;br /&gt;
the equivalent of: &amp;lt;tt&amp;gt;vzctl exec &amp;lt;veid&amp;gt; df -h&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== vdff ==&lt;br /&gt;
 vdff&lt;br /&gt;
runs a (condensed) vdf for all ve’s in your pwd (must be run from /vzN/private)&lt;br /&gt;
&lt;br /&gt;
== mvbackups ==&lt;br /&gt;
 mvbackups &amp;lt;veid&amp;gt; &amp;lt;target_machine&amp;gt; (virt1) &amp;lt;target_dir&amp;gt; (vz1)&lt;br /&gt;
moves backups from one location to another on the backup server, and provides you with option to remove entries from current backup.config, and simple paste command to add the config to backup.config on the target server&lt;br /&gt;
&lt;br /&gt;
== checkquota ==&lt;br /&gt;
 checkquota&lt;br /&gt;
for all the ve’s in the cwd (run from /vz/private, /vz1/private, etc) reports what vz quota says they’re using and what the actual usage is (as reported by du)&lt;br /&gt;
&lt;br /&gt;
== clearquota ==&lt;br /&gt;
 clearquota &amp;lt;veid&amp;gt;&lt;br /&gt;
Recalculates a ve’s quota, prints out the usage before and after. The equivalent of:&lt;br /&gt;
&amp;lt;tt&amp;gt;vdf &amp;lt;veid&amp;gt;; v stop &amp;lt;veid&amp;gt;; vzquota drop &amp;lt;veid&amp;gt;; v start &amp;lt;veid&amp;gt;; vdf &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== dprocs ==&lt;br /&gt;
 dprocs&lt;br /&gt;
Sometimes the servers have a large number of processes get stuck in the D state - this script shows (every 3 secs) which VE’s have D procs, which procs&lt;br /&gt;
are stuck, and a running average of the top “offenders”&lt;br /&gt;
&lt;br /&gt;
== vzstat ==&lt;br /&gt;
 vzstat&lt;br /&gt;
sort of like top for VZ. Sort VEs by CPU usage by pressing the &#039;o&#039; and then &#039;c&#039; keys&lt;br /&gt;
&lt;br /&gt;
== stopvirt ==&lt;br /&gt;
 stopvirt&lt;br /&gt;
will stop VEs as fast as it can, 6 at a time. May not exit when complete so you should watch [[#vzstat|vzstat]] in another window.&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=Infrastructure_Machines&amp;diff=2213</id>
		<title>Infrastructure Machines</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=Infrastructure_Machines&amp;diff=2213"/>
		<updated>2020-03-17T18:25:04Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* virt11 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= jails =&lt;br /&gt;
&lt;br /&gt;
== jail1 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, SHUTDOWN&lt;br /&gt;
* OS: FreeBSD 6.2 i386&lt;br /&gt;
* Networking: Priv IP: 10.1.4.101 (PCI nic), Pub IP: 69.55.230.107 (onboard)&lt;br /&gt;
* Hardware: Dell 2450. 4 SCSI SCA drive bays (2 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 74 GB (4 x 74GB) RAID1 arrays running on an Adaptec-based, Dell-branded (perc) RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Disk accounting: gvinum&lt;br /&gt;
&lt;br /&gt;
== jail2 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 6-16&lt;br /&gt;
* OS: FreeBSD 7.2 amd64&lt;br /&gt;
* Networking: Priv IP: 10.1.4.102, Pub IP: 69.55.228.53 (2 onboard nics)&lt;br /&gt;
* Hardware: Dell 2950. 6 SATA/SAS drive bays (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: one 146 GB (2 x 146GB) RAID1 array, two 300 GB (4 x 300GB) RAID1 arrays running on an LSI-based, Dell-branded (PERC 6/i) RAID card.&lt;br /&gt;
* Remote management: [[DRAC/RMM|DRAC]] @ 10.1.4.232 &lt;br /&gt;
* Disk accounting: md&lt;br /&gt;
&lt;br /&gt;
== jail3 ==&lt;br /&gt;
* Location: I2b  SHUTDOWN&lt;br /&gt;
* OS: FreeBSD 8.3 amd64&lt;br /&gt;
* Networking: Priv IP: 10.1.2.103, Pub IP: 69.55.229.7 (2 onboard nics)&lt;br /&gt;
* Hardware: Supermicro (custom build). 6 SATA drive bays (2 columns of 3, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 300 GB (2 x 300GB) RAID1 array running on a 3ware 8006-2LP RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Disk accounting: md&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
* &#039;&#039;&#039;We should not add users to this server since it is at I2B&#039;&#039;&#039;&lt;br /&gt;
* must be ssh&#039;d to from nat2&lt;br /&gt;
* is a super jail for customer col01737&lt;br /&gt;
&lt;br /&gt;
== jail4 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 6-17&lt;br /&gt;
* OS: FreeBSD 9.1 x86_64&lt;br /&gt;
* Networking: Priv IP: 10.1.4.104, Pub IP: 69.55.228.104 (2 onboard nics) &lt;br /&gt;
* Hardware: Dell 2850. 6 x 300GB SCSI drives (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply.&lt;br /&gt;
* CPU:  2 x Intel(R) Xeon(TM) CPU 2.80GHz (8 virtual CPUs)&lt;br /&gt;
* RAM:  16 GB ( 4 x 4GB Reg ECC )&lt;br /&gt;
* Drives: one 1.4 TB RAID 5 array (6 x 300GB SCSI) Dell-branded (PERC 4e)LSI megarc RAID card.&lt;br /&gt;
* Remote management: None &lt;br /&gt;
* Disk accounting: md&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
Only FreeBSD 9.1 jail&lt;br /&gt;
Not upgraded to FBSD 9.2 or 9.3 because too many libraries modified (would require customers to rebuild apps).&lt;br /&gt;
&lt;br /&gt;
== jail5 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-6&lt;br /&gt;
* OS: FreeBSD 10.1 x86_64&lt;br /&gt;
* Networking: Priv IP: 10.1.4.105, Pub IP: 69.55.230.105 (2 onboard nics) &lt;br /&gt;
* Hardware: Supermicro JC-14004 - Intel S1200BTL motherboard - 6 SATA/SAS drive bays (2 columns of 3), Dual power supply.&lt;br /&gt;
* CPU:  1 x Intel(R) Xeon(TM) E3-1230 V2 CPU 3.30 GHz (8 virtual CPUs)&lt;br /&gt;
* RAM:  32 GB ( 4 x 8GB ECC )&lt;br /&gt;
* Drives: 1x80 GB SATA SSD on motherboard + one 2.6 TB RAID 5 array 4x1 TB + 3ware 9650 RAID card.&lt;br /&gt;
* Remote management: Intel RMM 4 - 10.1.4.235&lt;br /&gt;
* Disk accounting: md&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
The only FreeBSD 10.1 jail; used for bhyve virtuals.&lt;br /&gt;
&lt;br /&gt;
Use ~+Ctrl-D to disconnect from console (vm attach colXXXXX).&lt;br /&gt;
&lt;br /&gt;
== jail6 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 6-16&lt;br /&gt;
* OS: FreeBSD 10.3 x86_64&lt;br /&gt;
* Networking: Priv IP: 10.1.4.106, Pub IP: 69.55.230.106 (2 onboard nics) &lt;br /&gt;
* Hardware: Supermicro JC-14004 - Intel S1200BTL motherboard - 6 SATA/SAS drive bays (2 columns of 3), Dual power supply.&lt;br /&gt;
* CPU:  1 x Intel(R) Xeon(TM) E3-1230 V2 CPU 3.30 GHz (8 virtual CPUs)&lt;br /&gt;
* RAM:  32 GB ( 4 x 8GB ECC )&lt;br /&gt;
* Drives: one 2.7 TB ZFS RAID 6 array 5x1 TB &lt;br /&gt;
* Remote management: Intel RMM 4 - 10.1.4.236&lt;br /&gt;
* Disk accounting: zfs&lt;br /&gt;
&lt;br /&gt;
== jail7 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-5  SCHEDULED SHUTDOWN 9/30/19&lt;br /&gt;
* OS: FreeBSD 6.3 i386&lt;br /&gt;
* Networking: Priv IP: 10.1.4.107, Pub IP: 69.55.230.108 (2 onboard nics)&lt;br /&gt;
* Hardware: Dell 2950. 6 SATA/SAS drive bays (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 146 GB (4 x 146GB) RAID1 arrays, one 74 GB (2 x 74GB) RAID1 array running on an LSI-based, Dell-branded (PERC 6/i) RAID card.&lt;br /&gt;
* Remote management: [[DRAC/RMM|DRAC]] @ 10.1.4.237&lt;br /&gt;
* Disk accounting: gvinum&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
Do not run a verify while the OS/jails are running - it will crash.&lt;br /&gt;
&lt;br /&gt;
== jail8 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-6&lt;br /&gt;
* OS: FreeBSD 8.0 amd64&lt;br /&gt;
* Networking: Priv IP: 10.1.4.108, Pub IP: 69.55.234.2 (2 onboard nics)&lt;br /&gt;
* Hardware: Dell 2950. 6 SATA/SAS drive bays (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: one 146 GB (2 x 146GB) RAID1 array, one 300 GB (2 x 300GB) RAID1 array, one 400 GB (2 x 400GB) RAID1 array, running on an LSI-based, Dell-branded (PERC 6/i) RAID card.&lt;br /&gt;
* Remote management: [[DRAC/RMM|DRAC]] @ 10.1.4.238&lt;br /&gt;
* Disk accounting: md&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== jail9 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-6&lt;br /&gt;
* OS: FreeBSD 8.2 amd64&lt;br /&gt;
* Networking: Priv IP: 10.1.4.109, Pub IP: 69.55.232.36 (2 onboard nics)&lt;br /&gt;
* Hardware: Dell 2950. 6 SATA/SAS drive bays (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: one 146 GB (2 x 146GB) RAID1 array, one 400 GB (2 x 300GB) RAID1 array running on an LSI-based, Dell-branded (PERC 5/i) RAID card.&lt;br /&gt;
* Remote management: [[DRAC/RMM|DRAC]] @ 10.1.4.239&lt;br /&gt;
* Disk accounting: md&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== jail11 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-7&lt;br /&gt;
* OS: FreeBSD 4.7 i386&lt;br /&gt;
* Networking: Priv IP: 10.1.4.111 (PCI nic), Pub IP: 69.55.236.92 (onboard)&lt;br /&gt;
* Hardware: Dell 2450. 4 SCSI SCA drive bays (2 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 74 GB (4 x 74GB) RAID1 arrays running on an Adaptec-based, Dell-branded (perc) RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Disk accounting: vinum&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== mx1 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, SHUTDOWN AND SCRAPPED&lt;br /&gt;
* OS: FreeBSD 4.11 i386&lt;br /&gt;
* Networking: Priv IP: 10.1.4.201 (PCI nic), Pub IP: 69.55.237.3 (onboard)&lt;br /&gt;
* Hardware: Dell 2450. 4 SCSI SCA drive bays (2 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: one 36 GB (2 x 36GB) RAID1 array, one 74 GB (2 x 74GB) RAID1 array running on an Adaptec-based, Dell-branded (perc) RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Disk accounting: vinum&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
* is our (old) backup mail/dns vps service host&lt;br /&gt;
&lt;br /&gt;
== mx2 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, SHUTDOWN AND SCRAPPED&lt;br /&gt;
* OS: FreeBSD 7.1 i386&lt;br /&gt;
* Networking: Priv IP: 10.1.4.202 (PCI nic), Pub IP: 69.55.237.90 (onboard)&lt;br /&gt;
* Hardware: Dell 2450. 4 SCSI SCA drive bays (2 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 74 GB (4 x 74GB) RAID1 arrays running on an Adaptec-based, Dell-branded (perc) RAID card.&lt;br /&gt;
* Disk accounting: gvinum&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
* is our latest backup mail/dns vps service host&lt;br /&gt;
&lt;br /&gt;
== jail17 ==&lt;br /&gt;
* Location: castle, cab 3-7&lt;br /&gt;
* OS: FreeBSD 4.10 i386&lt;br /&gt;
* Networking: Priv IP: 10.1.4.117 (PCI nic), Pub IP: 69.55.228.2 (onboard nics)&lt;br /&gt;
* Hardware: Supermicro (custom build). 6 SCA SCSI drive bays (2 columns of 3, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 74 GB (4 x 74GB) RAID1 arrays, one 146 GB (2 x 146GB) RAID1 array, running on an Adaptec 2120S  RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Disk accounting: vinum&lt;br /&gt;
* Host of devweb.johncompanies.com and www.utopian.com/mail.utopian.com&lt;br /&gt;
&lt;br /&gt;
== jail18 ==&lt;br /&gt;
* Location: castle, cab 3-5  SCHEDULED SHUTDOWN 9/30/19&lt;br /&gt;
* OS: FreeBSD 4.10 i386&lt;br /&gt;
* Networking: Priv IP: 10.1.4.118 (PCI nic), Pub IP: 69.55.228.2 (onboard nics)&lt;br /&gt;
* Hardware: Supermicro (custom build). 6 SCA SCSI drive bays (2 columns of 3, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 74 GB (4 x 74GB) RAID1 arrays, one 146 GB (2 x 146GB) RAID1 array, running on an Adaptec 2120S  RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Disk accounting: vinum&lt;br /&gt;
* Host of ns2c.johncompanies.com (now running on ganeti)&lt;br /&gt;
&lt;br /&gt;
== jail19 ==&lt;br /&gt;
* Location: castle, cab 3-5  SCHEDULED SHUTDOWN 9/30/19&lt;br /&gt;
* OS: FreeBSD 6.1 i386&lt;br /&gt;
* Networking: Priv IP: 10.1.4.119 (PCI nic), Pub IP: 69.55.228.200 (onboard nics)&lt;br /&gt;
* Hardware: Supermicro (custom build). 6 SCA SCSI drive bays (2 columns of 3, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: one 74 GB (2 x 74GB) RAID1 array, one 146 GB (2 x 146GB) RAID1 array, running on an Adaptec 2120S  RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Disk accounting: gvinum&lt;br /&gt;
&lt;br /&gt;
= virts =&lt;br /&gt;
&lt;br /&gt;
== quar1 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, SHUTDOWN AND SCRAPPED&lt;br /&gt;
* OS: RedHat 7.3 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.151 (PCI nic), Pub IP: 69.55.227.2 (onboard nic)&lt;br /&gt;
* Hardware: Dell 2450. 4 SCSI SCA drive bays (2 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: one 36 GB (2 x 36GB) RAID1 array, one 74 GB (2 x 74GB) RAID1 array, running on an Adaptec-based, Dell-branded (perc) RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Virtuozzo version: 2.6.1&lt;br /&gt;
* VZ license: hwid=23C0.C0E1.6FDD.08BA.8971.8E1C.EBD5.1EDC serial=0DE6.903E.E239.E23F.470C.4369.4104.A5A4&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
* used to be the home of customers whose VEs would run out of control or behave badly&lt;br /&gt;
* has a maximum of 10 VEs allowed to run&lt;br /&gt;
&lt;br /&gt;
== virt9 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-7&lt;br /&gt;
* OS: RedHat 7.3 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.59 (PCI nic), Pub IP: 69.55.226.161 (onboard nic)&lt;br /&gt;
* Hardware: Dell 2450. 4 SCSI SCA drive bays (2 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 74 GB (2 x 74GB) RAID1 arrays running on an Adaptec-based, Dell-branded (perc) RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Virtuozzo version: 2.6.1&lt;br /&gt;
* VZ license: hwid=BC15.B4D6.0D25.A5FE.F3BA.D518.E351.AE3F serial=F6AD.B6B4.5650.8869.C97C.73EE.AF65.FA8B&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== virt11 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-6&lt;br /&gt;
* OS: CentOS 5.4 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.61, Pub IP: 69.55.238.3, 2 onboard nics&lt;br /&gt;
* Hardware: Dell 2950. 6 SATA/SAS drive bays (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: one 146 GB (2 x 146GB) RAID1 array, one 400 GB (2 x 400GB) RAID1 array, one 146 GB (2 x 146GB) RAID1 array, running on an LSI-based, Dell-branded (perc 6/i) RAID card.&lt;br /&gt;
* Remote management: [[DRAC/RMM|DRAC]] @ 10.1.4.211&lt;br /&gt;
* Virtuozzo version: 4.0.0&lt;br /&gt;
* VZ license: hwid=&amp;quot;029D.A187.78E1.480F.49E3.E20A.7389.7F79&amp;quot; serial=&amp;quot;163C.F3E2.195F.96B5.2D38.8937.9600.4A05&amp;quot;  key_number=&amp;quot;VZ.00172378.0006&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== virt12 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-7&lt;br /&gt;
* OS: CentOS 5.2 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.62, Pub IP: 69.55.227.70, 2 onboard nics&lt;br /&gt;
* Hardware: Dell 2950. 6 SATA/SAS drive bays (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 300 GB (2 x 300GB) RAID1 arrays one 400 GB (2 x 400GB) RAID1 array, running on an LSI-based, Dell-branded (perc 6/i) RAID card.&lt;br /&gt;
* Remote management: [[DRAC/RMM|DRAC]] @ 10.1.4.212&lt;br /&gt;
* Virtuozzo version: 4.0.0&lt;br /&gt;
* VZ license: hwid=&amp;quot;0C53.A413.E095.B4F4.51BC.D740.6919.A77B&amp;quot; serial=&amp;quot;84E5.9498.3759.E683.E24B.2514.CA72.DC31&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== virt13 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 6-17&lt;br /&gt;
* Switch port:  P13-&lt;br /&gt;
* OS: CentOS 6.2 x86_64&lt;br /&gt;
* Networking: Priv IP: 10.1.4.63, Pub IP: 69.55.226.2, 2 onboard nics&lt;br /&gt;
* Hardware: Dell 2950. 6 SATA/SAS drive bays (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* CPU:    2 x Intel(R) Xeon(R) CPU E5420  @ 2.50GHz  (8 virtual cores)&lt;br /&gt;
* RAM:    32 GB (8 x 4GB DDR2 FB-DIMM ECC 667MHz)&lt;br /&gt;
* Drives: one 146 GB (2 x 146GB) RAID1 array, one 600 GB (2 x 600GB) RAID1 array, one 146 GB (2 x 146GB) RAID1 array, running on an LSI-based, Dell-branded (perc 6/i) RAID card.&lt;br /&gt;
* Remote management: [[DRAC/RMM|DRAC]] @ 10.1.4.213&lt;br /&gt;
* Virtuozzo version: 4.7.0&lt;br /&gt;
* VZ license: hwid=&amp;quot;7D07.93BE.0B1F.7D2B.B039.4B5B.48B6.453B&amp;quot; serial=&amp;quot;60A4.A94C.44BB.DCD6.8D03.1778.605B.10FE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
* home to our latest/current signups&lt;br /&gt;
* currently the only 64bit vz host&lt;br /&gt;
&lt;br /&gt;
== virt14 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 6-16&lt;br /&gt;
* Switch Port: p13-&lt;br /&gt;
* OS: CentOS 6.4 x86_64&lt;br /&gt;
* Networking: Priv IP: 10.1.4.64, Pub IP: 69.55.225.14, 2 onboard nics&lt;br /&gt;
* Hardware: Dell 2950. 6 SATA/SAS drive bays (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply.&lt;br /&gt;
* CPU: 2 x Xeon 5140 Dual Core @ 2.33GHz (4 virtual CPUs)&lt;br /&gt;
* RAM: 32 GB  (8 x 4GB Reg ECC)&lt;br /&gt;
* Drives: one 146 GB (2 x 146 GB SAS) RAID1 array, and one 1TB RAID1 array (2 x 1 TB SATA), running on an LSI-based, Dell-branded (perc 5/i) RAID card.&lt;br /&gt;
* Remote management: [[DRAC/RMM|DRAC]] @ 10.1.4.214&lt;br /&gt;
* Virtuozzo version: 4.7.0&lt;br /&gt;
* VZ license: hwid=&amp;quot;&amp;quot; serial=&amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
* our latest virt &lt;br /&gt;
* Temp server to offload Virt13 till we can get a Cloud going.&lt;br /&gt;
* virt 13 and 14 currently the only 64bit vz hosts&lt;br /&gt;
&lt;br /&gt;
== virt15 ==&lt;br /&gt;
&lt;br /&gt;
* Location: SHUTDOWN&lt;br /&gt;
* OS: RedHat 9 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.65, Pub IP: 69.55.232.160 (2 onboard nics)&lt;br /&gt;
* Hardware: Supermicro (custom build). 6 SCSI SCA drive bays (2 columns of 3, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: one 74 GB (2 x 74GB) RAID1 array, two 146 GB (2 x 146GB) RAID1 arrays, running on an LSI MegaRAID SCSI 320-1 RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Virtuozzo version: 2.6.2&lt;br /&gt;
* VZ license: hwid=A90F.6F48.E723.D8BA.3025.184A.5B73.D11E serial=E94B.5164.C1E6.A67F.67D1.7D96.0B6C.5524&lt;br /&gt;
&lt;br /&gt;
== virt16 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-7&lt;br /&gt;
* OS: Fedora Core 4 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.66, Pub IP: 69.55.232.2 (2 onboard nics)&lt;br /&gt;
* Hardware: Supermicro (custom build). 6 SCSI SCA drive bays (2 columns of 3, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: one 74 GB (2 x 74GB) RAID1 array, one 146 GB (2 x 146GB) RAID1 array, running on an LSI MegaRAID SCSI 320-1 RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Virtuozzo version: 3.0.0&lt;br /&gt;
* VZ license: hwid=DEFA.A325.7230.BBC8.9715.8B52.3FD7.27BE serial=66C0.41EA.3FBB.11D3.9CC6.55C7.09AE.14AB&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== virt17 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-6&lt;br /&gt;
* OS: CentOS 4.4 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.67, Pub IP: 69.55.232.162, 2 onboard nics&lt;br /&gt;
* Hardware: Dell 2950. 6 SATA/SAS drive bays (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 146 GB (2 x 146GB) RAID1 arrays running on an LSI-based, Dell-branded (perc 5/i) RAID card.&lt;br /&gt;
* Remote management: [[DRAC/RMM|DRAC]] @ 10.1.4.217&lt;br /&gt;
* Virtuozzo version: 3.0.0&lt;br /&gt;
* VZ license: hwid=2E14.AED9.70B8.C26E.D99F.B0D3.BCD2.229C serial=2A11.DAD0.61DB.E889.8DF4.9AF7.CF82.3C37&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== virt19 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-6&lt;br /&gt;
* OS: CentOS 5.2 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.69, Pub IP: 69.55.236.2, 2 onboard nics&lt;br /&gt;
* Hardware: Dell 2950. 6 SATA/SAS drive bays (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: one 146 GB (2 x 146GB) RAID1 array, one 300 GB (2 x 300GB) RAID1 array, running on an LSI-based, Dell-branded (perc 5/i) RAID card.&lt;br /&gt;
* Remote management: [[DRAC/RMM|DRAC]] @ 10.1.4.219&lt;br /&gt;
* Virtuozzo version: 3.0.0&lt;br /&gt;
* VZ license: hwid=3968.13F7.B2AC.8952.8E19.13A9.6EF5.5822 serial=061D.84CD.CCE5.B213.15B5.C061.D6A7.B034&lt;br /&gt;
&lt;br /&gt;
= mail =&lt;br /&gt;
== Summary ==&lt;br /&gt;
This machine (mail) is the swiss army knife of the company, playing host to many services and functions. &lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-7&lt;br /&gt;
* OS: FreeBSD 4.10 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.5, Pub IPs: 69.55.230.2, 69.55.225.225 (ns1c jail), 69.55.230.9. 1 onboard and 1 PCI&lt;br /&gt;
* Hardware: Dell 2450. 4 SCSI SCA drive bays (2 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 36 GB (2 x 36GB) RAID1 arrays running on an Adaptec-based, Dell-branded (perc) RAID card.&lt;br /&gt;
&lt;br /&gt;
== Services Provided ==&lt;br /&gt;
* mail&lt;br /&gt;
* web&lt;br /&gt;
* mysql&lt;br /&gt;
* bigbrother server/pager&lt;br /&gt;
* snmp&lt;br /&gt;
* named in a jail (ns1c)&lt;br /&gt;
&lt;br /&gt;
== email ==&lt;br /&gt;
This server hosts mail for johncompanies.com (mail.johncompanies.com). Sendmail 8.13.6/8.13.6 is listening on 69.55.230.2 port 25 for incoming mail. Relaying is allowed per /etc/mail/relay-domains&lt;br /&gt;
Other addresses (aliases) are defined per /etc/mail/aliases&lt;br /&gt;
&lt;br /&gt;
The following active users have mail hosted on this server:&lt;br /&gt;
* dave&lt;br /&gt;
* linux&lt;br /&gt;
* support&lt;br /&gt;
* payments&lt;br /&gt;
* sales&lt;br /&gt;
* tech1&lt;br /&gt;
* info&lt;br /&gt;
&lt;br /&gt;
Traditionally, mail is checked via shell apps (pine). qpopper (pop3s) is running to allow mail downloading. Downloading mail via pop locks an opened INBOX in pine read-only, so we tee incoming mail for support and linux to tech1.&lt;br /&gt;
&lt;br /&gt;
Procmail rules are setup to filter spam and send text messages. They are enabled for info, support, linux, tech1, dave and can be found in ~/Procmail/, for example:&lt;br /&gt;
&amp;lt;pre&amp;gt;# more ~support/Procmail/rc.emergency&lt;br /&gt;
:0c # use c only if you want to forward a copy and file the original later&lt;br /&gt;
* ^Subject:.*\&amp;lt;emergency\&amp;gt;&lt;br /&gt;
* ! ^Subject:\&amp;lt;re\&amp;gt;&lt;br /&gt;
  {&lt;br /&gt;
   :0h&lt;br /&gt;
   FROMANDSUBJECT=|formail -XFrom: -XSubject:&lt;br /&gt;
&lt;br /&gt;
   :0fwh&lt;br /&gt;
   | /usr/local/bin/formail -I&amp;quot;Subject: &amp;quot; -I&amp;quot;To: pager@johncompanies.com&amp;quot; ; echo $FROMANDSUBJECT ; echo&lt;br /&gt;
&lt;br /&gt;
   :0&lt;br /&gt;
  ! -t&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
control: &amp;lt;tt&amp;gt;cd /etc/mail; make stop&amp;lt;/tt&amp;gt; (stop), &amp;lt;tt&amp;gt;cd /etc/mail; make start&amp;lt;/tt&amp;gt; (start)&lt;br /&gt;
&lt;br /&gt;
The following aliases are also in place:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;debian:         linux&lt;br /&gt;
jobs:   info&lt;br /&gt;
careers:        info&lt;br /&gt;
#reboot:         6128102202@txt.att.net&lt;br /&gt;
#reboot:         8582298897@vtext.com&lt;br /&gt;
reboot:         pager&lt;br /&gt;
#pager: 8582298897@vtext.com&lt;br /&gt;
pager:  4158718324@txt.att.net&lt;br /&gt;
tech1on:  &amp;quot;| /usr/local/sbin/tech1on.sh&amp;quot;&lt;br /&gt;
tech1off:  &amp;quot;| /usr/local/sbin/tech1off.sh&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To change them, edit &amp;lt;tt&amp;gt;/etc/aliases&amp;lt;/tt&amp;gt; and then run &amp;lt;tt&amp;gt;newaliases&amp;lt;/tt&amp;gt;&lt;br /&gt;
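The edit-and-rebuild step can be sketched as follows; this is a hedged, sandbox-safe illustration that works on a scratch copy of the file (the alias name billing is hypothetical), and on mail itself you would edit /etc/aliases directly and then run newaliases.&lt;br /&gt;

```shell
# Illustrative sketch only: append an alias to a scratch copy of the
# aliases file ("billing" is a made-up alias name, not from /etc/aliases).
cp /dev/null /tmp/aliases.demo
echo 'billing:  info' >> /tmp/aliases.demo
grep '^billing:' /tmp/aliases.demo
# On mail itself: edit /etc/aliases, then run newaliases to rebuild the db.
```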
&lt;br /&gt;
Note on tech1: this address was set up as a read-only mirror of all email coming into support and linux. We set this up so we could easily check support mail via a pop client; popping email locks out the user in pine, so checking support/linux directly via pop was not an option. When checking and responding to email that comes into tech1, take care to send the reply as/under an address other than tech1: tech1 is not monitored by support staff as closely as support/linux, and the tech on call may not be checking tech1. Lastly, because of the copying, you will sometimes notice certain automated emails/notices arrive twice in support; this is due to the tech1 mirror.&lt;br /&gt;
&lt;br /&gt;
To enable it (on mail, run):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;~support/tech1on.sh&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To disable&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;~support/tech1off.sh&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Or via email:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;tech1on@johncompanies.com&lt;br /&gt;
tech1off@johncompanies.com&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== IP Blocking ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
01000 deny ip from 188.92.72.5 to any&lt;br /&gt;
01003 deny ip from any to 122.49.31.50&lt;br /&gt;
01004 deny ip from 122.49.31.50 to any&lt;br /&gt;
01014 deny ip from 74.208.225.225 to any&lt;br /&gt;
01015 deny ip from any to 216.243.118.35&lt;br /&gt;
01016 deny ip from 216.243.118.35 to any&lt;br /&gt;
01017 deny ip from any to 216.243.118.36&lt;br /&gt;
01018 deny ip from 216.243.118.36 to any&lt;br /&gt;
01020 deny ip from 112.215.0.0/18 to any   2014-08-13 Blocked PT Excelcomindo Pratama (Indonesia) for fraudulent credit card attempts &lt;br /&gt;
01020 deny ip from 112.215.64.0/20 to any  2014-08-13 Blocked PT Excelcomindo Pratama (Indonesia) for fraudulent credit card attempts &lt;br /&gt;
01022 deny ip from 120.168.0.0/24 to any   2014-08-13 Blocked Indosat 3G Broadband (Indonesia) for fraudulent credit card attempts&lt;br /&gt;
01022 deny ip from 120.175.213.0/24 to any 2014-08-13 Blocked Indosat 3G Broadband (Indonesia) for fraudulent credit card attempts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== web == &lt;br /&gt;
&lt;br /&gt;
See [[Management_System_/_Public_Website_/_Signup|Management System / Public Website / Signup]]&lt;br /&gt;
&lt;br /&gt;
== mysql ==&lt;br /&gt;
mysql 4.1.22 is running on port 3306&lt;br /&gt;
&lt;br /&gt;
* datadir: &amp;lt;tt&amp;gt;/mnt/data1/db/mysql/&amp;lt;/tt&amp;gt;&lt;br /&gt;
* config: &amp;lt;tt&amp;gt;/etc/my.cnf&amp;lt;/tt&amp;gt;&lt;br /&gt;
* database: &amp;lt;tt&amp;gt;jc&amp;lt;/tt&amp;gt;&lt;br /&gt;
* control: &amp;lt;tt&amp;gt;/usr/local/etc/rc.d/mysql-server.sh stop&amp;lt;/tt&amp;gt; (stop), &amp;lt;tt&amp;gt;/usr/local/etc/rc.d/mysql-server.sh start&amp;lt;/tt&amp;gt; (start)&lt;br /&gt;
&lt;br /&gt;
== bigbrother ==&lt;br /&gt;
There is a client running on mail (which monitors the services running on mail and mail itself), installed under &amp;lt;tt&amp;gt;/usr/home/bb/bbc1.9e-btf&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
And the big brother pager/server (which displays information gathered from all bb-monitored machines, including mail) is installed under &amp;lt;tt&amp;gt;/usr/home/bb/bbsrc/bb1.9i-btf&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Both are running under the user &amp;lt;tt&amp;gt;bb&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Refer to [[BigBrother]] for more about use.&lt;br /&gt;
&lt;br /&gt;
== DNS (ns1c.johncompanies.com) ==&lt;br /&gt;
ns1c is a jail running on the mail server, whose IP is 69.55.225.225&lt;br /&gt;
&lt;br /&gt;
It&#039;s running from &amp;lt;tt&amp;gt;/mnt/data1/ns1c-dir&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See [[DNS]] for more details&lt;br /&gt;
&lt;br /&gt;
== Usage and Notes ==&lt;br /&gt;
* always mounted to backup1 and backup2 via nfs: &lt;br /&gt;
&amp;lt;pre&amp;gt;backup2:/mnt/data1 on /backup (nfs)&lt;br /&gt;
backup2:/mnt/data2 on /backup2 (nfs)&lt;br /&gt;
backup2:/mnt/data3 on /backup3 (nfs)&lt;br /&gt;
backup2:/mnt/data4 on /backup4 (nfs)&lt;br /&gt;
backup1:/data on /backup1 (nfs)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Cronjobs ==&lt;br /&gt;
 * * * * * /usr/local/www/mgmt/mrtg/mrtg.sh &amp;gt; /dev/null 2&amp;gt;&amp;amp;1&lt;br /&gt;
Gathers up data for our mrtg/load graphs&lt;br /&gt;
&lt;br /&gt;
 */5 * * * * /usr/local/bin/rsync -a root@nat2:/mnt/data1/mrtg/data/ /usr/local/www/mgmt/mrtg/data/&lt;br /&gt;
Gathers up data from i2b servers for our mrtg/load graphs&lt;br /&gt;
&lt;br /&gt;
 40 0 * * * /usr/local/bin/rsync -a root@nat2:&amp;quot;/mnt/data1/mrtg/*.cfg&amp;quot; /usr/local/www/mgmt/mrtg&lt;br /&gt;
Gathers up mrtg configuration (port names) from i2b switches for our mrtg/load graphs&lt;br /&gt;
&lt;br /&gt;
 41 0 * * * for f in `grep -l &amp;quot;mnt\/data1&amp;quot; /usr/local/www/mgmt/mrtg/switch-p*.cfg`; do cat $f | sed s#\/mnt\/data1#\/usr\/local\/www\/mgmt# &amp;gt; $f.new; mv $f.new $f; done&lt;br /&gt;
Gathers up mrtg configuration (port names) from castle switches for our mrtg/load graphs&lt;br /&gt;
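That 41 0 * * * one-liner is dense; here is a hedged, self-contained sketch of the same rewrite run against a sample file in /tmp (file name and contents are illustrative, not real mrtg data):&lt;br /&gt;

```shell
# Demo of the nightly rewrite: replace /mnt/data1 with /usr/local/www/mgmt
# in every switch-p*.cfg that mentions it (sample data created here).
mkdir -p /tmp/mrtgdemo && cd /tmp/mrtgdemo
echo 'WorkDir: /mnt/data1/mrtg' > switch-p1.cfg
for f in $(grep -l 'mnt/data1' switch-p*.cfg); do
  sed 's#/mnt/data1#/usr/local/www/mgmt#' "$f" > "$f.new" && mv "$f.new" "$f"
done
cat switch-p1.cfg   # now reads: WorkDir: /usr/local/www/mgmt/mrtg
```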
&lt;br /&gt;
 1 0 1 * * cp /usr/local/www/mgmt/html/top20ip /usr/local/www/mgmt/html/top20ip_last&lt;br /&gt;
 1 0 1 * * cp /usr/local/www/mgmt/html/top20customers /usr/local/www/mgmt/html/top20customers_last&lt;br /&gt;
 2 * * * * /usr/local/www/cronjobs/top20ip.pl &amp;gt; /dev/null 2&amp;gt;&amp;amp;1&lt;br /&gt;
 15 * * * * /usr/local/www/cronjobs/top20customer.pl &amp;gt; /dev/null 2&amp;gt;&amp;amp;1&lt;br /&gt;
 1 0 1 * * rm /usr/local/www/mgmt/html/bandtrack&lt;br /&gt;
Archiving and generation of bandwidth statistics presented in mgmt -&amp;gt; Reference -&amp;gt; Bandwidth&lt;br /&gt;
&lt;br /&gt;
 1 0 * * * /usr/local/etc/rsync.backup&lt;br /&gt;
Nightly backup script&lt;br /&gt;
&lt;br /&gt;
 0 1 * * * /usr/local/www/mgmt/awstats/wwwroot/cgi-bin/awstats.pl -config=jcpub -update&lt;br /&gt;
Public web traffic stats&lt;br /&gt;
&lt;br /&gt;
 15 0 * * * rm /usr/local/www/mgmt/bwgraphs/*.png&lt;br /&gt;
 16 0 * * * rm /usr/local/www/am/bwgraphs/*&lt;br /&gt;
Cleanup for graph-related temp data generated by customers using the bandwidth reports via the AM &lt;br /&gt;
&lt;br /&gt;
 10 0 1 * * /usr/local/www/cronjobs/monthly_bandwidth_report.pl&lt;br /&gt;
Monthly bandwidth overage report&lt;br /&gt;
&lt;br /&gt;
 */3 * * * * /usr/local/www/cronjobs/bbcheck.pl&lt;br /&gt;
Updates mgmt with bb monitoring issues&lt;br /&gt;
&lt;br /&gt;
 5 0 * * * /usr/local/www/cronjobs/shutdownreminder.pl&lt;br /&gt;
Emails customers reminding them of upcoming shutdown date&lt;br /&gt;
&lt;br /&gt;
 7 0 * * * /usr/local/www/cronjobs/invoice_email.pl&lt;br /&gt;
Emails customers who have invoices and are set to auto-email (currently no customer gets these)&lt;br /&gt;
&lt;br /&gt;
 8 */4 * * * /usr/local/www/cronjobs/mysqlrepchk.pl&lt;br /&gt;
Checks that we are properly replicating (mysql) traffic data from bwdb to backup1&lt;br /&gt;
&lt;br /&gt;
 16 0 1 * * /usr/local/www/cronjobs/purge_traffic.pl&lt;br /&gt;
Removes old traffic data from the traffic database (running on backup1)&lt;br /&gt;
&lt;br /&gt;
 */5 * * * * chmod 0700 /usr/local/www/ccard_orders/* &amp;amp;&amp;amp; mv /usr/local/www/ccard_orders/* /usr/local/www/ccard_orders/done&lt;br /&gt;
Secure credit card data: set root-read-only &lt;br /&gt;
&lt;br /&gt;
 25 0 * * * /usr/local/www/cronjobs/biller.pl&lt;br /&gt;
Enters service charges in customer billing ledgers&lt;br /&gt;
&lt;br /&gt;
 10 13 * * * /usr/local/www/cronjobs/pfp_batch_gather.pl&lt;br /&gt;
Looks for customers with balance due and active credit card on file, prepares a payflow batch&lt;br /&gt;
&lt;br /&gt;
 10 14 * * * /usr/local/www/cronjobs/pfp_batch_process.pl&lt;br /&gt;
Tries to collect ccard funds for items in payflow batch - communicates with payflow&lt;br /&gt;
&lt;br /&gt;
 15 13 * * * /usr/local/www/cronjobs/pb_batch_gather.pl&lt;br /&gt;
Looks for customers with balance due and active paypal billing agreement on file, prepares a paypal batch&lt;br /&gt;
&lt;br /&gt;
 15 14 * * * /usr/local/www/cronjobs/pb_batch_process.pl&lt;br /&gt;
Tries to collect paypal funds for items in paypal batch - communicates with paypal&lt;br /&gt;
&lt;br /&gt;
 0 7 * * 1 /usr/local/www/cronjobs/email_pmt_reminder.pl&lt;br /&gt;
Emails customers in arrears, reminding them to pay&lt;br /&gt;
&lt;br /&gt;
 0 0 1 * * /usr/bin/mail -s &#039;archive sent mail in pine&#039; support@johncompanies.com &amp;lt; /dev/null&lt;br /&gt;
Reminds us to archive sent mail&lt;br /&gt;
&lt;br /&gt;
 0 3 * * * /usr/local/bin/rsync -a isys.e-monitoring.net:/var/mail /backup2/isys; /usr/local/bin/rsync -a isys.e-monitoring.net:/usr/home /backup2/isys&lt;br /&gt;
Backup data on isys&lt;br /&gt;
&lt;br /&gt;
== Regular maintenance ==&lt;br /&gt;
*[[Routine_Maintenance#Adaptec_Controllers|Check RAID array]]&lt;br /&gt;
&lt;br /&gt;
== Building a new Mail Server ==&lt;br /&gt;
&lt;br /&gt;
=== Installations ===&lt;br /&gt;
&lt;br /&gt;
I used FreeBSD 11.2 &lt;br /&gt;
&lt;br /&gt;
The order is important, especially for the Web Server.&lt;br /&gt;
&lt;br /&gt;
==== Web Server ====&lt;br /&gt;
&lt;br /&gt;
I used FreeBSD 11.2&lt;br /&gt;
  perl 5.26 &lt;br /&gt;
  OpenSSL 1.0.2o-freebsd&lt;br /&gt;
  pcre&lt;br /&gt;
  apache22 &lt;br /&gt;
  mod_perl2&lt;br /&gt;
  PayflowPro&lt;br /&gt;
  mariadb 55 server and client&lt;br /&gt;
&lt;br /&gt;
Installation order is important&lt;br /&gt;
&lt;br /&gt;
install perl 5.26.2 from ports&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /usr/ports/lang/perl5.26/&lt;br /&gt;
make &lt;br /&gt;
[X] PERL_64BITINT  Use 64 bit integers (on i386)&lt;br /&gt;
[X] USE_PERL       Rewrite links in /usr/bin&lt;br /&gt;
(the rest unchecked)&lt;br /&gt;
make install&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
install OpenSSL 1.0.2o-freebsd&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /usr/ports/security/openssl&lt;br /&gt;
make install&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
install pcre&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /usr/ports/devel/pcre&lt;br /&gt;
make install&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
install Apache22&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /usr/ports/distfiles&lt;br /&gt;
fetch http://archive.apache.org/dist/httpd/httpd-2.2.32.tar.gz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
cd /usr/ports/www/apache22/tmp&lt;br /&gt;
fetch --no-verify-peer http://mirror.nexcess.net/apache//httpd/httpd-2.2.34.tar.gz&lt;br /&gt;
tar xvzf httpd-2.2.34.tar.gz&lt;br /&gt;
./configure --prefix=/usr/local/apache --with-ssl=/usr/local/openssl/ --enable-ssl --enable-so --with-mpm=prefork --enable-threads --enable-mods-shared=&#039;mime alias setenvif dir&#039; --enable-modules=&#039;mime alias setenvif dir&#039; --with-pcre=/usr/local&lt;br /&gt;
make install&lt;br /&gt;
apachectl restart&lt;br /&gt;
&lt;br /&gt;
cd /usr/ports/www/apache22&lt;br /&gt;
echo &amp;quot;DEFAULT_VERSIONS+=apache=2.2&amp;quot; &amp;gt;&amp;gt; /etc/make.conf&lt;br /&gt;
make DISABLE_VULNERABILITIES=yes&lt;br /&gt;
make install&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
install mod_perl2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /usr/ports/www/mod_perl2&lt;br /&gt;
echo &amp;quot;DEFAULT_VERSIONS+=apache=2.2&amp;quot; &amp;gt;&amp;gt; /etc/make.conf&lt;br /&gt;
make DISABLE_VULNERABILITIES=yes&lt;br /&gt;
make install&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
install mariadb&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /usr/ports/databases/mariadb-103-server&lt;br /&gt;
echo &amp;quot;DEFAULT_VERSIONS+=apache=2.2&amp;quot; &amp;gt;&amp;gt; /etc/make.conf&lt;br /&gt;
make DISABLE_VULNERABILITIES=yes&lt;br /&gt;
make install&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Mail Server ====&lt;br /&gt;
&lt;br /&gt;
I used Postfix for email&lt;br /&gt;
&lt;br /&gt;
==== DNS Server (ns1c.johncompanies.com) ====&lt;br /&gt;
&lt;br /&gt;
= ns2c =&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, on lamphost ganeti cloud&lt;br /&gt;
* OS: FreeBSD 11.2 x86_64&lt;br /&gt;
* Networking: Pub IP: 69.55.230.3   Private access: gnt-instance console ns2c.johncompanies.com&lt;br /&gt;
* Hardware: on ganeti cloud gn6.jcihosting.net  secondary gn1.jcihosting.net&lt;br /&gt;
* CPU:  1&lt;br /&gt;
* RAM: 1 GB&lt;br /&gt;
* Drives: 10 GB&lt;br /&gt;
* Remote management:  gnt-instance console ns2c.johncompanies.com&lt;br /&gt;
&lt;br /&gt;
= nat =&lt;br /&gt;
== Summary ==&lt;br /&gt;
This is the main machine to which we ssh, and it runs all our screen sessions. Further, its IP is in a special block which is not routed through the firewall, so it is somewhat immune to DoS attacks that would hobble our firewall. Lastly, it acts as a NAT server for certain/random devices on the private network.&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-7&lt;br /&gt;
* OS: FreeBSD 9.1 i386&lt;br /&gt;
* Networking: Priv IP: 10.1.4.1, Pub IPs: 69.55.233.195, 69.55.233.196, 69.55.233.197, 69.55.233.198, 69.55.233.199. 1 onboard and 1 PCI&lt;br /&gt;
* Hardware: Custom 1U. single power supply. &lt;br /&gt;
* Drives: one 8 GB IDE drive&lt;br /&gt;
&lt;br /&gt;
== Services Provided ==&lt;br /&gt;
* nat&lt;br /&gt;
&lt;br /&gt;
== nat control ==&lt;br /&gt;
All rules are contained in /etc/ipnat.rules and look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cat /etc/ipnat.rules&lt;br /&gt;
# www (was 69.55.230.12)&lt;br /&gt;
# virt19&lt;br /&gt;
#bimap fxp0 10.1.4.209/32 -&amp;gt; 69.55.233.198/32&lt;br /&gt;
# virt18&lt;br /&gt;
#bimap fxp0 10.1.4.208/32 -&amp;gt; 69.55.233.196/32&lt;br /&gt;
# virt13&lt;br /&gt;
#bimap fxp0 10.1.4.213/32 -&amp;gt; 69.55.233.196/32&lt;br /&gt;
# virt12&lt;br /&gt;
#bimap fxp0 10.1.4.212/32 -&amp;gt; 69.55.233.196/32&lt;br /&gt;
# virt17&lt;br /&gt;
bimap fxp0 10.1.4.217/32 -&amp;gt; 69.55.233.196/32&lt;br /&gt;
# virt11&lt;br /&gt;
#bimap fxp0 10.1.4.211/32 -&amp;gt; 69.55.233.196/32&lt;br /&gt;
# ASA&lt;br /&gt;
#bimap fxp0 10.1.4.172/32 -&amp;gt; 69.55.233.196/32&lt;br /&gt;
# P1A&lt;br /&gt;
bimap fxp0 10.1.4.240/32 -&amp;gt; 69.55.233.197/32&lt;br /&gt;
#bimap fxp0 10.1.4.238/32 -&amp;gt; 69.55.233.197/32&lt;br /&gt;
# developer (was 69.55.230.17)&lt;br /&gt;
# jail2&lt;br /&gt;
#bimap fxp0 10.1.4.232/32 -&amp;gt; 69.55.233.198/32&lt;br /&gt;
# jail8&lt;br /&gt;
#bimap fxp0 10.1.4.238/32 -&amp;gt; 69.55.233.198/32&lt;br /&gt;
# jail9&lt;br /&gt;
#bimap fxp0 10.1.4.239/32 -&amp;gt; 69.55.233.198/32&lt;br /&gt;
# POLL&lt;br /&gt;
#BIMAP EM0 10.1.6.134/32 -&amp;gt; 69.55.230.20/32&lt;br /&gt;
# 1U SUN&lt;br /&gt;
#BIMAP EM0 10.1.4.4/32 -&amp;gt; 69.55.227.46/32&lt;br /&gt;
# ??&lt;br /&gt;
#BIMAP EM0 10.1.6.3/32 -&amp;gt; 69.55.230.100/32&lt;br /&gt;
# random machine&lt;br /&gt;
#bimap fxp0 10.1.6.13/32 -&amp;gt; 69.55.233.199/32&lt;br /&gt;
#bimap fxp0 10.1.4.232/32 -&amp;gt; 69.55.233.199/32&lt;br /&gt;
# OFFICE OUTBOUND TRAFFIC&lt;br /&gt;
#map fxp0 10.1.6.0/24 -&amp;gt; 0.0.0.0/32 proxy port ftp ftp/tcp&lt;br /&gt;
#map fxp0 10.1.6.0/24 -&amp;gt; 0.0.0.0/32&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A simple entry looks like:&lt;br /&gt;
 bimap fxp0 10.1.4.240/32 -&amp;gt; 69.55.233.197/32&lt;br /&gt;
which essentially means: make private IP 10.1.4.240 reachable at 69.55.233.197, and allow 10.1.4.240 to communicate with the public internet via 69.55.233.197&lt;br /&gt;
&lt;br /&gt;
To reload new rule config:&lt;br /&gt;
 ipnat -C -F -f /etc/ipnat.rules&lt;br /&gt;
&lt;br /&gt;
You may want to set up natting, as above, when you need to reach a DRAC card&#039;s web interface, wherein the DRAC card only has a private IP.&lt;br /&gt;
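For example, a sketch of exposing a hypothetical DRAC at private IP 10.1.4.215 on public IP 69.55.233.199 (both addresses are illustrative, not real assignments):&lt;br /&gt;

```shell
# Hypothetical example (addresses are illustrative):
# 1. Append a bimap rule for the DRAC to /etc/ipnat.rules:
#      bimap fxp0 10.1.4.215/32 -> 69.55.233.199/32
# 2. Reload the ruleset, then list the active rules to confirm it took:
ipnat -C -F -f /etc/ipnat.rules
ipnat -l
```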
&lt;br /&gt;
= nat2 =&lt;br /&gt;
== Summary ==&lt;br /&gt;
This is the main machine to which we ssh at i2b; it runs all our screen sessions there and runs ns3c (it is roughly to i2b what mail is to castle). Further, its IP is in IP space provided by i2b (66.181.18.1 - 66.181.18.30) which is not routed through the firewall, so it is somewhat immune to DoS attacks that would hobble our firewall. Lastly, it acts as a NAT server for certain/random devices on the private network.&lt;br /&gt;
&lt;br /&gt;
* Location: i2b, cab 6&lt;br /&gt;
* OS: FreeBSD 6.4 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.2.1; Pub IPs: 69.55.229.2, 69.55.229.3, 66.181.18.4, 66.181.18.5, 66.181.18.6, 66.181.18.7, 66.181.18.8, 66.181.18.9, 66.181.18.10, 66.181.18.11, 66.181.18.12, 66.181.18.13, 66.181.18.14; NICs: 1 onboard and 1 PCI&lt;br /&gt;
* Hardware: Custom 2U. 6 drive bays, non-hot-swappable. Single power supply.&lt;br /&gt;
* Drives: one 150 GB (2 x 150GB) RAID1 array running on a 3ware 8006 RAID card.&lt;br /&gt;
&lt;br /&gt;
== Services Provided ==&lt;br /&gt;
* nat&lt;br /&gt;
* bigbrother&lt;br /&gt;
* ns3c (jail)&lt;br /&gt;
* ntp&lt;br /&gt;
&lt;br /&gt;
== nat config ==&lt;br /&gt;
Here&#039;s what&#039;s currently nat&#039;d on nat2:&lt;br /&gt;
&amp;lt;pre&amp;gt;cat /etc/ipnat.rules&lt;br /&gt;
# sample entry&lt;br /&gt;
#ATS-9&lt;br /&gt;
bimap em0 10.1.2.79/32 -&amp;gt; 66.181.18.14/32&lt;br /&gt;
#ATS-8&lt;br /&gt;
bimap em0 10.1.2.78/32 -&amp;gt; 66.181.18.13/32&lt;br /&gt;
#ATS-7&lt;br /&gt;
bimap em0 10.1.2.77/32 -&amp;gt; 66.181.18.12/32&lt;br /&gt;
#ATS-6&lt;br /&gt;
bimap em0 10.1.2.76/32 -&amp;gt; 66.181.18.6/32&lt;br /&gt;
#ATS-5&lt;br /&gt;
bimap em0 10.1.2.75/32 -&amp;gt; 66.181.18.7/32&lt;br /&gt;
#ATS-4&lt;br /&gt;
bimap em0 10.1.2.74/32 -&amp;gt; 66.181.18.8/32&lt;br /&gt;
#ATS-3&lt;br /&gt;
bimap em0 10.1.2.73/32 -&amp;gt; 66.181.18.9/32&lt;br /&gt;
#ATS-2&lt;br /&gt;
bimap em0 10.1.2.72/32 -&amp;gt; 66.181.18.10/32&lt;br /&gt;
#ATS-1&lt;br /&gt;
bimap em0 10.1.2.71/32 -&amp;gt; 66.181.18.11/32&lt;br /&gt;
#bwdb2&lt;br /&gt;
bimap em0 10.1.2.4/32 -&amp;gt; 66.181.18.5/32&lt;br /&gt;
&lt;br /&gt;
# spare&lt;br /&gt;
&lt;br /&gt;
map em0 10.1.2.0/24 -&amp;gt; 0.0.0.0/32 proxy port ftp ftp/tcp&lt;br /&gt;
&lt;br /&gt;
#bimap fxp0 10.1.6.49/32 -&amp;gt; 10.1.1.2/32&lt;br /&gt;
#bimap fxp0 10.1.4.208/32 -&amp;gt; 69.55.233.196/32&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== build ==&lt;br /&gt;
&lt;br /&gt;
* partition map:&lt;br /&gt;
&amp;lt;pre&amp;gt;/ 512m&lt;br /&gt;
swap 1G&lt;br /&gt;
/var 256m&lt;br /&gt;
/tmp 256m&lt;br /&gt;
/usr 5g&lt;br /&gt;
/mnt/data1 ~&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* edit /etc/make.conf &lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;WITHOUT_X11=yes \&lt;br /&gt;
KERNCONF=nat2 \&lt;br /&gt;
BOOT_COMCONSOLE_SPEED=115200&amp;quot; &amp;gt;&amp;gt; /etc/make.conf&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* add settings to /boot/loader.conf and /boot.config&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;-Dh&amp;quot; &amp;gt;&amp;gt; /boot.config&lt;br /&gt;
&lt;br /&gt;
echo &#039;console=&amp;quot;comconsole,vidconsole&amp;quot; \&lt;br /&gt;
boot_multicons=&amp;quot;YES&amp;quot; \&lt;br /&gt;
boot_serial=&amp;quot;YES&amp;quot; \&lt;br /&gt;
comconsole_speed=&amp;quot;115200&amp;quot;&#039; &amp;gt;&amp;gt; /boot/loader.conf&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* turn off all ttyv&#039;s except 0 and 1 in /etc/ttys&lt;br /&gt;
also turn on ttyd0, change type to vt100:&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/ttys&lt;br /&gt;
&lt;br /&gt;
ttyv2   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv3   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv4   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv5   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv6   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv7   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
# Serial terminals&lt;br /&gt;
# The &#039;dialup&#039; keyword identifies dialin lines to login, fingerd etc.&lt;br /&gt;
ttyd0   &amp;quot;/usr/libexec/getty std.9600&amp;quot;   vt100   on  secure&lt;br /&gt;
&lt;br /&gt;
kill -1 1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
on console server:&lt;br /&gt;
 vi /etc/remote&lt;br /&gt;
(rename the port entry, e.g. to jail8, depending on where and on which digi it is plugged in)&lt;br /&gt;
test serial console&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* populate hosts&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;69.55.230.10 backup2&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;69.55.230.11 backup1&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;10.1.2.4 bwdb2&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;10.1.2.3 backup3&amp;quot; &amp;gt;&amp;gt; /etc/hosts&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* put key in authorized_keys on backup3&lt;br /&gt;
&amp;lt;pre&amp;gt;cd&lt;br /&gt;
ssh-keygen -t dsa -b 1024&amp;lt;/pre&amp;gt; &lt;br /&gt;
(default location, leave password blank)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cat /root/.ssh/id_dsa.pub | ssh backup3 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
cat /root/.ssh/id_dsa.pub | ssh backup2 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
cat /root/.ssh/id_dsa.pub | ssh backup1 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039;&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
confirm that you can ssh to backup3, backup2, and backup1 without being prompted for a password:&lt;br /&gt;
&lt;br /&gt;
 ssh backup3 hostname&lt;br /&gt;
&lt;br /&gt;
 ssh backup2 hostname&lt;br /&gt;
&lt;br /&gt;
 ssh backup1 hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* edit root&#039;s path and login script:&lt;br /&gt;
 vi /root/.cshrc&lt;br /&gt;
&lt;br /&gt;
Change alias entries (add G):&lt;br /&gt;
&amp;lt;pre&amp;gt;alias la        ls -aG&lt;br /&gt;
alias lf        ls -FAG&lt;br /&gt;
alias ll        ls -lAG&lt;br /&gt;
alias ls        ls -AG&lt;br /&gt;
alias mbm       mb mount&lt;br /&gt;
alias mbu       mb umount&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and alter the prompt, set the following:&lt;br /&gt;
 set prompt = &amp;quot;`/bin/hostname -s` %/# &amp;quot;&lt;br /&gt;
&lt;br /&gt;
* install cvsup&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/ports/net/cvsup-without-gui &lt;br /&gt;
make install clean; rehash; mail -s &#039;cvs installed&#039; dave.boodman@vtext.com &amp;lt; /dev/null&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* get latest sources for this release:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src &lt;br /&gt;
echo &amp;quot;*default host=cvsup4.freebsd.org\&lt;br /&gt;
*default base=/usr\&lt;br /&gt;
*default prefix=/usr\&lt;br /&gt;
*default release=cvs tag=RELENG_6_4\&lt;br /&gt;
*default delete use-rel-suffix\&lt;br /&gt;
*default compress\&lt;br /&gt;
src-all&amp;quot; &amp;gt; sup&lt;br /&gt;
&lt;br /&gt;
cvsup sup ; mail -s &#039;cvs sup done&#039; dave.boodman@vtext.com &amp;lt; /dev/null&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* configure new kernel. &lt;br /&gt;
&lt;br /&gt;
 cd /usr/src/sys/i386/conf &lt;br /&gt;
 scp backup2:/mnt/data4/build/freebsd/nat2-6.4 ./nat2&lt;br /&gt;
&lt;br /&gt;
* build, install kernel and world&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /boot&lt;br /&gt;
&lt;br /&gt;
mv kernel kernel.GENERIC&lt;br /&gt;
cd kernel.GENERIC&lt;br /&gt;
cd /usr/src&lt;br /&gt;
make buildkernel installkernel&lt;br /&gt;
&lt;br /&gt;
make buildworld ; mail -s &#039;buildworld done&#039; support@johncompanies.com &amp;lt; /dev/null&lt;br /&gt;
make installworld &lt;br /&gt;
mergemaster -i&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* populate /etc/rc.conf with IPs and NFS settings&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
hostname=&amp;quot;nat2.johncompanies.com&amp;quot;&lt;br /&gt;
kern_securelevel_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
portmap_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
sendmail_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
usbd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
xntpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
xntpd_flags=&amp;quot;-A -p /var/run/ntpd.pid&amp;quot;&lt;br /&gt;
&lt;br /&gt;
nfs_client_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_reserved_port_only=&amp;quot;YES&amp;quot;&lt;br /&gt;
ifconfig_em0=&amp;quot;inet 10.1.6.50 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em0=&amp;quot;inet 69.55.229.2 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em0_alias0=&amp;quot;inet 69.55.229.229 netmask 255.255.255.255&amp;quot;&lt;br /&gt;
ifconfig_fxp0=&amp;quot;inet 69.55.229.2 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_fxp0_alias0=&amp;quot;inet 69.55.229.3 netmask 255.255.255.255&amp;quot;&lt;br /&gt;
ifconfig_fxp1=&amp;quot;inet 10.1.2.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
defaultrouter=&amp;quot;10.1.6.1&amp;quot;&lt;br /&gt;
#defaultrouter=&amp;quot; 66.181.14.250&amp;quot;&lt;br /&gt;
snmpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
ipnat_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
ipnat_rules=&amp;quot;/etc/ipnat.rules&amp;quot;&lt;br /&gt;
gateway_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
inetd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_flags=&amp;quot;-wW -a 10.1.2.1&amp;quot;&lt;br /&gt;
fsck_y_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
background_fsck=&amp;quot;NO&amp;quot;&lt;br /&gt;
sshd_enable=&amp;quot;YES&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* reboot. Confirm new kernel is loaded&lt;br /&gt;
&lt;br /&gt;
 uname -a&lt;br /&gt;
&lt;br /&gt;
* update ports:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/ports&lt;br /&gt;
echo &amp;quot;*default host=cvsup4.FreeBSD.org\&lt;br /&gt;
*default base=/usr\&lt;br /&gt;
*default prefix=/usr\&lt;br /&gt;
*default release=cvs tag=RELENG_6_4\&lt;br /&gt;
*default delete use-rel-suffix\&lt;br /&gt;
*default compress\&lt;br /&gt;
ports-all tag=.&amp;quot; &amp;gt; sup&lt;br /&gt;
&lt;br /&gt;
cvsup sup; mail -s &#039;cvs sup ports done&#039; dave.boodman@vtext.com &amp;lt; /dev/null&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Install raid mgmt tool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/local/sbin&lt;br /&gt;
fetch http://3ware.com/download/Escalade9690SA-Series/9.5.3/tw_cli-freebsd-x86-9.5.3.tgz&lt;br /&gt;
tar xzf tw_cli-freebsd-x86-9.5.3.tgz&lt;br /&gt;
rm tw_cli-freebsd-x86-9.5.3.tgz&lt;br /&gt;
chmod 0700 tw_cli&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Test:&lt;br /&gt;
 ./tw_cli info c0&lt;br /&gt;
&lt;br /&gt;
* install rsync from ports&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/ports/net/rsync&lt;br /&gt;
make install clean&amp;lt;/pre&amp;gt;&lt;br /&gt;
choose default options&lt;br /&gt;
&lt;br /&gt;
* install perl from ports&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/ports/lang/perl5.8&lt;br /&gt;
make install clean&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* install screen from ports&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/ports/sysutils/screen&lt;br /&gt;
make install clean&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* install bb client&lt;br /&gt;
&amp;lt;pre&amp;gt;adduser&lt;br /&gt;
Username: bb&lt;br /&gt;
Full name: bb&lt;br /&gt;
Uid (Leave empty for default): 1984&lt;br /&gt;
Login group [bb]:&lt;br /&gt;
Login group is bb. Invite bb into other groups? []:&lt;br /&gt;
Login class [default]:&lt;br /&gt;
Shell (sh csh tcsh nologin) [sh]: &lt;br /&gt;
Home directory [/home/bb]:&lt;br /&gt;
Use password-based authentication? [yes]:&lt;br /&gt;
Use an empty password? (yes/no) [no]:&lt;br /&gt;
Use a random password? (yes/no) [no]: yes&lt;br /&gt;
Lock out the account after creation? [no]:&lt;br /&gt;
Username   : bb&lt;br /&gt;
Password   : &amp;lt;random&amp;gt;&lt;br /&gt;
Full Name  : bb&lt;br /&gt;
Uid        : 1984&lt;br /&gt;
Class      :&lt;br /&gt;
Groups     : bb&lt;br /&gt;
Home       : /home/bb&lt;br /&gt;
Shell      : /bin/sh&lt;br /&gt;
Locked     : no&lt;br /&gt;
OK? (yes/no): yes&lt;br /&gt;
&lt;br /&gt;
cd /usr/home/bb&lt;br /&gt;
scp backup2:/mnt/data4/build/bb/bb-freebsd.tar .&lt;br /&gt;
tar xvf bb-freebsd.tar&amp;lt;/pre&amp;gt;&lt;br /&gt;
edit /home/bb/bbc1.9e-btf/etc/bb-hosts with something like:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;69.55.230.2 mail.johncompanies.com # BBPAGER BBNET BBDISPLAY smtp ssh \&lt;br /&gt;
10.1.2.1 nat2.johncompanies.com # ssh&amp;quot; &amp;gt; /home/bb/bbc1.9e-btf/etc/bb-hosts&lt;br /&gt;
vi /home/bb/bbc1.9e-btf/ext/openfiles &lt;br /&gt;
MACHINE=&amp;quot;nat2,johncompanies,com&amp;quot;      # HAS TO BE IN A,B,C FORM&lt;br /&gt;
&lt;br /&gt;
cd /usr/home/bb/bbc1.9e-btf/etc&lt;br /&gt;
./bbchkcfg.sh &lt;br /&gt;
(y to questions)&lt;br /&gt;
./bbchkhosts.sh&lt;br /&gt;
(ignore ssh errors)&lt;br /&gt;
cd ../..&lt;br /&gt;
chown -R bb .&lt;br /&gt;
su bb&lt;br /&gt;
cd&lt;br /&gt;
cd bbc1.9e-btf/src&lt;br /&gt;
make; make install&lt;br /&gt;
cd ..&lt;br /&gt;
&lt;br /&gt;
vi /usr/home/bb/bbc1.9e-btf/bin/bb-cpu.sh&lt;br /&gt;
        $1 $TOPARGS &amp;gt; $BBTMP/TOP.$$&lt;br /&gt;
#        /usr/local/jail/bin/jtop &amp;gt; $BBTMP/TOP.$$&lt;br /&gt;
&lt;br /&gt;
./runbb.sh start&lt;br /&gt;
more BBOUT &lt;br /&gt;
(look for errors)&lt;br /&gt;
exit&lt;br /&gt;
&lt;br /&gt;
echo &#039;su - bb -c &amp;quot;cd /home/bb/bbc1.9e-btf; ./runbb.sh start&amp;quot;&#039; &amp;gt; /usr/local/etc/rc.d/bb.sh&lt;br /&gt;
chmod +x /usr/local/etc/rc.d/bb.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Punch a hole in the firewall to allow it to communicate with the bb monitor (the rule probably already exists):&lt;br /&gt;
&lt;br /&gt;
 ipfw add 96 allow ip from 66.181.18.0/27 to 69.55.230.2&lt;br /&gt;
&lt;br /&gt;
* configure bb on mail:&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /usr/home/bb/bbsrc/bb1.9e-btf/etc/bb-hosts&lt;br /&gt;
64.163.14.48 nat2.johncompanies.com # ssh&lt;br /&gt;
&lt;br /&gt;
su bb&lt;br /&gt;
cd&lt;br /&gt;
bbsrc/bb/runbb.sh restart ; exit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* configure ntp&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;server 69.55.230.2&lt;br /&gt;
server 0.pool.ntp.org&lt;br /&gt;
server 1.pool.ntp.org&lt;br /&gt;
server 2.pool.ntp.org&lt;br /&gt;
server 3.pool.ntp.org&lt;br /&gt;
server 66.187.233.4&lt;br /&gt;
server 217.204.76.170&lt;br /&gt;
server 64.112.189.11&lt;br /&gt;
server 66.69.112.130&lt;br /&gt;
server 80.85.129.25&lt;br /&gt;
server 80.237.234.15&lt;br /&gt;
server 130.60.7.44&lt;br /&gt;
server 134.99.176.3&lt;br /&gt;
server 198.144.202.250&lt;br /&gt;
server 202.74.170.194&lt;br /&gt;
server 204.17.42.199&lt;br /&gt;
server 204.87.183.6&lt;br /&gt;
server 213.15.3.1&lt;br /&gt;
server 213.239.178.33&lt;br /&gt;
server 217.114.97.97&lt;br /&gt;
server 69.55.230.2&amp;quot; &amp;gt; /etc/ntp.conf&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/sbin/ntpd -A -p /var/run/ntpd.pid &lt;br /&gt;
sleep 2; ntpq -p&amp;lt;/pre&amp;gt;&lt;br /&gt;
(confirm it&#039;s able to reach our time server)&lt;br /&gt;
&lt;br /&gt;
 echo &#039;/usr/sbin/ntpd -c /etc/ntp.conf -p /var/run/ntpd.pid -f /var/db/ntpd.drift&#039; &amp;gt; /usr/local/etc/rc.d/ntp.sh&lt;br /&gt;
 chmod 0700 /usr/local/etc/rc.d/ntp.sh&lt;br /&gt;
&lt;br /&gt;
* fwd and reverse lookups on ns1c&lt;br /&gt;
 vr johncompanies.com&lt;br /&gt;
 (edit the PTR too)&lt;br /&gt;
&lt;br /&gt;
* setup backups, nfs mount&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkdir /backup3&lt;br /&gt;
echo &#039;backup3:/data           /backup3        nfs     rw,bg           0       0&#039; &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
&lt;br /&gt;
echo &#039;#\!/bin/sh\&lt;br /&gt;
backupdir=/data/nat2/current\&lt;br /&gt;
\&lt;br /&gt;
## ENTRY /etc &#039; &amp;gt; /usr/local/etc/backup.config&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
on backup3:&lt;br /&gt;
setup backup dirs:&lt;br /&gt;
 ssh backup3 mkdir -p /data/nat2/current&lt;br /&gt;
&lt;br /&gt;
on backup3, add the system to &lt;br /&gt;
 vi /usr/local/sbin/snapshot_archive&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;scp 69.55.230.2:/usr/local/etc/rsync.backup /usr/local/etc/rsync.backup &lt;br /&gt;
vi /usr/local/etc/rsync.backup &lt;br /&gt;
backup1 &amp;gt; backup3&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 crontab -e&lt;br /&gt;
 1 0 * * * /usr/local/etc/rsync.backup&lt;br /&gt;
&lt;br /&gt;
* edit sshd_config for security&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/ssh/sshd_config&lt;br /&gt;
ListenAddress 66.181.18.1&lt;br /&gt;
ListenAddress 69.55.229.2&lt;br /&gt;
ListenAddress 10.1.2.1&lt;br /&gt;
&lt;br /&gt;
kill -1 `cat /var/run/sshd.pid`&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* raid chk&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cat &amp;gt; /usr/local/sbin/lsiraidchk&lt;br /&gt;
#!/usr/bin/perl&lt;br /&gt;
&lt;br /&gt;
my @out = split &amp;quot;\n&amp;quot;, `megarc -ldInfo -a0 -Lall|grep Status:`;&lt;br /&gt;
&lt;br /&gt;
foreach (@out) {&lt;br /&gt;
    if ($_ =~ /DEGRADED/) { $date = `date`; chomp $date; `echo &amp;quot;$date: RAID ARRAY DEGRADED&amp;quot; &amp;gt;&amp;gt; /var/log/messages`; }&lt;br /&gt;
#print $_;&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* netflow stuff&lt;br /&gt;
add crontab entries&lt;br /&gt;
&amp;lt;pre&amp;gt;crontab -e&lt;br /&gt;
30 3 * * * /usr/local/etc/rsync.backup&lt;br /&gt;
0 0 1 * * /sbin/ipfw zero&lt;br /&gt;
0 0 1 * * /sbin/ipfw del 3 4 5&lt;br /&gt;
59 23 30 * * /sbin/ipfw show &amp;gt; /tmp/ipfw_count&lt;br /&gt;
3 0 30 * * /sbin/ipfw show &amp;gt; /tmp/ipfw_count&lt;br /&gt;
0 3 * * * /etc/makefwrules.pl; /etc/makepiperules.pl; &lt;br /&gt;
*/5 * * * * /usr/local/sbin/lsiraidchk&lt;br /&gt;
&lt;br /&gt;
#10 0 * * * rm /var/spool/clientmqueue/*&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;scp /etc/makefwrules.pl user@64.163.14.48:~&lt;br /&gt;
scp /etc/makepiperules.pl user@64.163.14.48:~&lt;br /&gt;
mv /home/user/makefwrules.pl /etc&lt;br /&gt;
mv /home/user/makepiperules.pl /etc&lt;br /&gt;
touch /etc/firewall.sh&lt;br /&gt;
mkdir /etc/oldrules/&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
other binaries&lt;br /&gt;
	&lt;br /&gt;
&amp;lt;pre&amp;gt;scp /usr/local/bin/rulemaker user@64.163.14.48:~&lt;br /&gt;
mv ~user/rulemaker /usr/local/sbin&lt;br /&gt;
scp ~user/Sendmail.pm user@64.163.14.48:~&lt;br /&gt;
scp ~user/doswatch.pl user@64.163.14.48:~&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* add nat rules&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/ipnat.rules&lt;br /&gt;
# sample entry&lt;br /&gt;
bimap fxp0 10.1.6.70/32 -&amp;gt; 10.1.6.59/32&lt;br /&gt;
#bimap fxp0 10.1.4.208/32 -&amp;gt; 69.55.233.196/32&lt;br /&gt;
&lt;br /&gt;
ipnat -C -f /etc/ipnat.rules&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* shell for user&lt;br /&gt;
&amp;lt;pre&amp;gt;cp /root/.cshrc ~user/&lt;br /&gt;
vi ~user/&amp;lt;/pre&amp;gt;&lt;br /&gt;
change # to $&lt;br /&gt;
&lt;br /&gt;
* mrtg&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/ports/net-mgmt/mrtg&lt;br /&gt;
make install clean&amp;lt;/pre&amp;gt;&lt;br /&gt;
(no FONTCONFIG, v3)&lt;br /&gt;
&lt;br /&gt;
this didn&#039;t work because of a libtool incompatibility,&lt;br /&gt;
&lt;br /&gt;
so the files were moved over manually:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;scp /usr/local/bin/cfgmaker user@nat2:/usr/local/bin/cfgmaker&lt;br /&gt;
scp /usr/local/lib/perl5/site_perl/5.6.1/MRTG_lib.pm root@nat2:/usr/local/lib/perl5/site_perl/5.8.9/&lt;br /&gt;
scp /usr/local/lib/perl5/site_perl/5.6.1/SNMP_util.pm root@nat2:/usr/local/lib/perl5/site_perl/5.8.9/&lt;br /&gt;
scp /usr/local/lib/perl5/site_perl/5.6.1/BER.pm root@nat2:/usr/local/lib/perl5/site_perl/5.8.9/&lt;br /&gt;
scp /usr/local/lib/perl5/site_perl/5.6.1/SNMP_Session.pm root@nat2:/usr/local/lib/perl5/site_perl/5.8.9/&lt;br /&gt;
scp /usr/local/bin/mrtg root@nat2:/usr/local/bin/mrtg&lt;br /&gt;
scp /usr/local/lib/perl5/site_perl/5.6.1/locales_mrtg.pm root@nat2:/usr/local/lib/perl5/site_perl/5.8.9/locales_mrtg.pm&lt;br /&gt;
scp /usr/local/bin/rrdtool root@nat2:/usr/local/bin/rrdtool&lt;br /&gt;
scp /usr/local/lib/perl5/site_perl/5.6.1/mach/RRDs.pm root@nat2:/usr/local/lib/perl5/site_perl/5.8.9/mach/RRDs.pm&lt;br /&gt;
rsync -av /usr/local/lib/perl5/site_perl/5.6.1/mach/auto/RRDs/ root@nat2:/usr/local/lib/perl5/site_perl/5.8.9/mach/auto/RRDs/&lt;br /&gt;
scp /usr/lib/libz.so.2 root@nat2:/usr/lib/libz.so.2&lt;br /&gt;
scp /usr/lib/libm.so.2 root@nat2:/usr/lib/libm.so.2&lt;br /&gt;
rsync -av /usr/local/lib/librrd* root@nat2:/usr/local/lib/&lt;br /&gt;
scp /usr/lib/libc.so.4 root@nat2:/usr/lib/libc.so.4&lt;br /&gt;
&lt;br /&gt;
rsync -av /usr/ports/net/rrdtool root@nat2:/usr/ports/net&lt;br /&gt;
cd /usr/ports/net/rrdtool&lt;br /&gt;
make install&lt;br /&gt;
&lt;br /&gt;
mkdir -p /mnt/data1/mrtg/data&lt;br /&gt;
scp /usr/local/www/mgmt/mrtg/template.pl root@nat2:/mnt/data1/mrtg/&lt;br /&gt;
scp /usr/local/www/mgmt/mrtg/host.pl root@nat2:/mnt/data1/mrtg/&lt;br /&gt;
&lt;br /&gt;
cfgmaker --if-template=template.pl --show-op-down --global &amp;quot;options[_]: growright,bits&amp;quot; --global &#039;WorkDir: /mnt/data1/mrtg/data&#039; --global &#039;Interval: 1&#039; --global &#039;LogFormat: rrdtool&#039; --global &#039;PathAdd: /usr/local/bin&#039; --global &#039;LibAdd: /usr/local/lib&#039; --host-template=host.pl jc292401@10.1.2.50 --output=switch-p20.cfg&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; /mnt/data1/mrtg/mrtg.sh&lt;br /&gt;
#!/bin/sh&lt;br /&gt;
/usr/local/bin/mrtg /mnt/data1/mrtg/switch-p20.cfg&lt;br /&gt;
&lt;br /&gt;
chmod 0700 /mnt/data1/mrtg/mrtg.sh&lt;br /&gt;
&lt;br /&gt;
crontab -e&lt;br /&gt;
* * * * * /mnt/data1/mrtg/mrtg.sh 2&amp;gt;&amp;amp;1 &amp;gt; /dev/null&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* snmp firewall block&lt;br /&gt;
&amp;lt;pre&amp;gt;cat &amp;gt; /usr/local/etc/rc.d/boot.sh&lt;br /&gt;
ipfw add 10 allow udp from 69.55.230.2 to any 161&lt;br /&gt;
ipfw add 10 allow udp from 10.1.2.1 to any 161&lt;br /&gt;
ipfw add 11 deny udp from any to any 161&lt;br /&gt;
chmod 0700 /usr/local/etc/rc.d/boot.sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= bwdb =&lt;br /&gt;
== Summary ==&lt;br /&gt;
This machine tracks and stores network traffic (netflow) at castle. It is our means to monitor customer bandwidth usage.&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-7&lt;br /&gt;
* OS: FreeBSD 4.10 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.203 There are 2 onboard nic&#039;s, one of which is the &amp;quot;listener&amp;quot;&lt;br /&gt;
* Hardware: Custom 1U. Single power supply. &lt;br /&gt;
* Drives: one 250 GB (2 x 250GB) RAID1 array running on a Promise IDE RAID card.&lt;br /&gt;
&lt;br /&gt;
== Services Provided ==&lt;br /&gt;
* netflow&lt;br /&gt;
* mysql&lt;br /&gt;
* bigbrother &lt;br /&gt;
* snmp&lt;br /&gt;
&lt;br /&gt;
== netflow ==&lt;br /&gt;
&lt;br /&gt;
The main function of this server is to run netflow on an eth device in promiscuous mode so as to hear everything happening on the port (all network traffic is mirrored to that port via the cisco switch). Every 15min, it creates a flow file under &amp;lt;tt&amp;gt;/usr/home/flows/&amp;lt;/tt&amp;gt; (organized by date). The flow file contains all traffic data for a 15min increment of time.&lt;br /&gt;
&lt;br /&gt;
A cronjob moves that flow file (or files, if several have accumulated due to a delay)&lt;br /&gt;
 1,16,31,46 * * * * /usr/home/flowbin/queue.pl&lt;br /&gt;
&lt;br /&gt;
into a processing queue:&lt;br /&gt;
&amp;lt;tt&amp;gt;/usr/home/working&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then a separate script processes whatever flow files it finds there, inserting the data into the local mysql database:&lt;br /&gt;
 2,17,32,47 * * * * /usr/home/flowbin/processflows.pl&lt;br /&gt;
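The queue step can be sketched as follows. The real &amp;lt;tt&amp;gt;queue.pl&amp;lt;/tt&amp;gt; is Perl and walks the date-organized layout; this Python sketch flattens that detail, and the function name is made up:&lt;br /&gt;

```python
# Sketch of the queue step: move completed flow files into the processing
# queue directory, where processflows.pl will pick them up.
import os
import shutil

def queue_flows(flows_dir, working_dir):
    """Move any flow files found in flows_dir into working_dir; return names moved."""
    moved = []
    for name in sorted(os.listdir(flows_dir)):
        src = os.path.join(flows_dir, name)
        if os.path.isfile(src):
            shutil.move(src, os.path.join(working_dir, name))
            moved.append(name)
    return moved
```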
&lt;br /&gt;
== mysql ==&lt;br /&gt;
&lt;br /&gt;
The database storing all the traffic data is named &amp;lt;tt&amp;gt;traffic&amp;lt;/tt&amp;gt;.&lt;br /&gt;
Tables:&lt;br /&gt;
&amp;lt;pre&amp;gt;mysql&amp;gt; show tables;&lt;br /&gt;
+---------------------------+&lt;br /&gt;
| Tables_in_traffic         |&lt;br /&gt;
+---------------------------+&lt;br /&gt;
| dailyIpTotals_69_55_224   |&lt;br /&gt;
| dailyIpTotals_69_55_225   |&lt;br /&gt;
| dailyIpTotals_69_55_226   |&lt;br /&gt;
| dailyIpTotals_69_55_227   |&lt;br /&gt;
| dailyIpTotals_69_55_228   |&lt;br /&gt;
| dailyIpTotals_69_55_229   |&lt;br /&gt;
| dailyIpTotals_69_55_230   |&lt;br /&gt;
| dailyIpTotals_69_55_231   |&lt;br /&gt;
| dailyIpTotals_69_55_232   |&lt;br /&gt;
| dailyIpTotals_69_55_233   |&lt;br /&gt;
| dailyIpTotals_69_55_234   |&lt;br /&gt;
| dailyIpTotals_69_55_235   |&lt;br /&gt;
| dailyIpTotals_69_55_236   |&lt;br /&gt;
| dailyIpTotals_69_55_237   |&lt;br /&gt;
| dailyIpTotals_69_55_238   |&lt;br /&gt;
| dailyIpTotals_69_55_239   |&lt;br /&gt;
| dailyPortTotals_69_55_224 |&lt;br /&gt;
| dailyPortTotals_69_55_225 |&lt;br /&gt;
| dailyPortTotals_69_55_226 |&lt;br /&gt;
| dailyPortTotals_69_55_227 |&lt;br /&gt;
| dailyPortTotals_69_55_228 |&lt;br /&gt;
| dailyPortTotals_69_55_229 |&lt;br /&gt;
| dailyPortTotals_69_55_230 |&lt;br /&gt;
| dailyPortTotals_69_55_231 |&lt;br /&gt;
| dailyPortTotals_69_55_232 |&lt;br /&gt;
| dailyPortTotals_69_55_233 |&lt;br /&gt;
| dailyPortTotals_69_55_234 |&lt;br /&gt;
| dailyPortTotals_69_55_235 |&lt;br /&gt;
| dailyPortTotals_69_55_236 |&lt;br /&gt;
| dailyPortTotals_69_55_237 |&lt;br /&gt;
| dailyPortTotals_69_55_238 |&lt;br /&gt;
| dailyPortTotals_69_55_239 |&lt;br /&gt;
| ipTotals_69_55_224        |&lt;br /&gt;
| ipTotals_69_55_225        |&lt;br /&gt;
| ipTotals_69_55_226        |&lt;br /&gt;
| ipTotals_69_55_227        |&lt;br /&gt;
| ipTotals_69_55_228        |&lt;br /&gt;
| ipTotals_69_55_229        |&lt;br /&gt;
| ipTotals_69_55_230        |&lt;br /&gt;
| ipTotals_69_55_231        |&lt;br /&gt;
| ipTotals_69_55_232        |&lt;br /&gt;
| ipTotals_69_55_233        |&lt;br /&gt;
| ipTotals_69_55_234        |&lt;br /&gt;
| ipTotals_69_55_235        |&lt;br /&gt;
| ipTotals_69_55_236        |&lt;br /&gt;
| ipTotals_69_55_237        |&lt;br /&gt;
| ipTotals_69_55_238        |&lt;br /&gt;
| ipTotals_69_55_239        |&lt;br /&gt;
| portTotals_69_55_224      |&lt;br /&gt;
| portTotals_69_55_225      |&lt;br /&gt;
| portTotals_69_55_226      |&lt;br /&gt;
| portTotals_69_55_227      |&lt;br /&gt;
| portTotals_69_55_228      |&lt;br /&gt;
| portTotals_69_55_229      |&lt;br /&gt;
| portTotals_69_55_230      |&lt;br /&gt;
| portTotals_69_55_231      |&lt;br /&gt;
| portTotals_69_55_232      |&lt;br /&gt;
| portTotals_69_55_233      |&lt;br /&gt;
| portTotals_69_55_234      |&lt;br /&gt;
| portTotals_69_55_235      |&lt;br /&gt;
| portTotals_69_55_236      |&lt;br /&gt;
| portTotals_69_55_237      |&lt;br /&gt;
| portTotals_69_55_238      |&lt;br /&gt;
| portTotals_69_55_239      |&lt;br /&gt;
+---------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, as you can see, we store each class-C block in its own table for efficiency. Further, we store and organize the data in 4 ways: &amp;quot;daily&amp;quot; tables and 15-minute granularity tables, and for each of those we track both simple IP traffic and port-specific traffic. The daily tables contain 2 entries (one for each direction) per IP per day. For the current day, the row data is incremented as the day goes on.&lt;br /&gt;
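A minimal sketch of that increment behavior, assuming the updater keys each daily row on (ip, date, direction) and bumps its octet and packet counters as each 15-minute batch arrives (function name is ours, not from the real processing script):&lt;br /&gt;

```python
# Accumulate per-day totals: one row per (ip, date, direction),
# incremented as each 15-minute flow batch is processed.
from collections import defaultdict

def accumulate(daily, batch):
    """daily maps (ip, date, direction) to [octets, packets]; batch is a
    list of (ip, date, direction, octets, packets) tuples."""
    for ip, date, direction, octets, packets in batch:
        row = daily[(ip, date, direction)]
        row[0] += octets
        row[1] += packets
    return daily

daily = defaultdict(lambda: [0, 0])
accumulate(daily, [("69.55.224.13", "2010-01-11", 1, 288, 6),
                   ("69.55.224.13", "2010-01-11", 1, 216, 4)])
# daily[("69.55.224.13", "2010-01-11", 1)] is now [504, 10]
```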
&lt;br /&gt;
&amp;lt;pre&amp;gt;mysql&amp;gt; describe dailyIpTotals_69_55_224;&lt;br /&gt;
+-----------+-------------+------+-----+---------+-------+&lt;br /&gt;
| Field     | Type        | Null | Key | Default | Extra |&lt;br /&gt;
+-----------+-------------+------+-----+---------+-------+&lt;br /&gt;
| id        | varchar(23) |      | PRI |         |       |&lt;br /&gt;
| date      | date        | YES  |     | NULL    |       |&lt;br /&gt;
| ip        | varchar(15) | YES  | MUL | NULL    |       |&lt;br /&gt;
| direction | tinyint(1)  | YES  |     | NULL    |       |&lt;br /&gt;
| octets    | bigint(12)  | YES  |     | NULL    |       |&lt;br /&gt;
| packets   | int(11)     | YES  |     | NULL    |       |&lt;br /&gt;
+-----------+-------------+------+-----+---------+-------+&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; select * from dailyIpTotals_69_55_224 limit 1\G&lt;br /&gt;
*************************** 1. row ***************************&lt;br /&gt;
       id: 6955224194-20100917-1&lt;br /&gt;
     date: 2010-09-17&lt;br /&gt;
       ip: 69.55.224.194&lt;br /&gt;
direction: 1&lt;br /&gt;
   octets: 8821&lt;br /&gt;
  packets: 91&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;tt&amp;gt;id&amp;lt;/tt&amp;gt; is a unique identifier (key), &amp;lt;tt&amp;gt;direction&amp;lt;/tt&amp;gt; indicates incoming or outgoing traffic (inbound = 1, outbound = 2), &amp;lt;tt&amp;gt;octets&amp;lt;/tt&amp;gt; is the amount of traffic in kilobytes, and &amp;lt;tt&amp;gt;packets&amp;lt;/tt&amp;gt; is the total number of packets.&lt;br /&gt;
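Judging from the sample row, the key appears to be built by stripping the dots from the IP and appending the date and direction. A hypothetical sketch, for the daily IP table only (the port tables also append protocol and port):&lt;br /&gt;

```python
# Reconstruct the apparent daily-table key format, e.g.
# ip 69.55.224.194, date 2010-09-17, direction 1 gives "6955224194-20100917-1".
def daily_id(ip, date, direction):
    return "{0}-{1}-{2}".format(ip.replace(".", ""), date.replace("-", ""), direction)
```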
&lt;br /&gt;
The 15-minute table has similar information, but it&#039;s organized in 15 minute increments:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mysql&amp;gt; describe ipTotals_69_55_224;&lt;br /&gt;
+-----------+------------+------+-----+---------+-------+&lt;br /&gt;
| Field     | Type       | Null | Key | Default | Extra |&lt;br /&gt;
+-----------+------------+------+-----+---------+-------+&lt;br /&gt;
| date      | datetime   | YES  |     | NULL    |       |&lt;br /&gt;
| ip        | char(15)   | YES  | MUL | NULL    |       |&lt;br /&gt;
| direction | tinyint(1) | YES  |     | NULL    |       |&lt;br /&gt;
| octets    | bigint(20) | YES  |     | NULL    |       |&lt;br /&gt;
| packets   | int(11)    | YES  |     | NULL    |       |&lt;br /&gt;
+-----------+------------+------+-----+---------+-------+&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; select * from ipTotals_69_55_224 limit 2\G&lt;br /&gt;
*************************** 1. row ***************************&lt;br /&gt;
     date: 2010-01-11 19:30:00&lt;br /&gt;
       ip: 69.55.224.13&lt;br /&gt;
direction: 1&lt;br /&gt;
   octets: 288&lt;br /&gt;
  packets: 6&lt;br /&gt;
*************************** 2. row ***************************&lt;br /&gt;
     date: 2010-01-11 19:30:00&lt;br /&gt;
       ip: 69.55.224.12&lt;br /&gt;
direction: 1&lt;br /&gt;
   octets: 216&lt;br /&gt;
  packets: 4&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So for a given IP, there will be 192 rows in a given day: 4 rows per hour, *2 for the 2 directions, *24 for the 24 hours in a day. Obviously this table is large, which is why we broke it down into a daily table for quick, easy, daily-summary access.&lt;br /&gt;
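The arithmetic, spelled out:&lt;br /&gt;

```python
# One row per 15-minute interval, per direction, for every hour of the day.
rows_per_hour = 4   # 60 minutes / 15-minute intervals
hours_per_day = 24
directions = 2      # inbound and outbound
rows_per_ip_per_day = rows_per_hour * hours_per_day * directions
assert rows_per_ip_per_day == 192
```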
&lt;br /&gt;
That covers the simple traffic tabulation tables. We also track traffic by port:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mysql&amp;gt; describe dailyPortTotals_69_55_224;&lt;br /&gt;
+-----------+-------------+------+-----+---------+-------+&lt;br /&gt;
| Field     | Type        | Null | Key | Default | Extra |&lt;br /&gt;
+-----------+-------------+------+-----+---------+-------+&lt;br /&gt;
| id        | varchar(28) |      | PRI |         |       |&lt;br /&gt;
| date      | date        | YES  |     | NULL    |       |&lt;br /&gt;
| ip        | varchar(15) | YES  | MUL | NULL    |       |&lt;br /&gt;
| direction | tinyint(1)  | YES  |     | NULL    |       |&lt;br /&gt;
| protocol  | smallint(3) | YES  |     | NULL    |       |&lt;br /&gt;
| port      | int(11)     | YES  |     | NULL    |       |&lt;br /&gt;
| octets    | bigint(11)  | YES  |     | NULL    |       |&lt;br /&gt;
| packets   | int(11)     | YES  |     | NULL    |       |&lt;br /&gt;
+-----------+-------------+------+-----+---------+-------+&lt;br /&gt;
8 rows in set (0.00 sec)&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; select * from dailyPortTotals_69_55_224 limit 1\G&lt;br /&gt;
*************************** 1. row ***************************&lt;br /&gt;
       id: 695522496-20091218-1-6-23&lt;br /&gt;
     date: 2009-12-18&lt;br /&gt;
       ip: 69.55.224.96&lt;br /&gt;
direction: 1&lt;br /&gt;
 protocol: 6&lt;br /&gt;
     port: 23&lt;br /&gt;
   octets: 1796&lt;br /&gt;
  packets: 30&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; select * from portTotals_69_55_224 limit 1\G&lt;br /&gt;
*************************** 1. row ***************************&lt;br /&gt;
     date: 2010-09-07 18:45:00&lt;br /&gt;
       ip: 69.55.224.254&lt;br /&gt;
direction: 1&lt;br /&gt;
 protocol: 6&lt;br /&gt;
     port: 99999&lt;br /&gt;
   octets: 144&lt;br /&gt;
  packets: 3&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is largely the same, with 2 additions: &amp;lt;tt&amp;gt;protocol&amp;lt;/tt&amp;gt; (1=ICMP, 6=TCP, 17=UDP), and &amp;lt;tt&amp;gt;port&amp;lt;/tt&amp;gt;, which we set to 99999 if the traffic is return traffic and the port is above 1024. Obviously the number of rows grows quickly once you add port and protocol tracking per IP.&lt;br /&gt;
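The bucketing rule, sketched (the function name is ours, not from the real processing script):&lt;br /&gt;

```python
# Return traffic on an ephemeral port (above 1024) is lumped into the
# sentinel port 99999 rather than tracked per-port.
def bucket_port(port, is_return):
    if is_return and port not in range(1025):  # i.e. port above 1024
        return 99999
    return port
```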
&lt;br /&gt;
== Regular maintenance ==&lt;br /&gt;
*[[Routine_Maintenance#Adaptec_Controllers|Check RAID array]]&lt;br /&gt;
* archive data from database&lt;br /&gt;
 archive_daily.pl 2012 09&lt;br /&gt;
This will archive data for the given year and month from the daily summary tables. Generally we want to have a year of history in the database.&lt;br /&gt;
&lt;br /&gt;
 archive_15min.pl 2012 09&lt;br /&gt;
This will archive data for the given year and month from the 15min-increment tables. Generally, we want to have 6 months of history in the database.&lt;br /&gt;
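Since we keep roughly 12 months of daily history and 6 months of 15-minute history, a cron wrapper could compute the year/month arguments for the two scripts. A hypothetical helper (months_ago is not part of the system, just a sketch of the arithmetic):

```shell
# Given a year, a (possibly zero-padded) month, and a count N, print the
# year and month from N months earlier in "YYYY MM" form.
months_ago() {
    y=$1; m=${2#0}; n=$3   # strip a leading zero so 09 is not read as octal
    t=$(( y * 12 + m - 1 - n ))
    printf '%d %02d\n' $(( t / 12 )) $(( t % 12 + 1 ))
}

months_ago 2012 09 12   # → 2011 09  (cutoff month for archive_daily.pl)
months_ago 2012 09 6    # → 2012 03  (cutoff month for archive_15min.pl)
```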
&lt;br /&gt;
* if space becomes tight, move flow files and exported data to a backup server, both located in &amp;lt;tt&amp;gt;/usr/home/flowbin/archive&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;/usr/home/exported&amp;lt;/tt&amp;gt;, respectively&lt;br /&gt;
&lt;br /&gt;
== Slaving ==&lt;br /&gt;
&lt;br /&gt;
If we were going to set up traffic database slaving (we don&#039;t do this anymore), perhaps because the bwdb machine gets too busy to handle both traffic requests and netflow processing, here&#039;s how it&#039;s done:&lt;br /&gt;
&lt;br /&gt;
On the traffic master:&lt;br /&gt;
&lt;br /&gt;
 GRANT REPLICATION SLAVE ON *.* TO &#039;repl&#039;@&#039;10.1.4.8&#039; IDENTIFIED BY &#039;qERUG8wf&#039;;&lt;br /&gt;
&lt;br /&gt;
in my.cnf:&lt;br /&gt;
&amp;lt;pre&amp;gt;log-bin&lt;br /&gt;
server-id=1&lt;br /&gt;
max_binlog_size=500M&lt;br /&gt;
expire_logs_days = 3&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
on slave:&lt;br /&gt;
in my.cnf:&lt;br /&gt;
&amp;lt;pre&amp;gt;server-id       = 2&lt;br /&gt;
master-host     =   10.1.4.203&lt;br /&gt;
master-user     =   repl&lt;br /&gt;
master-password =   qERUG8wf&lt;br /&gt;
master-connect-retry=60&lt;br /&gt;
replicate-wild-do-table=traffic.daily%&lt;br /&gt;
max_relay_log_size=500M&lt;br /&gt;
expire_logs_days = 3&lt;br /&gt;
&lt;br /&gt;
replicate-wild-do-table=traffic.%&amp;lt;/pre&amp;gt;&lt;br /&gt;
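Note that the two replicate-wild-do-table patterns differ in scope: traffic.daily% replicates only the daily summary tables, while traffic.% replicates every table in the traffic database. Illustrated with shell-style pattern matching (a sketch, not part of the setup):

```shell
# Show which of the two replication filters a given table name would match.
match() {
    case $1 in
        daily*) echo "$1: matched by traffic.daily% and traffic.%" ;;
        *)      echo "$1: matched by traffic.% only" ;;
    esac
}

match dailyPortTotals_69_55_224
match portTotals_69_55_224
```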
&lt;br /&gt;
on master:&lt;br /&gt;
 touch /usr/home/working/.lock&lt;br /&gt;
(make sure processflows not running)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;FLUSH TABLES WITH READ LOCK;&lt;br /&gt;
cd /usr/home/database/traffic&lt;br /&gt;
tar -czf mysql-traffic-snapshot.tgz ./daily*&lt;br /&gt;
(~1G)&lt;br /&gt;
SHOW MASTER STATUS;&lt;br /&gt;
+-----------------+-----------+--------------+------------------+&lt;br /&gt;
| File            | Position  | Binlog_Do_DB | Binlog_Ignore_DB |&lt;br /&gt;
+-----------------+-----------+--------------+------------------+&lt;br /&gt;
| bwdb-bin.000039 | 154432615 |              |                  |&lt;br /&gt;
+-----------------+-----------+--------------+------------------+&lt;br /&gt;
&lt;br /&gt;
(write down info)&lt;br /&gt;
UNLOCK TABLES;&lt;br /&gt;
scp mysql-traffic-snapshot.tgz 10.1.4.5:/mnt/data1/db/mysql/traffic/&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
on slave:&lt;br /&gt;
&amp;lt;pre&amp;gt;mkdir /mnt/data1/db/mysql/traffic&lt;br /&gt;
cd /mnt/data1/db/mysql/traffic/&lt;br /&gt;
tar xzvf mysql-traffic-snapshot.tgz&lt;br /&gt;
(restart mysql)&lt;br /&gt;
(substitute the log file and position recorded from SHOW MASTER STATUS above)&lt;br /&gt;
CHANGE MASTER TO MASTER_HOST=&#039;10.1.4.203&#039;,MASTER_USER=&#039;repl&#039;,MASTER_PASSWORD=&#039;qERUG8wf&#039;,MASTER_LOG_FILE=&#039;bwdb-bin.000059&#039;,MASTER_LOG_POS=482502186;&lt;br /&gt;
START SLAVE;&amp;lt;/pre&amp;gt;&lt;br /&gt;
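After START SLAVE, confirm both replication threads are running. A quick check that could be fed from the mysql client (the slave_ok helper is a sketch, not part of the setup):

```shell
# Count replication threads reporting Yes; a healthy slave shows 2.
slave_ok() {
    grep -c -E 'Slave_(IO|SQL)_Running: Yes'
}

# real usage would be: mysql -e 'SHOW SLAVE STATUS\G' | slave_ok
printf 'Slave_IO_Running: Yes\nSlave_SQL_Running: Yes\n' | slave_ok   # → 2
```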
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/home/database/traffic&lt;br /&gt;
scp *&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;optimize table dailyPortTotals_69_55_224;&lt;br /&gt;
optimize table dailyPortTotals_69_55_225;&lt;br /&gt;
optimize table dailyPortTotals_69_55_226;&lt;br /&gt;
optimize table dailyPortTotals_69_55_227;&lt;br /&gt;
optimize table dailyPortTotals_69_55_228;&lt;br /&gt;
optimize table dailyPortTotals_69_55_229;&lt;br /&gt;
optimize table dailyPortTotals_69_55_230;&lt;br /&gt;
optimize table dailyPortTotals_69_55_231;&lt;br /&gt;
optimize table dailyPortTotals_69_55_232;&lt;br /&gt;
optimize table dailyPortTotals_69_55_233;&lt;br /&gt;
optimize table dailyPortTotals_69_55_234;&lt;br /&gt;
optimize table dailyPortTotals_69_55_235;&lt;br /&gt;
optimize table dailyPortTotals_69_55_236;&lt;br /&gt;
optimize table dailyPortTotals_69_55_237;&lt;br /&gt;
optimize table dailyPortTotals_69_55_238;&lt;br /&gt;
optimize table dailyPortTotals_69_55_239;&amp;lt;/pre&amp;gt;&lt;br /&gt;
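The sixteen statements above can be generated rather than typed out; piping the output into the mysql client against the traffic database would run them (a sketch covering the same .224 through .239 tables):

```shell
# Emit OPTIMIZE TABLE for each dailyPortTotals table, 224 through 239.
n=224
while [ "$n" -le 239 ]; do
    echo "optimize table dailyPortTotals_69_55_${n};"
    n=$(( n + 1 ))
done
# then: pipe the output into `mysql traffic`
```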
&lt;br /&gt;
== Build ==&lt;br /&gt;
&lt;br /&gt;
=== BIOS Config ===&lt;br /&gt;
disable quiet boot&lt;br /&gt;
&lt;br /&gt;
set to last state after power loss&lt;br /&gt;
&lt;br /&gt;
set date/time to GMT&lt;br /&gt;
&lt;br /&gt;
enable serial console output (baud rate 115200)&lt;br /&gt;
&lt;br /&gt;
=== Install OS ===&lt;br /&gt;
&lt;br /&gt;
Install FreeBSD 8.3 amd64&lt;br /&gt;
&lt;br /&gt;
* partition map:&lt;br /&gt;
&amp;lt;pre&amp;gt;/ 500m&lt;br /&gt;
swap 4096m&lt;br /&gt;
/var 256m&lt;br /&gt;
/tmp 256m&lt;br /&gt;
/usr ~&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* edit /etc/make.conf &lt;br /&gt;
Castle:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;WITHOUT_X11=yes \&lt;br /&gt;
KERNCONF=bwdb \&lt;br /&gt;
BOOT_COMCONSOLE_SPEED=115200&amp;quot; &amp;gt;&amp;gt; /etc/make.conf&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
i2b:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;WITHOUT_X11=yes \&lt;br /&gt;
KERNCONF=bwdb2 \&lt;br /&gt;
BOOT_COMCONSOLE_SPEED=115200&amp;quot; &amp;gt;&amp;gt; /etc/make.conf&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* add settings to /boot/loader.conf and /boot.config&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;-Dh&amp;quot; &amp;gt;&amp;gt; /boot.config&lt;br /&gt;
&lt;br /&gt;
echo &#039;console=&amp;quot;comconsole,vidconsole&amp;quot; \&lt;br /&gt;
boot_multicons=&amp;quot;YES&amp;quot; \&lt;br /&gt;
boot_serial=&amp;quot;YES&amp;quot; \&lt;br /&gt;
comconsole_speed=&amp;quot;115200&amp;quot;&#039; &amp;gt;&amp;gt; /boot/loader.conf&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* turn off all ttyv&#039;s except 0 and 1 in /etc/ttys&lt;br /&gt;
also turn on ttyu0, change type to vt100:&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/ttys&lt;br /&gt;
&lt;br /&gt;
ttyv2   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv3   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv4   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv5   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv6   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv7   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
# Serial terminals&lt;br /&gt;
# The &#039;dialup&#039; keyword identifies dialin lines to login, fingerd etc.&lt;br /&gt;
ttyu0   &amp;quot;/usr/libexec/getty std.9600&amp;quot;   vt100   on secure&lt;br /&gt;
&lt;br /&gt;
kill -1 1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
on console server:&lt;br /&gt;
 vi /etc/remote&lt;br /&gt;
(rename the port entry to jail8 or similar, depending on where it is and which digi port it&#039;s plugged into)&lt;br /&gt;
test serial console&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* populate hosts&lt;br /&gt;
i2b:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;69.55.230.10 backup2&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;69.55.230.11 backup1&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;10.1.2.3 backup3&amp;quot; &amp;gt;&amp;gt; /etc/hosts&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
castle:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;10.1.4.3 backup2 backup2.johncompanies.com&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;10.1.4.8 backup1 backup1.johncompanies.com&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;10.1.4.4 mail mail.johncompanies.com&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* put key in authorized_keys on backup1 and  backup2&lt;br /&gt;
 cd&lt;br /&gt;
 ssh-keygen -t dsa -b 1024 &lt;br /&gt;
(default location, leave password blank)&lt;br /&gt;
&lt;br /&gt;
castle:&lt;br /&gt;
 cat /root/.ssh/id_dsa.pub | ssh backup1 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
 cat /root/.ssh/id_dsa.pub | ssh backup2 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
&lt;br /&gt;
i2b:&lt;br /&gt;
 cat /root/.ssh/id_dsa.pub | ssh backup1 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
 cat /root/.ssh/id_dsa.pub | ssh backup2 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
 cat /root/.ssh/id_dsa.pub | ssh backup3 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
&lt;br /&gt;
confirm that you can ssh to backup2 and backup1 (and backup3 if at i2b) without being prompted for a password&lt;br /&gt;
&lt;br /&gt;
 ssh backup1 hostname&lt;br /&gt;
 ssh backup2 hostname&lt;br /&gt;
&lt;br /&gt;
* edit root&#039;s path and login script:&lt;br /&gt;
 vi /root/.cshrc&lt;br /&gt;
&lt;br /&gt;
Change alias entries (add G):&lt;br /&gt;
&amp;lt;pre&amp;gt;alias la        ls -aG&lt;br /&gt;
alias lf        ls -FAG&lt;br /&gt;
alias ll        ls -lAG&lt;br /&gt;
alias ls        ls -AG&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and alter the prompt, set the following:&lt;br /&gt;
 set prompt = &amp;quot;`/bin/hostname -s` %/# &amp;quot;&lt;br /&gt;
&lt;br /&gt;
* install cvsup&lt;br /&gt;
 cd /usr/ports/net/cvsup-without-gui &lt;br /&gt;
 make install clean; rehash; mail -s &#039;cvs installed&#039; support@johncompanies.com &amp;lt; /dev/null&lt;br /&gt;
&lt;br /&gt;
* get latest sources for this release:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src &lt;br /&gt;
echo &amp;quot;*default host=cvsup4.freebsd.org\&lt;br /&gt;
*default base=/usr\&lt;br /&gt;
*default prefix=/usr\&lt;br /&gt;
*default release=cvs tag=RELENG_8_3\&lt;br /&gt;
*default delete use-rel-suffix\&lt;br /&gt;
*default compress\&lt;br /&gt;
src-all&amp;quot; &amp;gt; sup&lt;br /&gt;
&lt;br /&gt;
cvsup sup ; mail -s &#039;cvs sup done&#039; support@johncompanies.com &amp;lt; /dev/null&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* configure new kernel&lt;br /&gt;
&lt;br /&gt;
 cd /usr/src/sys/amd64/conf &lt;br /&gt;
 scp backup2:/mnt/data4/build/freebsd/kern_config-bwdb-8.3-amd64 ./bwdb&lt;br /&gt;
&lt;br /&gt;
Edit config and change name:&lt;br /&gt;
 vi bwdb&lt;br /&gt;
 ident  bwdb&lt;br /&gt;
&lt;br /&gt;
* build, install kernel and world&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /boot&lt;br /&gt;
&lt;br /&gt;
mv kernel kernel.GENERIC&lt;br /&gt;
cd /usr/src&lt;br /&gt;
make buildkernel installkernel&lt;br /&gt;
&lt;br /&gt;
make buildworld ; mail -s &#039;buildworld done&#039; support@johncompanies.com &amp;lt; /dev/null&lt;br /&gt;
(2450: 1:56min, supermicro: 59mins, 2950: 38mins)&lt;br /&gt;
make installworld &lt;br /&gt;
(2450: 3min, supermicro: 1min, 2950: :34)&lt;br /&gt;
mergemaster -i&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* populate /etc/rc.conf with IPs and NFS settings&lt;br /&gt;
&lt;br /&gt;
castle:&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
hostname=&amp;quot;bwdb.johncompanies.com&amp;quot;&lt;br /&gt;
kern_securelevel_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
portmap_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
sendmail_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
usbd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
xntpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_client_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_reserved_port_only=&amp;quot;YES&amp;quot;&lt;br /&gt;
ifconfig_fxp0=&amp;quot;inet 10.1.4.203 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em0=&amp;quot;up promisc&amp;quot;&lt;br /&gt;
defaultrouter=&amp;quot;10.1.4.1&amp;quot;&lt;br /&gt;
snmpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
inetd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_flags=&amp;quot;-wW -a 10.1.4.203&amp;quot;&lt;br /&gt;
fsck_y_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
background_fsck=&amp;quot;NO&amp;quot;&lt;br /&gt;
sshd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
ipfw_load=&amp;quot;YES&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
i2b:&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
hostname=&amp;quot;bwdb2.johncompanies.com&amp;quot;&lt;br /&gt;
kern_securelevel_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
portmap_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
sendmail_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
usbd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
xntpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_client_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_reserved_port_only=&amp;quot;YES&amp;quot;&lt;br /&gt;
ifconfig_fxp0=&amp;quot;inet 10.1.2.4 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em0=&amp;quot;up promisc&amp;quot;&lt;br /&gt;
defaultrouter=&amp;quot;10.1.2.1&amp;quot;&lt;br /&gt;
snmpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
inetd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_flags=&amp;quot;-wW -a 10.1.2.4&amp;quot;&lt;br /&gt;
fsck_y_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
background_fsck=&amp;quot;NO&amp;quot;&lt;br /&gt;
sshd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
ipfw_load=&amp;quot;YES&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* reboot. Confirm new kernel is loaded&lt;br /&gt;
&lt;br /&gt;
 uname -a&lt;br /&gt;
&lt;br /&gt;
* update ports:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/ports&lt;br /&gt;
echo &amp;quot;*default host=cvsup4.FreeBSD.org\&lt;br /&gt;
*default base=/usr\&lt;br /&gt;
*default prefix=/usr\&lt;br /&gt;
*default release=cvs tag=RELENG_8_3\&lt;br /&gt;
*default delete use-rel-suffix\&lt;br /&gt;
*default compress\&lt;br /&gt;
ports-all tag=.&amp;quot; &amp;gt; sup&lt;br /&gt;
&lt;br /&gt;
cvsup sup; mail -s &#039;cvs sup ports done&#039; support@johncompanies.com &amp;lt; /dev/null&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Install raid mgmt tool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/local/sbin&lt;br /&gt;
scp backup2:/d4/build/3ware/tw_cli-freebsd-x86_64-9.5.0.1.tgz .&lt;br /&gt;
tar xzf tw_cli-freebsd-x86_64-9.5.0.1.tgz&lt;br /&gt;
rm tw_cli-freebsd-x86_64-9.5.0.1.tgz&lt;br /&gt;
chmod 0700 tw_cli&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Test:&lt;br /&gt;
 ./tw_cli info c0&lt;br /&gt;
&lt;br /&gt;
Grab raid check script:&lt;br /&gt;
 scp backup1:/usr/local/sbin/3wraidchk /usr/local/etc&lt;br /&gt;
&lt;br /&gt;
Setup cronjob:&lt;br /&gt;
&amp;lt;pre&amp;gt;crontab -e&lt;br /&gt;
*/5 * * * * /usr/local/etc/3wraidchk&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* install rsync from ports&lt;br /&gt;
 cd /usr/ports/net/rsync&lt;br /&gt;
 make install clean&lt;br /&gt;
&lt;br /&gt;
choose default options&lt;br /&gt;
&lt;br /&gt;
* install perl from ports&lt;br /&gt;
 cd /usr/ports/lang/perl5.8&lt;br /&gt;
 make install clean&lt;br /&gt;
&lt;br /&gt;
choose default options&lt;br /&gt;
&lt;br /&gt;
* install bb client&lt;br /&gt;
&lt;br /&gt;
Compiling from source on amd64 will not work, so we use a Linux-compiled version and rely on Linux compat. The Linux compat port won&#039;t install on 8.x (it needs libtool 2.4), so instead we copied over an existing linux compat tree: &lt;br /&gt;
 rsync -aSHv --exclude=proc --exclude=sys 10.1.4.108:/usr/compat/linux/ /usr/compat/linux/&lt;br /&gt;
&lt;br /&gt;
 adduser&lt;br /&gt;
&lt;br /&gt;
Output/response:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;Username: bb&lt;br /&gt;
Full name: bb&lt;br /&gt;
Uid (Leave empty for default): 1984&lt;br /&gt;
Login group [bb]:&lt;br /&gt;
Login group is bb. Invite bb into other groups? []:&lt;br /&gt;
Login class [default]:&lt;br /&gt;
Shell (sh csh tcsh nologin) [sh]: &lt;br /&gt;
Home directory [/home/bb]:&lt;br /&gt;
Use password-based authentication? [yes]:&lt;br /&gt;
Use an empty password? (yes/no) [no]:&lt;br /&gt;
Use a random password? (yes/no) [no]: yes&lt;br /&gt;
Lock out the account after creation? [no]:&lt;br /&gt;
Username   : bb&lt;br /&gt;
Password   : &amp;lt;random&amp;gt;&lt;br /&gt;
Full Name  : bb&lt;br /&gt;
Uid        : 1984&lt;br /&gt;
Class      :&lt;br /&gt;
Groups     : bb&lt;br /&gt;
Home       : /home/bb&lt;br /&gt;
Shell      : /bin/sh&lt;br /&gt;
Locked     : no&lt;br /&gt;
OK? (yes/no): yes&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 cd /usr/home/bb&lt;br /&gt;
 scp backup2:/mnt/data4/build/bb/bb-freebsd_linuxcompat.tgz .&lt;br /&gt;
 tar xzf bb-freebsd_linuxcompat.tgz&lt;br /&gt;
&lt;br /&gt;
edit /home/bb/bbc1.9e-btf/etc/bb-hosts with something like:&lt;br /&gt;
&lt;br /&gt;
 echo &amp;quot;10.1.4.5 mail.johncompanies.com # BBPAGER BBNET BBDISPLAY smtp ssh \&lt;br /&gt;
 10.1.4.203 bwdb.johncompanies.com # ssh&amp;quot; &amp;gt; /home/bb/bbc1.9e-btf/etc/bb-hosts&lt;br /&gt;
&lt;br /&gt;
Edit for machine name and private IP.&lt;br /&gt;
&lt;br /&gt;
if this machine is at i2b:&lt;br /&gt;
 echo &amp;quot;69.55.230.2 mail.johncompanies.com # BBPAGER BBNET BBDISPLAY smtp ssh \&lt;br /&gt;
 10.1.2.4 bwdb2.johncompanies.com # ssh&amp;quot; &amp;gt; /home/bb/bbc1.9e-btf/etc/bb-hosts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /home/bb/bbc1.9e-btf/ext/openfiles &lt;br /&gt;
&lt;br /&gt;
MACHINE=&amp;quot;bwdb,johncompanies,com&amp;quot;      # HAS TO BE IN A,B,C FORM&amp;lt;/pre&amp;gt;&lt;br /&gt;
Edit for machine name.&lt;br /&gt;
&lt;br /&gt;
Have bb watch for flow-capture, mysql&lt;br /&gt;
&amp;lt;pre&amp;gt;cat &amp;gt;&amp;gt; /home/bb/bbc1.9e-btf/etc/bb-proctab&lt;br /&gt;
localhost: flow-capture :&lt;br /&gt;
localhost: mysqld :&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/home/bb/bbc1.9e-btf/etc&lt;br /&gt;
./bbchkcfg.sh &lt;br /&gt;
(y to questions)&lt;br /&gt;
./bbchkhosts.sh&lt;br /&gt;
(ignore ssh errors)&lt;br /&gt;
cd ../..&lt;br /&gt;
chown -R bb .&lt;br /&gt;
su bb&lt;br /&gt;
cd&lt;br /&gt;
cd bbc1.9e-btf&lt;br /&gt;
./runbb.sh start&lt;br /&gt;
more BBOUT &lt;br /&gt;
(look for errors)&lt;br /&gt;
exit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put in script to start bb @ boot:&lt;br /&gt;
 echo &#039;su - bb -c &amp;quot;cd /home/bb/bbc1.9e-btf; ./runbb.sh start&amp;quot;&#039; &amp;gt; /usr/local/etc/rc.d/bb.sh&lt;br /&gt;
 chmod +x /usr/local/etc/rc.d/bb.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If this is at i2b, punch a hole in the firewall to allow it to communicate with bb monitor:&lt;br /&gt;
&lt;br /&gt;
ipfw add 00096 allow tcp from 66.181.18.0/27 to 69.55.230.2&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* configure bb on mail&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /usr/home/bb/bbsrc/bb1.9e-btf/etc/bb-hosts&lt;br /&gt;
10.1.4.203 bwdb.johncompanies.com # ssh&lt;br /&gt;
&lt;br /&gt;
su bb&lt;br /&gt;
cd&lt;br /&gt;
bbsrc/bb/runbb.sh restart ; exit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* configure ntp server&lt;br /&gt;
Castle:&lt;br /&gt;
 echo &amp;quot;server 10.1.4.1&amp;quot; &amp;gt; /etc/ntp.conf&lt;br /&gt;
&lt;br /&gt;
I2b:&lt;br /&gt;
 echo &amp;quot;server 10.1.2.1&amp;quot; &amp;gt; /etc/ntp.conf&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/sbin/ntpd -p /var/run/ntpd.pid&lt;br /&gt;
sleep 2; ntpq -p&amp;lt;/pre&amp;gt;&lt;br /&gt;
(confirm it&#039;s able to reach our time server)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;/usr/sbin/ntpd -c /etc/ntp.conf -p /var/run/ntpd.pid -f /var/db/ntpd.drift&#039; &amp;gt; /usr/local/etc/rc.d/ntp.sh&lt;br /&gt;
chmod 0700 /usr/local/etc/rc.d/ntp.sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* fwd and reverse lookups on ns1c&lt;br /&gt;
vr johncompanies.com&lt;br /&gt;
 (edit the PTR too)&lt;br /&gt;
&lt;br /&gt;
* setup backups&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;#\!/bin/sh\&lt;br /&gt;
backupdir=/data/bwdb/current\&lt;br /&gt;
server=backup1\&lt;br /&gt;
\&lt;br /&gt;
## ENTRY /etc\&lt;br /&gt;
## ENTRY /usr/home/flowbin\&lt;br /&gt;
## ENTRY /usr/home/database&#039; &amp;gt; /usr/local/etc/backup.config&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Castle:&lt;br /&gt;
setup backup dirs:&lt;br /&gt;
 ssh backup1 mkdir -p /data/bwdb/current&lt;br /&gt;
on backup1, add the system to the rotation script:&lt;br /&gt;
 vi /usr/local/sbin/snapshot_rotate&lt;br /&gt;
&lt;br /&gt;
I2b:&lt;br /&gt;
setup backup dirs:&lt;br /&gt;
 ssh backup3 mkdir -p /data/bwdb/current&lt;br /&gt;
on backup3, add the system to the archive script:&lt;br /&gt;
 vi /usr/local/sbin/snapshot_archive&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Copy over the backup script:&lt;br /&gt;
 scp backup2:/d4/bin/freebsd8.x/rsync.backup /usr/local/etc/&lt;br /&gt;
&lt;br /&gt;
Edit rsync.backup and change &amp;lt;tt&amp;gt;config&amp;lt;/tt&amp;gt; var to point to correct config file location: &amp;lt;tt&amp;gt;/usr/local/etc/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
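That edit can also be scripted; assuming the variable is assigned as a single config=... line (an assumption about the contents of rsync.backup), a sed filter like this would repoint it:

```shell
# Rewrite any config= assignment to point at /usr/local/etc/backup.config.
set_config_path() {
    sed 's|^config=.*|config=/usr/local/etc/backup.config|'
}

echo 'config=/old/path' | set_config_path   # → config=/usr/local/etc/backup.config
```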
&lt;br /&gt;
&amp;lt;pre&amp;gt;crontab -e&lt;br /&gt;
5 0 * * * /usr/local/etc/rsync.backup&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* make /root/logs&lt;br /&gt;
 mkdir /root/logs&lt;br /&gt;
&lt;br /&gt;
* edit sshd_config for security&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/ssh/sshd_config&lt;br /&gt;
ListenAddress 10.1.4.203&lt;br /&gt;
PermitRootLogin yes&lt;br /&gt;
&lt;br /&gt;
kill -1 `cat /var/run/sshd.pid`&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Edit for private IP.&lt;br /&gt;
&lt;br /&gt;
* snmp&lt;br /&gt;
&lt;br /&gt;
(Before doing this you may need to take down the firewall and also add 69.43.143.41 as a nameserver in /etc/resolv.conf)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /usr/ports/net-mgmt/net-snmp&lt;br /&gt;
make install clean&lt;br /&gt;
(defaults)&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt;&amp;gt; /etc/rc.conf&lt;br /&gt;
snmpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
snmpd_flags=&amp;quot;-a&amp;quot;&lt;br /&gt;
snmpd_conffile=&amp;quot;/usr/local/share/snmp/snmpd.conf&amp;quot;&lt;br /&gt;
snmptrapd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
snmptrapd_flags=&amp;quot;-a -p /var/run/snmptrapd.pid&amp;quot;&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; /usr/local/share/snmp/snmpd.conf&lt;br /&gt;
rocommunity  jcread 10.1.4.5&lt;br /&gt;
rocommunity  jcread 10.1.4.202&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== netflow ===&lt;br /&gt;
&lt;br /&gt;
Install flow tools:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/ports/net-mgmt/flow-tools&lt;br /&gt;
make install clean&amp;lt;/pre&amp;gt;&lt;br /&gt;
Defaults.&lt;br /&gt;
&lt;br /&gt;
 mkdir /usr/home/flows&lt;br /&gt;
&lt;br /&gt;
Flow start script:&lt;br /&gt;
 echo &amp;quot;/usr/local/bin/flow-capture -w /usr/home/flows -S5 -N -2 0/10.1.4.203/4444&amp;quot; &amp;gt; /usr/local/etc/rc.d/flow-capture.sh&lt;br /&gt;
&lt;br /&gt;
 chmod 0700 /usr/local/etc/rc.d/flow-capture.sh&lt;br /&gt;
&lt;br /&gt;
Edit for private IP.&lt;br /&gt;
&lt;br /&gt;
Netgraph start script:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;gt; /usr/local/etc/rc.d/netgraph.sh&lt;br /&gt;
&lt;br /&gt;
/usr/sbin/ngctl -f- &amp;lt;&amp;lt;-SEQ&lt;br /&gt;
mkpeer em0: netflow lower iface0&lt;br /&gt;
name em0:lower netflow&lt;br /&gt;
connect em0: netflow: upper out0&lt;br /&gt;
mkpeer netflow: ksocket export inet/dgram/udp&lt;br /&gt;
msg netflow:export connect inet/10.1.4.203:4444&lt;br /&gt;
SEQ&lt;br /&gt;
&lt;br /&gt;
#/usr/sbin/ngctl -f- &amp;lt;&amp;lt;-SEQ&lt;br /&gt;
#shutdown netflow:&lt;br /&gt;
#SEQ&lt;br /&gt;
&lt;br /&gt;
chmod 0700 /usr/local/etc/rc.d/netgraph.sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
Edit for private IP.&lt;br /&gt;
&lt;br /&gt;
Confirm netflow is running after running scripts:&lt;br /&gt;
&amp;lt;pre&amp;gt;newbwdb /usr/ports/net-mgmt/flow-tools# /usr/sbin/ngctl&lt;br /&gt;
Available commands:&lt;br /&gt;
  config     get or set configuration of node at &amp;lt;path&amp;gt;&lt;br /&gt;
  connect    Connects hook &amp;lt;peerhook&amp;gt; of the node at &amp;lt;relpath&amp;gt; to &amp;lt;hook&amp;gt;&lt;br /&gt;
  debug      Get/set debugging verbosity level&lt;br /&gt;
  dot        Produce a GraphViz (.dot) of the entire netgraph.&lt;br /&gt;
  help       Show command summary or get more help on a specific command&lt;br /&gt;
  list       Show information about all nodes&lt;br /&gt;
  mkpeer     Create and connect a new node to the node at &amp;quot;path&amp;quot;&lt;br /&gt;
  msg        Send a netgraph control message to the node at &amp;quot;path&amp;quot;&lt;br /&gt;
  name       Assign name &amp;lt;name&amp;gt; to the node at &amp;lt;path&amp;gt;&lt;br /&gt;
  read       Read and execute commands from a file&lt;br /&gt;
  rmhook     Disconnect hook &amp;quot;hook&amp;quot; of the node at &amp;quot;path&amp;quot;&lt;br /&gt;
  show       Show information about the node at &amp;lt;path&amp;gt;&lt;br /&gt;
  shutdown   Shutdown the node at &amp;lt;path&amp;gt;&lt;br /&gt;
  status     Get human readable status information from the node at &amp;lt;path&amp;gt;&lt;br /&gt;
  types      Show information about all installed node types&lt;br /&gt;
  write      Send a data packet down the hook named by &amp;quot;hook&amp;quot;.&lt;br /&gt;
  quit       Exit program&lt;br /&gt;
+ show netflow:&lt;br /&gt;
  Name: netflow         Type: netflow         ID: 00000004   Num hooks: 3&lt;br /&gt;
  Local hook      Peer name       Peer type    Peer ID         Peer hook&lt;br /&gt;
  ----------      ---------       ---------    -------         ---------&lt;br /&gt;
  export          &amp;lt;unnamed&amp;gt;       ksocket      00000005        inet/dgram/udp&lt;br /&gt;
  out0            em0             ether        00000001        upper&lt;br /&gt;
  iface0          em0             ether        00000001        lower&lt;br /&gt;
+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We notice that flow-capture sometimes fails due to swap exhaustion (even after adding more swap), so we restart it from cron. It&#039;s safe to run the start script while flow-capture is already running; the duplicate instance simply quits:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
crontab -e&lt;br /&gt;
#restart flow-capture&lt;br /&gt;
*/15 * * * * /usr/local/etc/rc.d/flow-capture.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== process flow tools ==== &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkdir /usr/home/flowbin&lt;br /&gt;
mkdir /usr/home/working&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Install modules:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/ports/devel/p5-Date-Calc&lt;br /&gt;
make install clean&lt;br /&gt;
cd /usr/ports/mail/p5-Mail-Sendmail&lt;br /&gt;
make install clean&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Queue script:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;gt; /usr/home/flowbin/queue.pl&lt;br /&gt;
#!/usr/bin/perl&lt;br /&gt;
&lt;br /&gt;
use strict;&lt;br /&gt;
&lt;br /&gt;
BEGIN {&lt;br /&gt;
    push @INC, &amp;quot;/usr/home/flowbin&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
use date;&lt;br /&gt;
&lt;br /&gt;
my $flowbase = &amp;quot;/usr/home/flows&amp;quot;;&lt;br /&gt;
#my $flowqueue = &amp;quot;/usr/home/queue&amp;quot;;&lt;br /&gt;
my $flowqueue = &amp;quot;/usr/home/working&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
my ($date, $time) = date::CurrentDateTime();&lt;br /&gt;
&lt;br /&gt;
my $flowdir = mkFlowDir($date);&lt;br /&gt;
`mv $flowdir/ft-* $flowqueue`;&lt;br /&gt;
&lt;br /&gt;
if (date::DateWindow($date, $time, $date, &amp;quot;00:00:00&amp;quot;, 600)) {&lt;br /&gt;
    my $newdate = date::AddDays($date, -1);&lt;br /&gt;
    my $flowdir = mkFlowDir($newdate);&lt;br /&gt;
    `mv $flowdir/ft-* $flowqueue`;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub mkFlowDir {&lt;br /&gt;
    my $date = shift;&lt;br /&gt;
    $date =~ /([0-9]{4}-[0-9]{2})/;&lt;br /&gt;
    my $yearmonth = $1;&lt;br /&gt;
    return &amp;quot;$flowbase/$yearmonth/$date&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Date.pm module:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;gt; /usr/home/flowbin/date.pm&lt;br /&gt;
#!/usr/local/bin/perl&lt;br /&gt;
#&lt;br /&gt;
# $Header: /usr/cvs/newgw/lib/date.pm,v 1.2 2003/11/24 17:06:02 glenn Exp $&lt;br /&gt;
#&lt;br /&gt;
# Copyright (c) 2001, 2002, 2003&lt;br /&gt;
#      e-Monitoring Networks, Inc.  All rights reserved.&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
# date.pl - Higher level functions written on top of Date::Calc&lt;br /&gt;
&lt;br /&gt;
package date;&lt;br /&gt;
&lt;br /&gt;
use strict;&lt;br /&gt;
use Date::Calc qw(:all);&lt;br /&gt;
&lt;br /&gt;
sub DayDiff { #calculate the difference in days from two dates&lt;br /&gt;
    my $date1 = shift;&lt;br /&gt;
    my $date2 = shift;&lt;br /&gt;
    my ($year1, $month1, $day1) = &amp;amp;DateToymd($date1);&lt;br /&gt;
    my ($year2, $month2, $day2) = &amp;amp;DateToymd($date2);&lt;br /&gt;
    my $diff = &amp;amp;Delta_Days($year1, $month1, $day1, $year2, $month2, $day2);&lt;br /&gt;
    return $diff;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub AddDays { #adds specified number of days to the supplied date&lt;br /&gt;
    my $date = shift;&lt;br /&gt;
    my $days = shift;&lt;br /&gt;
    my ($year, $month, $day) = &amp;amp;DateToymd($date);&lt;br /&gt;
    my ($nyear, $nmonth, $nday) = &amp;amp;Add_Delta_Days($year, $month, $day, $days);&lt;br /&gt;
    my $ndate = &amp;amp;ymdToDate($nyear, $nmonth, $nday);&lt;br /&gt;
    return $ndate;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub AddHours { #adds specified number of hours to the supplied date and time&lt;br /&gt;
    my $date = shift;&lt;br /&gt;
    my $time = shift;&lt;br /&gt;
    my $addhours = shift;&lt;br /&gt;
    my $adddays = 0;&lt;br /&gt;
    if (abs($addhours / 24) &amp;gt;= 1) {&lt;br /&gt;
        $adddays = int($addhours / 24);&lt;br /&gt;
        $addhours -= $adddays * 24;&lt;br /&gt;
    }&lt;br /&gt;
    my ($year, $month, $day) = &amp;amp;DateToymd($date);&lt;br /&gt;
    my ($hour, $minute, $second) = &amp;amp;TimeTohms($time);&lt;br /&gt;
    my ($ny, $nm, $nd, $nh, $nmin, $ns) = &amp;amp;Add_Delta_DHMS($year, $month, $day,&lt;br /&gt;
                                                          $hour, $minute, $second,&lt;br /&gt;
                                                          $adddays, $addhours, 0, 0);&lt;br /&gt;
    my $ndate = &amp;amp;ymdToDate($ny, $nm, $nd);&lt;br /&gt;
    my $ntime = &amp;amp;hmsToTime($nh, $nmin, $ns);&lt;br /&gt;
    return $ndate, $ntime;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub AddMinutes {&lt;br /&gt;
    my $date = shift;&lt;br /&gt;
    my $time = shift;&lt;br /&gt;
    my $minutes = shift;&lt;br /&gt;
    my ($year, $month, $day) = &amp;amp;DateToymd($date);&lt;br /&gt;
    my ($hour, $minute, $second) = &amp;amp;TimeTohms($time);&lt;br /&gt;
    my ($ny, $nm, $nd, $nh, $nmin, $ns) = &amp;amp;Add_Delta_DHMS($year, $month, $day,&lt;br /&gt;
                                                          $hour, $minute, $second,&lt;br /&gt;
                                                          0, 0, $minutes, 0);&lt;br /&gt;
    my $ndate = &amp;amp;ymdToDate($ny, $nm, $nd);&lt;br /&gt;
    my $ntime = &amp;amp;hmsToTime($nh, $nmin, $ns);&lt;br /&gt;
    return $ndate, $ntime;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub CurrentDateTime { #return the current date and time&lt;br /&gt;
    my ($y, $m, $d, $h, $min, $s, $z, $z, $z) = &amp;amp;System_Clock;&lt;br /&gt;
    my $date = &amp;amp;ymdToDate($y, $m, $d);&lt;br /&gt;
    my $time = &amp;amp;hmsToTime($h, $min, $s);&lt;br /&gt;
    return $date, $time;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub Currentymd { #return the current year, month and day as separate variables&lt;br /&gt;
    my ($y, $m, $d, $h, $min, $s, $z, $z, $z) = &amp;amp;System_Clock;&lt;br /&gt;
    return $y, $m, $d;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub DateToymd { #takes a date and returns year, month, day as individual values&lt;br /&gt;
    my $date = shift;&lt;br /&gt;
    if ($date =~ /([0-9]{4})-([0-9]{2})-([0-9]{2})/) {&lt;br /&gt;
        my $day = $3;&lt;br /&gt;
        my $month = $2;&lt;br /&gt;
        my $year = $1;&lt;br /&gt;
        return $year, $month, $day;&lt;br /&gt;
    }&lt;br /&gt;
    return undef;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub TimeTohms { #takes a time and return hours minutes and seconds as individual values&lt;br /&gt;
    my $time = shift;&lt;br /&gt;
    if ($time =~ /([0-9]{1,2}):([0-9]{1,2}):([0-9]{1,2})/) {&lt;br /&gt;
        my $hour = $1;&lt;br /&gt;
        my $minute = $2;&lt;br /&gt;
        my $second = $3;&lt;br /&gt;
        if ($hour !~ /[0-9]{2}/) { $hour = &amp;quot;0$hour&amp;quot;; }&lt;br /&gt;
        if ($minute !~ /[0-9]{2}/) { $minute = &amp;quot;0$minute&amp;quot;; }&lt;br /&gt;
        if ($second !~ /[0-9]{2}/) { $second = &amp;quot;0$second&amp;quot;; }&lt;br /&gt;
        return $hour, $minute, $second;&lt;br /&gt;
    }&lt;br /&gt;
    return undef;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub ymdToDate { #takes year, month, day and assembles them into our date format&lt;br /&gt;
    my $year = shift;&lt;br /&gt;
    my $month = shift;&lt;br /&gt;
    my $day = shift;&lt;br /&gt;
    if (defined($year) &amp;amp;&amp;amp; defined($month) &amp;amp;&amp;amp; defined ($day)) {&lt;br /&gt;
        $month = sprintf(&amp;quot;%02d&amp;quot;, $month);&lt;br /&gt;
        $day = sprintf(&amp;quot;%02d&amp;quot;, $day);&lt;br /&gt;
        return &amp;quot;$year-$month-$day&amp;quot;;&lt;br /&gt;
    }&lt;br /&gt;
    return undef;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub hmsToTime { #takes hour minute and second and assembles them into our time format&lt;br /&gt;
    my $hour = shift;&lt;br /&gt;
    my $minute = shift;&lt;br /&gt;
    my $second = shift;&lt;br /&gt;
    if (defined($hour) &amp;amp;&amp;amp; defined($minute) &amp;amp;&amp;amp; defined ($second)) {&lt;br /&gt;
        if ($hour !~ /[0-9]{2}/) { $hour = &amp;quot;0$hour&amp;quot;; }&lt;br /&gt;
        if ($minute !~ /[0-9]{2}/) { $minute = &amp;quot;0$minute&amp;quot;; }&lt;br /&gt;
        if ($second !~ /[0-9]{2}/) { $second = &amp;quot;0$second&amp;quot;; }&lt;br /&gt;
        return sprintf (&amp;quot;%02d:%02d:%02d&amp;quot;, $hour, $minute, $second);&lt;br /&gt;
    }&lt;br /&gt;
    return undef;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub CompareDates { #compares two date/time pairs: returns 1 if the first is earlier, -1 if later, 0 if equal&lt;br /&gt;
    my $date1 = shift;&lt;br /&gt;
    my $time1 = shift;&lt;br /&gt;
    my $date2 = shift;&lt;br /&gt;
    my $time2 = shift;&lt;br /&gt;
&lt;br /&gt;
    my ($year1, $month1, $day1) = &amp;amp;DateToymd($date1);&lt;br /&gt;
    my ($hour1, $minute1, $second1) = &amp;amp;TimeTohms($time1);&lt;br /&gt;
    my ($year2, $month2, $day2) = &amp;amp;DateToymd($date2);&lt;br /&gt;
    my ($hour2, $minute2, $second2) = &amp;amp;TimeTohms($time2);&lt;br /&gt;
&lt;br /&gt;
#    &amp;amp;debug(&amp;quot;$year1, $month1, $day1, $year2, $month2, $day2&amp;quot;);&lt;br /&gt;
    my $days = &amp;amp;Delta_Days($year1, $month1, $day1, $year2, $month2, $day2);&lt;br /&gt;
    if ($days &amp;gt; 0) { return 1;}&lt;br /&gt;
    if ($days &amp;lt; 0) { return -1;}&lt;br /&gt;
    if ($days == 0) { #same day, compare times&lt;br /&gt;
        my $seconds1 = $second1 + (60 * $minute1) + (3600 * $hour1);&lt;br /&gt;
        my $seconds2 = $second2 + (60 * $minute2) + (3600 * $hour2);&lt;br /&gt;
        if ($seconds1 &amp;lt; $seconds2) { return 1;}&lt;br /&gt;
        if ($seconds1 &amp;gt; $seconds2) { return -1;}&lt;br /&gt;
        if ($seconds1 == $seconds2) { return 0;}&lt;br /&gt;
    }&lt;br /&gt;
    return undef;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub DateWindow { #compares two date time pairs to see if they are &amp;lt; X seconds apart&lt;br /&gt;
    my $date1 = shift;&lt;br /&gt;
    my $time1 = shift;&lt;br /&gt;
    my $date2 = shift;&lt;br /&gt;
    my $time2 = shift;&lt;br /&gt;
    my $window = shift;&lt;br /&gt;
&lt;br /&gt;
    my ($year1, $month1, $day1) = &amp;amp;DateToymd($date1);&lt;br /&gt;
    my ($hour1, $minute1, $second1) = &amp;amp;TimeTohms($time1);&lt;br /&gt;
    my ($year2, $month2, $day2) = &amp;amp;DateToymd($date2);&lt;br /&gt;
    my ($hour2, $minute2, $second2) = &amp;amp;TimeTohms($time2);&lt;br /&gt;
&lt;br /&gt;
    my ($day, $hour, $minute, $second) =&lt;br /&gt;
        &amp;amp;Delta_DHMS($year1, $month1, $day1, $hour1, $minute1, $second1,&lt;br /&gt;
                    $year2, $month2, $day2, $hour2, $minute2, $second2);&lt;br /&gt;
    $minute *= 60;&lt;br /&gt;
    $hour *= 3600;&lt;br /&gt;
    $day *= 86400;&lt;br /&gt;
    my $total = $second + $minute + $hour + $day;&lt;br /&gt;
    if (abs($total) &amp;lt; $window) {&lt;br /&gt;
        return 1;&lt;br /&gt;
    }&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub CheckDateOrder { #takes three dates/times, returns true if they are in chronological order&lt;br /&gt;
    my $date1 = shift;&lt;br /&gt;
    my $time1 = shift;&lt;br /&gt;
    my $date2 = shift;&lt;br /&gt;
    my $time2 = shift;&lt;br /&gt;
    my $date3 = shift;&lt;br /&gt;
    my $time3 = shift;&lt;br /&gt;
    if (&amp;amp;CompareDates($date1, $time1, $date2, $time2) == -1) {&lt;br /&gt;
        return 0;&lt;br /&gt;
    }&lt;br /&gt;
    if (&amp;amp;CompareDates($date2, $time2, $date3, $time3) == -1) {&lt;br /&gt;
        return 0;&lt;br /&gt;
    }&lt;br /&gt;
    return 1;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub EpochSeconds { #calculates number of seconds since the epoch for the given date/time&lt;br /&gt;
    my $date = shift;&lt;br /&gt;
    my $time = shift;&lt;br /&gt;
    my ($year, $month, $day) = &amp;amp;DateToymd($date);&lt;br /&gt;
    my ($hour, $minute, $second) = &amp;amp;TimeTohms($time);&lt;br /&gt;
    my ($d, $h, $m, $s) = &amp;amp;Delta_DHMS(1970, 1, 1, 0, 0, 0,&lt;br /&gt;
                                      $year, $month, $day, $hour, $minute, $second);&lt;br /&gt;
    my $seconds = $s + (60 * $m) + (3600 * $h) + (86400 * $d);&lt;br /&gt;
    return $seconds;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub SecondsToDateTime { #converts seconds since epoch to date/time&lt;br /&gt;
    my $seconds = shift;&lt;br /&gt;
    my $days = int($seconds / 86400);&lt;br /&gt;
    $seconds -= $days * 86400;&lt;br /&gt;
    my $hours = int($seconds / 3600);&lt;br /&gt;
    $seconds -= $hours * 3600;&lt;br /&gt;
    my $minutes = int($seconds / 60);&lt;br /&gt;
    $seconds -= $minutes * 60;&lt;br /&gt;
    my ($year, $month, $day, $hour, $minute, $second) =&lt;br /&gt;
        &amp;amp;Add_Delta_DHMS(1970, 1, 1, 0, 0, 0, $days, $hours, $minutes, $seconds);&lt;br /&gt;
    $month = sprintf(&amp;quot;%02d&amp;quot;, $month);&lt;br /&gt;
    $day = sprintf(&amp;quot;%02d&amp;quot;, $day);&lt;br /&gt;
    $hour = sprintf(&amp;quot;%02d&amp;quot;, $hour);&lt;br /&gt;
    $minute = sprintf(&amp;quot;%02d&amp;quot;, $minute);&lt;br /&gt;
    $second = sprintf(&amp;quot;%02d&amp;quot;, $second);&lt;br /&gt;
    return &amp;quot;$year-$month-$day&amp;quot;, &amp;quot;$hour:$minute:$second&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub DateToDayName { #returns the three-letter day-name abbreviation for a date&lt;br /&gt;
    my $date = shift;&lt;br /&gt;
    my ($year, $month, $day) = &amp;amp;DateToymd($date);&lt;br /&gt;
    my $name = &amp;amp;Day_of_Week_to_Text(&amp;amp;Day_of_Week($year, $month, $day));&lt;br /&gt;
    $name = substr($name, 0, 3); #abbreviate the day name to three letters&lt;br /&gt;
    return $name;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub ValiDate { #stub: currently passes its arguments through unchanged&lt;br /&gt;
    return @_;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub CheckBusinessDay { # checks if date is a business day (Mon-Fri; holidays are not considered). 1=yes, 0=no&lt;br /&gt;
    my $date = shift;&lt;br /&gt;
    my ($year, $month, $day) = &amp;amp;DateToymd($date);&lt;br /&gt;
    if (Day_of_Week($year,$month,$day) &amp;lt; 6) { return 1; }&lt;br /&gt;
    else { return 0; }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
1; #don&#039;t remove this line&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 chmod 0700 /usr/home/flowbin/queue.pl&lt;br /&gt;
&lt;br /&gt;
Setup cronjob:&lt;br /&gt;
&amp;lt;pre&amp;gt;crontab -e&lt;br /&gt;
#move flow data into the queue&lt;br /&gt;
1,16,31,46 * * * * /usr/home/flowbin/queue.pl&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== flow processing: i2b ==== &lt;br /&gt;
&amp;lt;pre&amp;gt;cat &amp;gt; /usr/home/flowbin/processflows-sql.pl&lt;br /&gt;
#!/usr/bin/perl&lt;br /&gt;
&lt;br /&gt;
#use strict;&lt;br /&gt;
#$debug=1;&lt;br /&gt;
#$dry=1;&lt;br /&gt;
&lt;br /&gt;
my $log = &#039;/usr/home/flowbin/discards.log&#039;;&lt;br /&gt;
&lt;br /&gt;
use Data::Dumper;&lt;br /&gt;
&lt;br /&gt;
BEGIN {&lt;br /&gt;
    push @INC, &amp;quot;/usr/home/flowbin&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
#my $queuedir = &amp;quot;/usr/home/queue&amp;quot;;&lt;br /&gt;
my $queuedir = &amp;quot;/usr/home/working&amp;quot;;&lt;br /&gt;
my $archivedir = &amp;quot;/usr/home/archive&amp;quot;;&lt;br /&gt;
my $sqldir = &amp;quot;/usr/home/sql&amp;quot;;&lt;br /&gt;
my $sqldirworking = &amp;quot;/usr/home/sql/tmp&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
unless ($dry) {&lt;br /&gt;
    if (-e &amp;quot;$queuedir/.lock&amp;quot;) {&lt;br /&gt;
        open(FILE, &amp;quot;$queuedir/.lock&amp;quot;);&lt;br /&gt;
        my $pid = &amp;lt;FILE&amp;gt;;&lt;br /&gt;
        chomp($pid);&lt;br /&gt;
        close(FILE);&lt;br /&gt;
        if (kill(0, $pid)) {&lt;br /&gt;
            #another process is using the queue, bail out&lt;br /&gt;
            exit(0);&lt;br /&gt;
        }&lt;br /&gt;
        else {&lt;br /&gt;
            #dead lock file, remove it&lt;br /&gt;
            `rm $queuedir/.lock`;&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    open(FILE, &amp;quot;&amp;gt; $queuedir/.lock&amp;quot;);&lt;br /&gt;
    print FILE &amp;quot;$$\n&amp;quot;;&lt;br /&gt;
    close(FILE);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
opendir(DIR, $queuedir);&lt;br /&gt;
my @files = readdir(DIR);&lt;br /&gt;
closedir(DIR);&lt;br /&gt;
&lt;br /&gt;
foreach my $file (sort @files) {&lt;br /&gt;
    unless($file =~ /^\./) {&lt;br /&gt;
        $file =~ /([0-9]{4}-[0-9]{2}-[0-9]{2})\.([0-9]{2})([0-9]{2})([0-9]{2})/;&lt;br /&gt;
        my $date = &amp;quot;$1 $2:$3:$4&amp;quot;;&lt;br /&gt;
        my $outfile = &amp;quot;$1-$2:$3.sql&amp;quot;;&lt;br /&gt;
        unless (open (SQL, &amp;quot;+&amp;gt; $sqldirworking/$outfile&amp;quot;)) { die &amp;quot;can&#039;t open $sqldirworking/$outfile: $!&amp;quot;; }&lt;br /&gt;
        my $condensedDate = $1;&lt;br /&gt;
        $condensedDate =~ s/-//g;&lt;br /&gt;
        my $iptotal = {};&lt;br /&gt;
        my $protototal = {};&lt;br /&gt;
        my $porttotal = {};&lt;br /&gt;
&lt;br /&gt;
        &amp;amp;debug(&amp;quot;started file $file at &amp;quot;);&lt;br /&gt;
        &amp;amp;debug(`date`);&lt;br /&gt;
        &amp;amp;debug(&amp;quot;getting raw flow data (flow-print)&amp;quot;);&lt;br /&gt;
        `cat $queuedir/$file | /usr/local/bin/flow-print -f 5 &amp;gt; /usr/home/working/tmp-$file`;&lt;br /&gt;
        &amp;amp;debug(&amp;quot;aggregating data at &amp;quot;);&lt;br /&gt;
        &amp;amp;debug(`date`);&lt;br /&gt;
        unless (open(DATA, &amp;quot;/usr/home/working/tmp-$file&amp;quot;)) { die &amp;quot;can&#039;t open: $!&amp;quot;; }&lt;br /&gt;
        LOOP: while (my $line = readline DATA) {&lt;br /&gt;
            my @d = split /[\s]+/, $line;&lt;br /&gt;
            if ($d[0] ne &#039;&#039; &amp;amp;&amp;amp; $d[0] ne &#039;Start&#039;) {&lt;br /&gt;
                my $addr = 0;&lt;br /&gt;
                my $port = 0;&lt;br /&gt;
&lt;br /&gt;
                #Start End Sif SrcIPaddress SrcP DIf DstIPaddress DstP P Fl Pkts Octets&lt;br /&gt;
                #0     1   2   3            4    5   6            7    8 9  10   11&lt;br /&gt;
                          #|&lt;br /&gt;
                          # outbound = 2, inbound = 1&lt;br /&gt;
&lt;br /&gt;
                my (@src_ip) = split &#039;\.&#039;, $d[3];&lt;br /&gt;
                my (@dst_ip) = split &#039;\.&#039;, $d[6];&lt;br /&gt;
&lt;br /&gt;
                if ($src_ip[0] == 69 &amp;amp;&amp;amp; $src_ip[1] == 55 &amp;amp;&amp;amp; ($src_ip[2] == 229 || $src_ip[2] == 231)) { # for i2b&lt;br /&gt;
                   $d[2] = 2;&lt;br /&gt;
                   # hack for outbound bulk traffic counted 2x&lt;br /&gt;
                   #if ($src_ip[2] == 231) { $d[11] /= 2; $d[10] /= 2; }&lt;br /&gt;
                }&lt;br /&gt;
                # note- this is where we filter out IPs only found at i2b&lt;br /&gt;
                elsif ($dst_ip[0] == 69 &amp;amp;&amp;amp; $dst_ip[1] == 55 &amp;amp;&amp;amp; ($dst_ip[2] == 229 || $dst_ip[2] == 231)) { # for i2b&lt;br /&gt;
                   $d[2] = 1;&lt;br /&gt;
                }&lt;br /&gt;
                else { next LOOP; }&lt;br /&gt;
&lt;br /&gt;
                if ($d[2] == 2) {&lt;br /&gt;
                    $addr = $d[3];&lt;br /&gt;
                    # if the dst-port is low, store that&lt;br /&gt;
                    if ($d[7] &amp;lt;= 1024) { $port = $d[7]; }&lt;br /&gt;
                    # if the src-port is low, store that&lt;br /&gt;
                    elsif ($d[4] &amp;lt;= 1024) { $port = $d[4]; }&lt;br /&gt;
                    else { $port = 99999; }&lt;br /&gt;
                }&lt;br /&gt;
                elsif ($d[2] == 1) {&lt;br /&gt;
                    $addr = $d[6];&lt;br /&gt;
                    # if the dst-port is high, assume it&#039;s return traffic; store the src-port if it&#039;s low&lt;br /&gt;
                    if ($d[7] &amp;gt; 1024) {&lt;br /&gt;
                        if ($d[4] &amp;lt;= 1024) { $port = $d[4]; }&lt;br /&gt;
                        else { $port = 99999; }&lt;br /&gt;
                    } else {&lt;br /&gt;
                        $port = $d[7];&lt;br /&gt;
                    }&lt;br /&gt;
                } else {&lt;br /&gt;
                    next LOOP;&lt;br /&gt;
                }&lt;br /&gt;
&lt;br /&gt;
                my (@ip) = split &#039;\.&#039;, $addr;&lt;br /&gt;
                unless ($ip[0] == 69) { next LOOP; }&lt;br /&gt;
                unless ($ip[1] == 55) { next LOOP; }&lt;br /&gt;
                unless ($ip[2] == 229 || $ip[2] == 231) { next LOOP; }&lt;br /&gt;
&lt;br /&gt;
                my $classC = &amp;quot;$ip[0]_$ip[1]_$ip[2]&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
#                          IP        dir&lt;br /&gt;
#                if ($d[10] &amp;lt; 2147483647) { $iptotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{&#039;pktTotal&#039;} += $d[10]; }&lt;br /&gt;
#                if ($d[11] &amp;lt; 2147483647) { $iptotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{&#039;octetTotal&#039;} += $d[11]; }&lt;br /&gt;
#&lt;br /&gt;
#                if ($d[10] &amp;lt; 2147483647) { $protototal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{&#039;pktTotal&#039;} += $d[10]; }&lt;br /&gt;
#                if ($d[11] &amp;lt; 2147483647) { $protototal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{&#039;octetTotal&#039;} += $d[11]; }&lt;br /&gt;
#&lt;br /&gt;
#                if ($d[10] &amp;lt; 2147483647) { $porttotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$port}-&amp;gt;{&#039;pktTotal&#039;} += $d[10]; }&lt;br /&gt;
#                if ($d[11] &amp;lt; 2147483647) { $porttotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$port}-&amp;gt;{&#039;octetTotal&#039;} += $d[11]; }&lt;br /&gt;
                $iptotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{&#039;pktTotal&#039;} += $d[10];&lt;br /&gt;
                $iptotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{&#039;octetTotal&#039;} += $d[11];&lt;br /&gt;
&lt;br /&gt;
                $protototal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{&#039;pktTotal&#039;} += $d[10];&lt;br /&gt;
                $protototal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{&#039;octetTotal&#039;} += $d[11];&lt;br /&gt;
&lt;br /&gt;
                $porttotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{$port}-&amp;gt;{&#039;pktTotal&#039;} += $d[10];&lt;br /&gt;
                $porttotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{$port}-&amp;gt;{&#039;octetTotal&#039;} += $d[11];&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
        close(DATA);&lt;br /&gt;
        `rm /usr/home/working/tmp-$file`;&lt;br /&gt;
        &amp;amp;debug(&amp;quot;processing ip totals at &amp;quot;);&lt;br /&gt;
        &amp;amp;debug(`date`);&lt;br /&gt;
        foreach my $classC (keys(%{$iptotal})) {&lt;br /&gt;
            my @values;&lt;br /&gt;
            foreach my $ip (keys(%{$iptotal-&amp;gt;{$classC}})) {&lt;br /&gt;
                foreach my $dir (keys(%{$iptotal-&amp;gt;{$classC}-&amp;gt;{$ip}})) {&lt;br /&gt;
                    my $octets = $iptotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{&#039;octetTotal&#039;};&lt;br /&gt;
                    my $packets = $iptotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{&#039;pktTotal&#039;};&lt;br /&gt;
#                    $packets = $packets &amp;gt; 2147483647 ? 0 : $packets;&lt;br /&gt;
                    if ($octets &amp;gt; 2147483647) {&lt;br /&gt;
                        my $ddir = $dir==1 ? &#039;in&#039; : &#039;out&#039;;&lt;br /&gt;
                        #print SQL &amp;quot;$date $ip $ddir $octets\n&amp;quot;;&lt;br /&gt;
#                        $octets = 0;&lt;br /&gt;
                    }&lt;br /&gt;
                    # dailyIpTotals.id = ip(no .&#039;s)-yyyymmdd-direction&lt;br /&gt;
                    my $id = &amp;quot;$ip-$condensedDate-$dir&amp;quot;;&lt;br /&gt;
                    $id =~ s/\.//g;&lt;br /&gt;
                    push @values, &amp;quot;(&#039;$date&#039;, &#039;$ip&#039;, $dir, $octets, $packets)&amp;quot;;&lt;br /&gt;
                    my $sql = &amp;quot;insert into dailyIpTotals_$classC values (&#039;$id&#039;, &#039;$date&#039;, &#039;$ip&#039;, $dir, $octets, $packets) ON DUPLICATE KEY UPDATE octets=octets+$octets, packets=packets+$packets&amp;quot;;&lt;br /&gt;
                    print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
                    print SQL &amp;quot;$sql;\n&amp;quot;;&lt;br /&gt;
#                    $db-&amp;gt;query(&amp;quot;insert into ipTotals values (&#039;$date&#039;, &#039;$ip&#039;, $dir, $octets, $packets)&amp;quot;);&lt;br /&gt;
                }&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
            # break inserts into batches of at most 101 records per statement&lt;br /&gt;
            &amp;amp;debug(&amp;quot;inserting &amp;quot; . scalar(@values) . &amp;quot; values&amp;quot;);&lt;br /&gt;
            while (@values) { #was ($#values &amp;gt; 0), which silently dropped a final single record&lt;br /&gt;
                my $sql = &amp;quot;insert into ipTotals_$classC values &amp;quot;;&lt;br /&gt;
                my $max_index = $#values &amp;gt; 100 ? 100 : $#values;&lt;br /&gt;
                for (my $i=0; $i&amp;lt;=$max_index; $i++) {&lt;br /&gt;
                    $sql .= shift @values;&lt;br /&gt;
                    $sql .= &#039;,&#039;;&lt;br /&gt;
                }&lt;br /&gt;
                chop $sql;&lt;br /&gt;
                print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
                print SQL &amp;quot;$sql;\n&amp;quot;;&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
#        &amp;amp;debug(&amp;quot;processing protocol totals at &amp;quot;);&lt;br /&gt;
#        &amp;amp;debug(`date`);&lt;br /&gt;
#        foreach my $classC (keys(%{$protototal})) {&lt;br /&gt;
#            $db-&amp;gt;query(&amp;quot;lock tables dailyProtoTotals_$classC write&amp;quot;) unless $dry;&lt;br /&gt;
#            my @values;&lt;br /&gt;
#            foreach my $ip (keys(%{$protototal-&amp;gt;{$classC}})) {&lt;br /&gt;
#                foreach my $dir (keys(%{$protototal-&amp;gt;{$classC}-&amp;gt;{$ip}})) {&lt;br /&gt;
#                    foreach my $proto (keys(%{$protototal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}})) {&lt;br /&gt;
#                        my $octets = $protototal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{$proto}-&amp;gt;{&#039;octetTotal&#039;};&lt;br /&gt;
#                        my $packets = $protototal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{$proto}-&amp;gt;{&#039;pktTotal&#039;};&lt;br /&gt;
# #                        $octets = $octets &amp;gt; 2147483647 ? 0 : $octets;&lt;br /&gt;
# #                        $packets = $packets &amp;gt; 2147483647 ? 0 : $packets;&lt;br /&gt;
#                        # dailyProtoTotals.id = ip(no .&#039;s)-yyyymmdd-direction-proto&lt;br /&gt;
#                        my $id = &amp;quot;$ip-$condensedDate-$dir-$proto&amp;quot;;&lt;br /&gt;
#                        $id =~ s/\.//g;&lt;br /&gt;
#                        push @values, &amp;quot;(&#039;$date&#039;, &#039;$ip&#039;, $dir, $proto, $octets, $packets)&amp;quot;;&lt;br /&gt;
#                        my $sql = &amp;quot;insert into dailyProtoTotals_$classC values (&#039;$id&#039;, &#039;$date&#039;, &#039;$ip&#039;, $dir, $proto, $octets, $packets) ON DUPLICATE KEY UPDATE octets=octets+$octets, packets=packets+$packets&amp;quot;;&lt;br /&gt;
#                        print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
#                        $db-&amp;gt;query($sql) unless $dry;&lt;br /&gt;
# #                        $db-&amp;gt;query(&amp;quot;insert into protoTotals values (&#039;$date&#039;, &#039;$ip&#039;, $dir, $proto, $octets, $packets)&amp;quot;);&lt;br /&gt;
#                    }&lt;br /&gt;
#                }&lt;br /&gt;
#            }&lt;br /&gt;
#            $db-&amp;gt;query(&amp;quot;unlock tables&amp;quot;) unless $dry;&lt;br /&gt;
#            my $sql = &amp;quot;insert into protoTotals_$classC values &amp;quot;;&lt;br /&gt;
#            $sql .= join &#039;,&#039;, @values;&lt;br /&gt;
#            $db-&amp;gt;query(&amp;quot;lock tables protoTotals_$classC write&amp;quot;) unless $dry;&lt;br /&gt;
#            print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
#            $db-&amp;gt;query($sql) unless $dry;&lt;br /&gt;
#            $db-&amp;gt;query(&amp;quot;unlock tables&amp;quot;) unless $dry;&lt;br /&gt;
#        }&lt;br /&gt;
&lt;br /&gt;
        &amp;amp;debug(&amp;quot;processing port totals at &amp;quot;);&lt;br /&gt;
        &amp;amp;debug(`date`);&lt;br /&gt;
        foreach my $classC (keys(%{$porttotal})) {&lt;br /&gt;
            my @values;&lt;br /&gt;
            foreach my $ip (keys(%{$porttotal-&amp;gt;{$classC}})) {&lt;br /&gt;
                foreach my $dir (keys(%{$porttotal-&amp;gt;{$classC}-&amp;gt;{$ip}})) {&lt;br /&gt;
                    foreach my $proto (keys(%{$porttotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}})) {&lt;br /&gt;
                        foreach my $port (keys(%{$porttotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{$proto}})) {&lt;br /&gt;
                            my $octets = $porttotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{$proto}-&amp;gt;{$port}-&amp;gt;{&#039;octetTotal&#039;};&lt;br /&gt;
                            my $packets = $porttotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{$proto}-&amp;gt;{$port}-&amp;gt;{&#039;pktTotal&#039;};&lt;br /&gt;
    #                        $octets = $octets &amp;gt; 2147483647 ? 0 : $octets;&lt;br /&gt;
    #                        $packets = $packets &amp;gt; 2147483647 ? 0 : $packets;&lt;br /&gt;
&lt;br /&gt;
                            # dailyPortTotals.id = ip(no .&#039;s)-yyyymmdd-direction-protocol-port&lt;br /&gt;
                            my $id = &amp;quot;$ip-$condensedDate-$dir-$proto-$port&amp;quot;;&lt;br /&gt;
                            $id =~ s/\.//g;&lt;br /&gt;
                            push @values, &amp;quot;(&#039;$date&#039;, &#039;$ip&#039;, $dir, $proto, $port, $octets, $packets)&amp;quot;;&lt;br /&gt;
                            my $sql = &amp;quot;insert into dailyPortTotals_$classC values (&#039;$id&#039;, &#039;$date&#039;, &#039;$ip&#039;, $dir, $proto, $port, $octets, $packets) ON DUPLICATE KEY UPDATE octets=octets+$octets, packets=packets+$packets&amp;quot;;&lt;br /&gt;
                            print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
                            print SQL &amp;quot;$sql;\n&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
    #                        $db-&amp;gt;query(&amp;quot;insert into portTotals values (&#039;$date&#039;, &#039;$ip&#039;, $dir, $port, $octets, $packets)&amp;quot;);&lt;br /&gt;
                        }&lt;br /&gt;
                    }&lt;br /&gt;
                }&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
            # break inserts into batches of at most 101 records per statement&lt;br /&gt;
            &amp;amp;debug(&amp;quot;inserting &amp;quot; . scalar(@values) . &amp;quot; values&amp;quot;);&lt;br /&gt;
            while (@values) { #was ($#values &amp;gt; 0), which silently dropped a final single record&lt;br /&gt;
                my $sql = &amp;quot;insert into portTotals_$classC values &amp;quot;;&lt;br /&gt;
                my $max_index = $#values &amp;gt; 100 ? 100 : $#values;&lt;br /&gt;
                for (my $i=0; $i&amp;lt;=$max_index; $i++) {&lt;br /&gt;
                    $sql .= shift @values;&lt;br /&gt;
                    $sql .= &#039;,&#039;;&lt;br /&gt;
                }&lt;br /&gt;
                chop $sql;&lt;br /&gt;
                print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
                print SQL &amp;quot;$sql;\n&amp;quot;;&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
#                       12     1 8      1    1= 23&lt;br /&gt;
# dailyIpTotals.id = ip(no .&#039;s)-yyyymmdd-direction&lt;br /&gt;
#                       12        1  8     1   1       3=26&lt;br /&gt;
# dailyProtoTotals.id = ip(no .&#039;s)-yyyymmdd-direction-proto&lt;br /&gt;
#                       12       1   8    1     1     5=28&lt;br /&gt;
# dailyPortTotals.id = ip(no .&#039;s)-yyyymmdd-direction-port&lt;br /&gt;
        #print &amp;quot;finished at &amp;quot;;&lt;br /&gt;
        #print `date`;&lt;br /&gt;
        `mv $queuedir/$file $archivedir` unless $dry;&lt;br /&gt;
        close(SQL);&lt;br /&gt;
        `bzip2 $sqldirworking/$outfile`;&lt;br /&gt;
        `mv $sqldirworking/$outfile.bz2 $sqldir/`;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
`rm $queuedir/.lock` unless $dry;&lt;br /&gt;
&lt;br /&gt;
sub debug {&lt;br /&gt;
    my $message = shift;&lt;br /&gt;
    if ($debug) {&lt;br /&gt;
        print &amp;quot;$message\n&amp;quot;;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# var full during ft-v05.2005-03-28.084500-0800 and&lt;br /&gt;
# 2005-02-24 69.55.226&lt;br /&gt;
&lt;br /&gt;
# all port/daily totals before 2005-04-07&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
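The batching step in the script above groups the accumulated value tuples into multi-row INSERT statements so no single statement grows unbounded. A minimal standalone sketch of that grouping, in illustrative Python with assumed names (batch_inserts and the sample table/row strings are not from the original Perl script):

```python
def batch_inserts(table, rows, size=100):
    # Group value tuples into multi-row INSERT statements,
    # at most `size` rows per statement.
    stmts = []
    for i in range(0, len(rows), size):
        chunk = rows[i:i + size]
        stmts.append("insert into " + table + " values " + ",".join(chunk))
    return stmts

# three rows with a batch size of two yield two statements
stmts = batch_inserts("ipTotals_69_55_229", ["('a')", "('b')", "('c')"], size=2)
```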
&lt;br /&gt;
This script copies the generated SQL files to the traffic server for processing:&lt;br /&gt;
&amp;lt;pre&amp;gt;cat &amp;gt; /usr/home/flowbin/sendsql.pl&lt;br /&gt;
#!/usr/bin/perl&lt;br /&gt;
&lt;br /&gt;
#use strict;&lt;br /&gt;
#$debug=1;&lt;br /&gt;
#$dry=1;&lt;br /&gt;
&lt;br /&gt;
my $remote = &amp;quot;69.55.233.199&amp;quot;;&lt;br /&gt;
my $sqldir = &amp;quot;/usr/home/sql&amp;quot;;&lt;br /&gt;
my $archive = &amp;quot;/usr/home/archive&amp;quot;;&lt;br /&gt;
my $sqldirremote = &amp;quot;/data/bwdb2/pending/&amp;quot;;&lt;br /&gt;
my @err;&lt;br /&gt;
unless ($dry) {&lt;br /&gt;
    if (-e &amp;quot;$sqldir/.lock&amp;quot;) {&lt;br /&gt;
        open(FILE, &amp;quot;$sqldir/.lock&amp;quot;);&lt;br /&gt;
        my $pid = &amp;lt;FILE&amp;gt;;&lt;br /&gt;
        chomp($pid);&lt;br /&gt;
        close(FILE);&lt;br /&gt;
        if (kill(0, $pid)) {&lt;br /&gt;
            #another sendsql.pl is still running, bail out&lt;br /&gt;
            exit(0);&lt;br /&gt;
        }&lt;br /&gt;
        else {&lt;br /&gt;
            #dead lock file, remove it&lt;br /&gt;
            `rm $sqldir/.lock`;&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    open(FILE, &amp;quot;&amp;gt; $sqldir/.lock&amp;quot;);&lt;br /&gt;
    print FILE &amp;quot;$$\n&amp;quot;;&lt;br /&gt;
    close(FILE);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
opendir(DIR, $sqldir);&lt;br /&gt;
my @files = readdir(DIR);&lt;br /&gt;
closedir(DIR);&lt;br /&gt;
&lt;br /&gt;
foreach my $file (sort @files) {&lt;br /&gt;
   next unless $file =~ /bz2$/;&lt;br /&gt;
&lt;br /&gt;
   my $r = `scp -Cq $sqldir/$file $remote:$sqldirremote 2&amp;gt;&amp;amp;1`;&lt;br /&gt;
#   print &amp;quot;scp $sqldir/$file $remote:$sqldirremote&amp;quot;;&lt;br /&gt;
   unless ($?==0) {&lt;br /&gt;
      push @err, &amp;quot;scp -Cq $sqldir/$file $remote:$sqldirremote ($r)&amp;quot;;&lt;br /&gt;
   }&lt;br /&gt;
   else {&lt;br /&gt;
      `mv $sqldir/$file $archive`;&lt;br /&gt;
      `ssh $remote mv $sqldirremote/$file $sqldirremote/${file}.done`;&lt;br /&gt;
   }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
`rm $sqldir/.lock` unless $dry;&lt;br /&gt;
&lt;br /&gt;
if (@err) {&lt;br /&gt;
   email_support(&#039;bwdb2: sendsql.pl error&#039;,join &amp;quot;\n&amp;quot;, @err);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub email_support {&lt;br /&gt;
    my $subj=shift;&lt;br /&gt;
    my $body=shift;&lt;br /&gt;
    use Mail::Sendmail;&lt;br /&gt;
&lt;br /&gt;
    # prepare message&lt;br /&gt;
    my %mail = (&lt;br /&gt;
        To      =&amp;gt; &#039;support@johncompanies.com,dave@johncompanies.com&#039;,&lt;br /&gt;
        From    =&amp;gt; &#039;support@johncompanies.com&#039;,&lt;br /&gt;
        Subject =&amp;gt; $subj,&lt;br /&gt;
        Message =&amp;gt; $body,&lt;br /&gt;
        smtp    =&amp;gt; &#039;mail.johncompanies.com&#039;,&lt;br /&gt;
    );&lt;br /&gt;
    sendmail(%mail) || warn &amp;quot;Error: $Mail::Sendmail::error&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub debug {&lt;br /&gt;
    my $message = shift;&lt;br /&gt;
    if ($debug) {&lt;br /&gt;
        print &amp;quot;$message\n&amp;quot;;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# var full during ft-v05.2005-03-28.084500-0800 and&lt;br /&gt;
# 2005-02-24 69.55.226&lt;br /&gt;
&lt;br /&gt;
# all port/daily totals before 2005-04-07&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
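Both processflows-sql.pl and sendsql.pl guard their work directory with the same PID lock-file pattern: read the PID stored in the lock, probe it with a signal-0 kill, and treat a dead PID as a stale lock that is safe to remove. A hedged standalone sketch of that liveness check (illustrative Python; the originals do this in Perl with kill(0, $pid)):

```python
import os

def lock_is_stale(pid):
    # Signal 0 delivers nothing; it only asks the kernel whether `pid` exists.
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return True    # no such process: the lock holder died, lock is stale
    except PermissionError:
        return False   # process exists but belongs to another user
    return False       # process exists and we may signal it
```

The Perl scripts reach the same decision: a true kill(0, $pid) means the previous run is still active, so the new run exits; otherwise the dead lock file is removed and replaced.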
&lt;br /&gt;
&amp;lt;pre&amp;gt;crontab -e&lt;br /&gt;
#process flows&lt;br /&gt;
2,17,32,47 * * * * /usr/home/flowbin/processflows-sql.pl&lt;br /&gt;
#move sql commands to traffic db&lt;br /&gt;
8,23,38,53 * * * * /usr/home/flowbin/sendsql.pl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== flow processing: castle ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;gt; /usr/home/flowbin/processflows.pl&lt;br /&gt;
&lt;br /&gt;
#!/usr/bin/perl&lt;br /&gt;
&lt;br /&gt;
#use strict;&lt;br /&gt;
#$debug=1;&lt;br /&gt;
#$dry=1;&lt;br /&gt;
&lt;br /&gt;
my $log = &#039;/usr/home/flowbin/discards.log&#039;;&lt;br /&gt;
&lt;br /&gt;
use Data::Dumper;&lt;br /&gt;
&lt;br /&gt;
BEGIN {&lt;br /&gt;
    push @INC, &amp;quot;/usr/home/flowbin&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
use db;&lt;br /&gt;
&lt;br /&gt;
#my $queuedir = &amp;quot;/usr/home/queue&amp;quot;;&lt;br /&gt;
my $queuedir = &amp;quot;/usr/home/working&amp;quot;;&lt;br /&gt;
my $archivedir = &amp;quot;/usr/home/archive&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
unless ($dry) {&lt;br /&gt;
    if (-e &amp;quot;$queuedir/.lock&amp;quot;) {&lt;br /&gt;
        open(FILE, &amp;quot;$queuedir/.lock&amp;quot;);&lt;br /&gt;
        my $pid = &amp;lt;FILE&amp;gt;;&lt;br /&gt;
        chomp($pid);&lt;br /&gt;
        close(FILE);&lt;br /&gt;
        if (kill(0, $pid)) {&lt;br /&gt;
            #another process is using the queue, bail out&lt;br /&gt;
            exit(0);&lt;br /&gt;
        }&lt;br /&gt;
        else {&lt;br /&gt;
            #dead lock file, remove it&lt;br /&gt;
            `rm $queuedir/.lock`;&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    open(FILE, &amp;quot;&amp;gt; $queuedir/.lock&amp;quot;);&lt;br /&gt;
    print FILE &amp;quot;$$\n&amp;quot;;&lt;br /&gt;
    close(FILE);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
my $db = db-&amp;gt;new();&lt;br /&gt;
$db-&amp;gt;connect(&#039;traffic&#039;, &#039;&#039;, &#039;root&#039;, &#039;5over3&#039;) || die $db-&amp;gt;{&#039;error&#039;};&lt;br /&gt;
&lt;br /&gt;
opendir(DIR, $queuedir);&lt;br /&gt;
my @files = readdir(DIR);&lt;br /&gt;
closedir(DIR);&lt;br /&gt;
&lt;br /&gt;
foreach my $file (sort @files) {&lt;br /&gt;
    unless($file =~ /^\./) {&lt;br /&gt;
        $file =~ /([0-9]{4}-[0-9]{2}-[0-9]{2})\.([0-9]{2})([0-9]{2})([0-9]{2})/;&lt;br /&gt;
        my $date = &amp;quot;$1 $2:$3:$4&amp;quot;;&lt;br /&gt;
        my $condensedDate = $1;&lt;br /&gt;
        $condensedDate =~ s/-//g;&lt;br /&gt;
        my $iptotal = {};&lt;br /&gt;
        my $protototal = {};&lt;br /&gt;
        my $porttotal = {};&lt;br /&gt;
&lt;br /&gt;
        &amp;amp;debug(&amp;quot;started file $file at &amp;quot;);&lt;br /&gt;
        &amp;amp;debug(`date`);&lt;br /&gt;
        &amp;amp;debug(&amp;quot;getting raw flow data (flow-print)&amp;quot;);&lt;br /&gt;
        `cat $queuedir/$file | /usr/local/bin/flow-print -f 5 &amp;gt; /usr/home/working/tmp-$file`;&lt;br /&gt;
        &amp;amp;debug(&amp;quot;aggregating data at &amp;quot;);&lt;br /&gt;
        &amp;amp;debug(`date`);&lt;br /&gt;
        unless (open(DATA, &amp;quot;/usr/home/working/tmp-$file&amp;quot;)) { die &amp;quot;can&#039;t open: $!&amp;quot;; }&lt;br /&gt;
        LOOP: while (my $line = readline DATA) {&lt;br /&gt;
            my @d = split /[\s]+/, $line;&lt;br /&gt;
            if ($d[0] ne &#039;&#039; &amp;amp;&amp;amp; $d[0] ne &#039;Start&#039;) {&lt;br /&gt;
                my $addr = 0;&lt;br /&gt;
                my $port = 0;&lt;br /&gt;
&lt;br /&gt;
                #Start End Sif SrcIPaddress SrcP DIf DstIPaddress DstP P Fl Pkts Octets&lt;br /&gt;
                #0     1   2   3            4    5   6            7    8 9  10   11&lt;br /&gt;
                          #|&lt;br /&gt;
                          # outbound = 2, inbound = 1&lt;br /&gt;
&lt;br /&gt;
                my (@src_ip) = split &#039;\.&#039;, $d[3];&lt;br /&gt;
                my (@dst_ip) = split &#039;\.&#039;, $d[6];&lt;br /&gt;
&lt;br /&gt;
                if ($src_ip[0] == 69 &amp;amp;&amp;amp; $src_ip[1] == 55 &amp;amp;&amp;amp;&lt;br /&gt;
                    $src_ip[2] &amp;gt;= 224 &amp;amp;&amp;amp; $src_ip[2] &amp;lt;= 239 &amp;amp;&amp;amp;&lt;br /&gt;
                    $src_ip[2] != 229 &amp;amp;&amp;amp; $src_ip[2] != 231) { # for castle&lt;br /&gt;
#                if ($src_ip[0] == 69 &amp;amp;&amp;amp; $src_ip[1] == 55 &amp;amp;&amp;amp; $src_ip[2] == 229) { # for i2b&lt;br /&gt;
                   $d[2] = 2;&lt;br /&gt;
                   # hack for outbound bulk traffic counted 2x&lt;br /&gt;
                   if ($dst_ip[2] == 234) { $d[11] /= 2; $d[10] /= 2; }&lt;br /&gt;
                }&lt;br /&gt;
                elsif ($dst_ip[0] == 69 &amp;amp;&amp;amp; $dst_ip[1] == 55 &amp;amp;&amp;amp;&lt;br /&gt;
                       $dst_ip[2] &amp;gt;= 224 &amp;amp;&amp;amp; $dst_ip[2] &amp;lt;= 239 &amp;amp;&amp;amp;&lt;br /&gt;
                       $dst_ip[2] != 229 &amp;amp;&amp;amp; $dst_ip[2] != 231) { # for castle&lt;br /&gt;
#                elsif ($dst_ip[0] == 69 &amp;amp;&amp;amp; $dst_ip[1] == 55 &amp;amp;&amp;amp; $dst_ip[2] == 229) { # for i2b&lt;br /&gt;
                   $d[2] = 1;&lt;br /&gt;
                }&lt;br /&gt;
                else { next LOOP; }&lt;br /&gt;
&lt;br /&gt;
                if ($d[2] == 2) {&lt;br /&gt;
                    $addr = $d[3];&lt;br /&gt;
                    # if the dst-port is low, store that&lt;br /&gt;
                    if ($d[7] &amp;lt;= 1024) { $port = $d[7]; }&lt;br /&gt;
                    # if the src-port is low, store that&lt;br /&gt;
                    elsif ($d[4] &amp;lt;= 1024) { $port = $d[4]; }&lt;br /&gt;
                    else { $port = 99999; }&lt;br /&gt;
                }&lt;br /&gt;
                elsif ($d[2] == 1) {&lt;br /&gt;
                    $addr = $d[6];&lt;br /&gt;
                    # if the dst-port is high, assume it&#039;s return traffic; try to store src-port if low&lt;br /&gt;
                    if ($d[7] &amp;gt; 1024) {&lt;br /&gt;
                        if ($d[4] &amp;lt;= 1024) { $port = $d[4]; }&lt;br /&gt;
                        else { $port = 99999; }&lt;br /&gt;
                    } else {&lt;br /&gt;
                        $port = $d[7];&lt;br /&gt;
                    }&lt;br /&gt;
                } else {&lt;br /&gt;
                    next LOOP;&lt;br /&gt;
                }&lt;br /&gt;
&lt;br /&gt;
                my (@ip) = split &#039;\.&#039;, $addr;&lt;br /&gt;
                unless ($ip[0] == 69) { next LOOP; }&lt;br /&gt;
                unless ($ip[1] == 55) { next LOOP; }&lt;br /&gt;
                unless ($ip[2] &amp;gt;= 224 &amp;amp;&amp;amp; $ip[2] &amp;lt;= 239 &amp;amp;&amp;amp; $ip[2] != 229 &amp;amp;&amp;amp; $ip[2] != 231) { next LOOP; }&lt;br /&gt;
#                unless ($ip[2] == 229) { next LOOP; }&lt;br /&gt;
&lt;br /&gt;
                my $classC = &amp;quot;$ip[0]_$ip[1]_$ip[2]&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
#                          IP        dir&lt;br /&gt;
#                if ($d[10] &amp;lt; 2147483647) { $iptotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{&#039;pktTotal&#039;} += $d[10]; }&lt;br /&gt;
#                if ($d[11] &amp;lt; 2147483647) { $iptotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{&#039;octetTotal&#039;} += $d[11]; }&lt;br /&gt;
#&lt;br /&gt;
#                if ($d[10] &amp;lt; 2147483647) { $protototal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{&#039;pktTotal&#039;} += $d[10]; }&lt;br /&gt;
#                if ($d[11] &amp;lt; 2147483647) { $protototal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{&#039;octetTotal&#039;} += $d[11]; }&lt;br /&gt;
#&lt;br /&gt;
#                if ($d[10] &amp;lt; 2147483647) { $porttotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$port}-&amp;gt;{&#039;pktTotal&#039;} += $d[10]; }&lt;br /&gt;
#                if ($d[11] &amp;lt; 2147483647) { $porttotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$port}-&amp;gt;{&#039;octetTotal&#039;} += $d[11]; }&lt;br /&gt;
                $iptotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{&#039;pktTotal&#039;} += $d[10];&lt;br /&gt;
                $iptotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{&#039;octetTotal&#039;} += $d[11];&lt;br /&gt;
&lt;br /&gt;
                $protototal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{&#039;pktTotal&#039;} += $d[10];&lt;br /&gt;
                $protototal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{&#039;octetTotal&#039;} += $d[11];&lt;br /&gt;
&lt;br /&gt;
                $porttotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{$port}-&amp;gt;{&#039;pktTotal&#039;} += $d[10];&lt;br /&gt;
                $porttotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{$port}-&amp;gt;{&#039;octetTotal&#039;} += $d[11];&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
        close(DATA);&lt;br /&gt;
        `rm /usr/home/working/tmp-$file`;&lt;br /&gt;
        &amp;amp;debug(&amp;quot;processing ip totals at &amp;quot;);&lt;br /&gt;
        &amp;amp;debug(`date`);&lt;br /&gt;
        foreach my $classC (keys(%{$iptotal})) {&lt;br /&gt;
            $db-&amp;gt;query(&amp;quot;lock tables dailyIpTotals_$classC write&amp;quot;) unless $dry;&lt;br /&gt;
            my @values;&lt;br /&gt;
            foreach my $ip (keys(%{$iptotal-&amp;gt;{$classC}})) {&lt;br /&gt;
                foreach my $dir (keys(%{$iptotal-&amp;gt;{$classC}-&amp;gt;{$ip}})) {&lt;br /&gt;
                    my $octets = $iptotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{&#039;octetTotal&#039;};&lt;br /&gt;
                    my $packets = $iptotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{&#039;pktTotal&#039;};&lt;br /&gt;
#                    $packets = $packets &amp;gt; 2147483647 ? 0 : $packets;&lt;br /&gt;
                    if ($octets &amp;gt; 2147483647) {&lt;br /&gt;
                        my $ddir = $dir==1 ? &#039;in&#039; : &#039;out&#039;;&lt;br /&gt;
                        `echo &amp;quot;$date $ip $ddir $octets\n&amp;quot; &amp;gt;&amp;gt; $log`;&lt;br /&gt;
#                        $octets = 0;&lt;br /&gt;
                    }&lt;br /&gt;
                    # dailyIpTotals.id = ip(no .&#039;s)-yyyymmdd-direction&lt;br /&gt;
                    my $id = &amp;quot;$ip-$condensedDate-$dir&amp;quot;;&lt;br /&gt;
                    $id =~ s/\.//g;&lt;br /&gt;
                    push @values, &amp;quot;(&#039;$date&#039;, &#039;$ip&#039;, $dir, $octets, $packets)&amp;quot;;&lt;br /&gt;
                    my $sql = &amp;quot;insert into dailyIpTotals_$classC values (&#039;$id&#039;, &#039;$date&#039;, &#039;$ip&#039;, $dir, $octets, $packets) ON DUPLICATE KEY UPDATE octets=octets+$octets, packets=packets+$packets&amp;quot;;&lt;br /&gt;
                    print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
                    $db-&amp;gt;query($sql) unless $dry;&lt;br /&gt;
#                    $db-&amp;gt;query(&amp;quot;insert into ipTotals values (&#039;$date&#039;, &#039;$ip&#039;, $dir, $octets, $packets)&amp;quot;);&lt;br /&gt;
                }&lt;br /&gt;
            }&lt;br /&gt;
            $db-&amp;gt;query(&amp;quot;unlock tables&amp;quot;) unless $dry;&lt;br /&gt;
&lt;br /&gt;
            $db-&amp;gt;query(&amp;quot;lock tables ipTotals_$classC write&amp;quot;) unless $dry;&lt;br /&gt;
            # break inserts into batches of at most 100 records&lt;br /&gt;
            &amp;amp;debug(&amp;quot;inserting &amp;quot; . scalar(@values) . &amp;quot; values&amp;quot;);&lt;br /&gt;
            while (@values) {&lt;br /&gt;
                my $sql = &amp;quot;insert into ipTotals_$classC values &amp;quot;;&lt;br /&gt;
                my $max_index = $#values &amp;gt; 99 ? 99 : $#values;&lt;br /&gt;
                for (my $i=0; $i&amp;lt;=$max_index; $i++) {&lt;br /&gt;
                    $sql .= shift @values;&lt;br /&gt;
                    $sql .= &#039;,&#039;;&lt;br /&gt;
                }&lt;br /&gt;
                chop $sql;&lt;br /&gt;
                print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
                $db-&amp;gt;query($sql) unless $dry;&lt;br /&gt;
            }&lt;br /&gt;
            $db-&amp;gt;query(&amp;quot;unlock tables&amp;quot;) unless $dry;&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        sleep 20;&lt;br /&gt;
#        &amp;amp;debug(&amp;quot;processing protocol totals at &amp;quot;);&lt;br /&gt;
#        &amp;amp;debug(`date`);&lt;br /&gt;
#        foreach my $classC (keys(%{$protototal})) {&lt;br /&gt;
#            $db-&amp;gt;query(&amp;quot;lock tables dailyProtoTotals_$classC write&amp;quot;) unless $dry;&lt;br /&gt;
#            my @values;&lt;br /&gt;
#            foreach my $ip (keys(%{$protototal-&amp;gt;{$classC}})) {&lt;br /&gt;
#                foreach my $dir (keys(%{$protototal-&amp;gt;{$classC}-&amp;gt;{$ip}})) {&lt;br /&gt;
#                    foreach my $proto (keys(%{$protototal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}})) {&lt;br /&gt;
#                        my $octets = $protototal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{$proto}-&amp;gt;{&#039;octetTotal&#039;};&lt;br /&gt;
#                        my $packets = $protototal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{$proto}-&amp;gt;{&#039;pktTotal&#039;};&lt;br /&gt;
# #                        $octets = $octets &amp;gt; 2147483647 ? 0 : $octets;&lt;br /&gt;
# #                        $packets = $packets &amp;gt; 2147483647 ? 0 : $packets;&lt;br /&gt;
#                        # dailyProtoTotals.id = ip(no .&#039;s)-yyyymmdd-direction-proto&lt;br /&gt;
#                        my $id = &amp;quot;$ip-$condensedDate-$dir-$proto&amp;quot;;&lt;br /&gt;
#                        $id =~ s/\.//g;&lt;br /&gt;
#                        push @values, &amp;quot;(&#039;$date&#039;, &#039;$ip&#039;, $dir, $proto, $octets, $packets)&amp;quot;;&lt;br /&gt;
#                        my $sql = &amp;quot;insert into dailyProtoTotals_$classC values (&#039;$id&#039;, &#039;$date&#039;, &#039;$ip&#039;, $dir, $proto, $octets, $packets) ON DUPLICATE KEY UPDATE octets=octets+$octets, packets=packets+$packets&amp;quot;;&lt;br /&gt;
#                        print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
#                        $db-&amp;gt;query($sql) unless $dry;&lt;br /&gt;
# #                        $db-&amp;gt;query(&amp;quot;insert into protoTotals values (&#039;$date&#039;, &#039;$ip&#039;, $dir, $proto, $octets, $packets)&amp;quot;);&lt;br /&gt;
#                    }&lt;br /&gt;
#                }&lt;br /&gt;
#            }&lt;br /&gt;
#            $db-&amp;gt;query(&amp;quot;unlock tables&amp;quot;) unless $dry;&lt;br /&gt;
#            my $sql = &amp;quot;insert into protoTotals_$classC values &amp;quot;;&lt;br /&gt;
#            $sql .= join &#039;,&#039;, @values;&lt;br /&gt;
#            $db-&amp;gt;query(&amp;quot;lock tables protoTotals_$classC write&amp;quot;) unless $dry;&lt;br /&gt;
#            print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
#            $db-&amp;gt;query($sql) unless $dry;&lt;br /&gt;
#            $db-&amp;gt;query(&amp;quot;unlock tables&amp;quot;) unless $dry;&lt;br /&gt;
#        }&lt;br /&gt;
&lt;br /&gt;
        &amp;amp;debug(&amp;quot;processing port totals at &amp;quot;);&lt;br /&gt;
        &amp;amp;debug(`date`);&lt;br /&gt;
        foreach my $classC (keys(%{$porttotal})) {&lt;br /&gt;
            $db-&amp;gt;query(&amp;quot;lock tables dailyPortTotals_$classC write&amp;quot;) unless $dry;&lt;br /&gt;
            my @values;&lt;br /&gt;
            foreach my $ip (keys(%{$porttotal-&amp;gt;{$classC}})) {&lt;br /&gt;
                foreach my $dir (keys(%{$porttotal-&amp;gt;{$classC}-&amp;gt;{$ip}})) {&lt;br /&gt;
                    foreach my $proto (keys(%{$porttotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}})) {&lt;br /&gt;
                        foreach my $port (keys(%{$porttotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{$proto}})) {&lt;br /&gt;
                            my $octets = $porttotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{$proto}-&amp;gt;{$port}-&amp;gt;{&#039;octetTotal&#039;};&lt;br /&gt;
                            my $packets = $porttotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{$proto}-&amp;gt;{$port}-&amp;gt;{&#039;pktTotal&#039;};&lt;br /&gt;
    #                        $octets = $octets &amp;gt; 2147483647 ? 0 : $octets;&lt;br /&gt;
    #                        $packets = $packets &amp;gt; 2147483647 ? 0 : $packets;&lt;br /&gt;
&lt;br /&gt;
                            # dailyPortTotals.id = ip(no .&#039;s)-yyyymmdd-direction-protocol-port&lt;br /&gt;
                            my $id = &amp;quot;$ip-$condensedDate-$dir-$proto-$port&amp;quot;;&lt;br /&gt;
                            $id =~ s/\.//g;&lt;br /&gt;
                            push @values, &amp;quot;(&#039;$date&#039;, &#039;$ip&#039;, $dir, $proto, $port, $octets, $packets)&amp;quot;;&lt;br /&gt;
                            my $sql = &amp;quot;insert into dailyPortTotals_$classC values (&#039;$id&#039;, &#039;$date&#039;, &#039;$ip&#039;, $dir, $proto, $port, $octets, $packets) ON DUPLICATE KEY UPDATE octets=octets+$octets, packets=packets+$packets&amp;quot;;&lt;br /&gt;
                            print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
                            $db-&amp;gt;query($sql) unless $dry;&lt;br /&gt;
    #                        $db-&amp;gt;query(&amp;quot;insert into portTotals values (&#039;$date&#039;, &#039;$ip&#039;, $dir, $port, $octets, $packets)&amp;quot;);&lt;br /&gt;
                        }&lt;br /&gt;
                    }&lt;br /&gt;
                }&lt;br /&gt;
            }&lt;br /&gt;
            $db-&amp;gt;query(&amp;quot;unlock tables&amp;quot;) unless $dry;&lt;br /&gt;
&lt;br /&gt;
            $db-&amp;gt;query(&amp;quot;lock tables portTotals_$classC write&amp;quot;) unless $dry;&lt;br /&gt;
            # break inserts into batches of at most 100 records&lt;br /&gt;
            &amp;amp;debug(&amp;quot;inserting &amp;quot; . scalar(@values) . &amp;quot; values&amp;quot;);&lt;br /&gt;
            while (@values) {&lt;br /&gt;
                my $sql = &amp;quot;insert into portTotals_$classC values &amp;quot;;&lt;br /&gt;
                my $max_index = $#values &amp;gt; 99 ? 99 : $#values;&lt;br /&gt;
                for (my $i=0; $i&amp;lt;=$max_index; $i++) {&lt;br /&gt;
                    $sql .= shift @values;&lt;br /&gt;
                    $sql .= &#039;,&#039;;&lt;br /&gt;
                }&lt;br /&gt;
                chop $sql;&lt;br /&gt;
                print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
                $db-&amp;gt;query($sql) unless $dry;&lt;br /&gt;
            }&lt;br /&gt;
            $db-&amp;gt;query(&amp;quot;unlock tables&amp;quot;) unless $dry;&lt;br /&gt;
            sleep 10;&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
#                       12     1 8      1    1= 23&lt;br /&gt;
# dailyIpTotals.id = ip(no .&#039;s)-yyyymmdd-direction&lt;br /&gt;
#                       12        1  8     1   1       3=26&lt;br /&gt;
# dailyProtoTotals.id = ip(no .&#039;s)-yyyymmdd-direction-proto&lt;br /&gt;
#                       12       1   8    1     1     5=28&lt;br /&gt;
# dailyPortTotals.id = ip(no .&#039;s)-yyyymmdd-direction-port&lt;br /&gt;
        #print &amp;quot;finished at &amp;quot;;&lt;br /&gt;
        #print `date`;&lt;br /&gt;
        `mv $queuedir/$file $archivedir` unless $dry;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
`rm $queuedir/.lock` unless $dry;&lt;br /&gt;
&lt;br /&gt;
sub debug {&lt;br /&gt;
    my $message = shift;&lt;br /&gt;
    if ($debug) {&lt;br /&gt;
        print &amp;quot;$message\n&amp;quot;;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# var full during ft-v05.2005-03-28.084500-0800 and&lt;br /&gt;
# 2005-02-24 69.55.226&lt;br /&gt;
&lt;br /&gt;
# all port/daily totals before 2005-04-07&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
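For reference, the regex near the top of the processing loop above pulls the capture timestamp out of each flow file's name (flow-tools names look like ft-v05.2005-03-28.084500-0800). A rough shell equivalent of that parsing, shown only to illustrate what the Perl captures produce:

```shell
#!/bin/sh
# Illustrative only: mirrors the Perl capture groups
# ([0-9]{4}-[0-9]{2}-[0-9]{2})\.([0-9]{2})([0-9]{2})([0-9]{2})
file="ft-v05.2005-03-28.084500-0800"

# $date in the script: "YYYY-MM-DD HH:MM:SS"
stamp=$(echo "$file" | sed -n 's/.*\([0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}\)\.\([0-9]\{2\}\)\([0-9]\{2\}\)\([0-9]\{2\}\).*/\1 \2:\3:\4/p')

# $condensedDate in the script: the date with dashes stripped,
# used when building the dailyIpTotals-style row ids
condensed=$(echo "$stamp" | cut -d' ' -f1 | tr -d '-')

echo "$stamp"      # 2005-03-28 08:45:00
echo "$condensed"  # 20050328
```

The condensed form is what ends up in ids like 6955224123-20050328-1 once the dots are also stripped from the IP.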
&lt;br /&gt;
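The bulk-insert loops in processflows.pl feed their accumulated VALUES tuples to mysql in batches of roughly 100 at a time. The batching arithmetic can be sketched on its own, with plain numbers standing in for the tuples:

```shell
#!/bin/sh
# Sketch of batching N values into groups of at most 100, as the
# bulk-insert loops do with their @values arrays. The counts here
# are made up for illustration.
total=250
batch_size=100
batches=0

i=1
while [ "$i" -le "$total" ]; do
    end=$((i + batch_size - 1))
    [ "$end" -gt "$total" ] && end=$total
    # here the real script builds "insert into ... values (...),(...)"
    batches=$((batches + 1))
    i=$((end + 1))
done
echo "batches=$batches"
```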
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;gt; /usr/home/flowbin/db.pm&lt;br /&gt;
&lt;br /&gt;
#!/usr/bin/perl&lt;br /&gt;
#&lt;br /&gt;
# $Header: /usr/cvs/newgw/lib/db.pm,v 1.4 2003/06/05 18:20:01 glenn Exp $&lt;br /&gt;
#&lt;br /&gt;
# Copyright (c) 2003&lt;br /&gt;
#      e-Monitoring Networks, Inc.  All rights reserved.&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
package db;&lt;br /&gt;
&lt;br /&gt;
use strict;&lt;br /&gt;
use DBI;&lt;br /&gt;
&lt;br /&gt;
sub new {&lt;br /&gt;
    my $class = shift;&lt;br /&gt;
    my $self = {};&lt;br /&gt;
&lt;br /&gt;
    $self-&amp;gt;{&#039;debug&#039;} = 0;&lt;br /&gt;
    bless $self, $class;&lt;br /&gt;
&lt;br /&gt;
    return $self;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub connect {&lt;br /&gt;
    my $self = shift;&lt;br /&gt;
    my $dbname = shift;&lt;br /&gt;
    my $dbhost = shift;&lt;br /&gt;
    my $dbuser = shift;&lt;br /&gt;
    my $dbpass = shift;&lt;br /&gt;
&lt;br /&gt;
    my $host = &#039;&#039;;&lt;br /&gt;
    if (defined($dbhost)) {&lt;br /&gt;
        $host = &amp;quot;;host=$dbhost&amp;quot;;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    eval {&lt;br /&gt;
        $self-&amp;gt;debug(&amp;quot;connecting to: DBI:mysql:database=$dbname;$host&amp;quot;, 1);&lt;br /&gt;
        $self-&amp;gt;{&#039;dbh&#039;} = DBI-&amp;gt;connect(&amp;quot;DBI:mysql:database=$dbname;$host&amp;quot;, $dbuser, $dbpass);&lt;br /&gt;
    };&lt;br /&gt;
    if ($self-&amp;gt;{&#039;dbh&#039;}) {&lt;br /&gt;
        return 1;&lt;br /&gt;
    }&lt;br /&gt;
    $self-&amp;gt;{&#039;error&#039;} = &amp;quot;Error connecting to database $@&amp;quot;;&lt;br /&gt;
    $self-&amp;gt;debug(&amp;quot;Error connecting to database $@&amp;quot;);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub query {&lt;br /&gt;
    my $self = shift;&lt;br /&gt;
    my $query = shift;&lt;br /&gt;
&lt;br /&gt;
    $self-&amp;gt;debug($query, 1);&lt;br /&gt;
    my $sth;&lt;br /&gt;
    eval {&lt;br /&gt;
        $sth = $self-&amp;gt;{&#039;dbh&#039;}-&amp;gt;prepare($query);&lt;br /&gt;
    };&lt;br /&gt;
    unless ($sth) {&lt;br /&gt;
        $self-&amp;gt;{&#039;error&#039;} = &amp;quot;error preparing query $@&amp;quot;;&lt;br /&gt;
        $self-&amp;gt;debug(&amp;quot;error preparing query $@&amp;quot;);&lt;br /&gt;
        return undef;&lt;br /&gt;
    }&lt;br /&gt;
    my $qty;&lt;br /&gt;
    eval {&lt;br /&gt;
        $qty = $sth-&amp;gt;execute;&lt;br /&gt;
    };&lt;br /&gt;
    unless ($qty) {&lt;br /&gt;
        $self-&amp;gt;{&#039;error&#039;} = &amp;quot;error executing query $@&amp;quot;;&lt;br /&gt;
        warn &amp;quot;error executing query $@ $query&amp;quot;;&lt;br /&gt;
        return undef;&lt;br /&gt;
    }&lt;br /&gt;
    $self-&amp;gt;debug(&amp;quot;returning $qty, $sth from query&amp;quot;, 6);&lt;br /&gt;
    return ($qty, $sth);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub disconnect {&lt;br /&gt;
    my $self = shift;&lt;br /&gt;
&lt;br /&gt;
    $self-&amp;gt;{&#039;dbh&#039;}-&amp;gt;disconnect;&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub debug {&lt;br /&gt;
    my $self = shift;&lt;br /&gt;
    my $msg = shift;&lt;br /&gt;
    my $level = shift || 0;&lt;br /&gt;
&lt;br /&gt;
    if ($level &amp;lt; $self-&amp;gt;{&#039;debug&#039;}) {&lt;br /&gt;
        print &amp;quot;$msg\n&amp;quot;;&lt;br /&gt;
    }&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
1;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 mkdir /usr/home/archive&lt;br /&gt;
 mkdir -p /usr/home/sql/tmp&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;crontab -e&lt;br /&gt;
#process flows&lt;br /&gt;
2,17,32,47 * * * * /usr/home/flowbin/processflows.pl&amp;lt;/pre&amp;gt;&lt;br /&gt;
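Both processflows.pl above and processsql.pl below guard their queue with the same pid-file pattern: read the pid out of `.lock`, probe it with a signal-0 kill, and either bail out (live owner) or remove the stale file (dead owner). A minimal shell sketch of that pattern, using a throwaway directory rather than the real queue:

```shell
#!/bin/sh
# Sketch of the stale-lock check the flow scripts use; "queue" is a
# temporary stand-in for the real /usr/home/working.
queue=$(mktemp -d)

# Simulate a lock left behind by a dead process: this pid is far
# beyond any plausible live pid.
echo 99999999 > "$queue/.lock"

if [ -e "$queue/.lock" ]; then
    pid=$(cat "$queue/.lock")
    if kill -0 "$pid" 2>/dev/null; then
        # live owner: bail out, exactly like the Perl exit(0)
        exit 0
    else
        # dead owner: remove the stale lock and take over
        rm "$queue/.lock"
    fi
fi
echo $$ > "$queue/.lock"

owner=$(cat "$queue/.lock")
echo "lock held by $owner"
```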
&lt;br /&gt;
==== setup traffic db ====&lt;br /&gt;
* Install mysql:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/ports/databases/mysql50-server&lt;br /&gt;
make install clean&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 cat &amp;gt;&amp;gt; /etc/rc.conf&lt;br /&gt;
 mysql_enable=&amp;quot;YES&amp;quot; &lt;br /&gt;
&lt;br /&gt;
Move db data dir:&lt;br /&gt;
 /usr/local/etc/rc.d/mysql-server stop&lt;br /&gt;
 mkdir /usr/home/database/&lt;br /&gt;
 mv /var/db/mysql/* /usr/home/database/&lt;br /&gt;
 chown -R mysql:mysql /usr/home/database&lt;br /&gt;
&lt;br /&gt;
Edit database location in startup script:&lt;br /&gt;
 vi /usr/local/etc/rc.d/mysql-server&lt;br /&gt;
 # : ${mysql_dbdir=&amp;quot;/var/db/mysql&amp;quot;}&lt;br /&gt;
 : ${mysql_dbdir=&amp;quot;/usr/home/database&amp;quot;}&lt;br /&gt;
&lt;br /&gt;
 /usr/local/etc/rc.d/mysql-server start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Install mysql perl database modules:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /usr/ports/databases/p5-DBI&lt;br /&gt;
make install clean&lt;br /&gt;
cd /usr/ports/databases/p5-DBD-mysql50&lt;br /&gt;
make install clean&lt;br /&gt;
(no to SSL support)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Setting up database&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rehash&lt;br /&gt;
/usr/local/etc/rc.d/mysql-server start&lt;br /&gt;
mysql -u root&lt;br /&gt;
create database traffic;&lt;br /&gt;
grant all on *.* to root@localhost identified by &#039;5over3&#039;;&lt;br /&gt;
grant all on traffic.* to jc@10.1.4.5 identified by &#039;2gMKY3Wt&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this were a new server, we&#039;d set up new tables. See [[#mysql_2|mysql]] for how those tables would be set up.&lt;br /&gt;
&lt;br /&gt;
We&#039;re assuming here that we&#039;re moving data from an existing db; here&#039;s how that&#039;s done (run from the current traffic db server):&lt;br /&gt;
 rsync -av --progress /usr/home/database/traffic/ 10.1.4.203:/usr/home/database/traffic/&lt;br /&gt;
&lt;br /&gt;
When you&#039;re ready to do the cutover, shut down mysql on both hosts and do one last sync.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== process flows from bwdb2 ====&lt;br /&gt;
On the traffic database server (bwdb):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;crontab -e&lt;br /&gt;
#import sql from bwdb2&lt;br /&gt;
10,25,40,55 * * * * /usr/home/flowbin/processsql.pl&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add access to mysql: &lt;br /&gt;
&amp;lt;pre&amp;gt;mysql -u root -p &lt;br /&gt;
grant all on traffic.* to bwdb2@localhost identified by &#039;s1lver4d&#039;;&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cat &amp;gt; /usr/home/flowbin/processsql.pl&lt;br /&gt;
&lt;br /&gt;
#!/usr/bin/perl&lt;br /&gt;
&lt;br /&gt;
#use strict;&lt;br /&gt;
#$debug=1;&lt;br /&gt;
#$dry=1;&lt;br /&gt;
&lt;br /&gt;
my $sqldir = &amp;quot;/usr/home/bwdb2/pending&amp;quot;;&lt;br /&gt;
my $mysql = &#039;/usr/local/bin/mysql&#039;;&lt;br /&gt;
my @err;&lt;br /&gt;
unless ($dry) {&lt;br /&gt;
    if (-e &amp;quot;$sqldir/.lock&amp;quot;) {&lt;br /&gt;
        open(FILE, &amp;quot;$sqldir/.lock&amp;quot;);&lt;br /&gt;
        my $pid = &amp;lt;FILE&amp;gt;;&lt;br /&gt;
        chomp($pid);&lt;br /&gt;
        close(FILE);&lt;br /&gt;
        if (kill(0, $pid)) {&lt;br /&gt;
            #another process is using the queue, bail out&lt;br /&gt;
            exit(0);&lt;br /&gt;
        }&lt;br /&gt;
        else {&lt;br /&gt;
            #dead lock file, remove it&lt;br /&gt;
            `rm $sqldir/.lock`;&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    open(FILE, &amp;quot;&amp;gt; $sqldir/.lock&amp;quot;);&lt;br /&gt;
    print FILE &amp;quot;$$\n&amp;quot;;&lt;br /&gt;
    close(FILE);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
opendir(DIR, $sqldir);&lt;br /&gt;
my @files = readdir(DIR);&lt;br /&gt;
closedir(DIR);&lt;br /&gt;
&lt;br /&gt;
foreach my $file (sort @files) {&lt;br /&gt;
   next unless $file =~ /done$/;&lt;br /&gt;
   my $r = `bzcat $sqldir/$file | $mysql -u bwdb2 -ps1lver4d traffic`;&lt;br /&gt;
   unless ($?==0) {&lt;br /&gt;
      push @err, &amp;quot;bzcat $sqldir/$file | $mysql -u bwdb2 -pxxxxx traffic ($r)&amp;quot;;&lt;br /&gt;
   }&lt;br /&gt;
   else {&lt;br /&gt;
      `rm $sqldir/$file`;&lt;br /&gt;
   }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
`rm $sqldir/.lock` unless $dry;&lt;br /&gt;
&lt;br /&gt;
if (@err) {&lt;br /&gt;
   email_support(&#039;bwdb: processsql.pl error&#039;,join &amp;quot;\n&amp;quot;, @err);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub email_support {&lt;br /&gt;
    my $subj=shift;&lt;br /&gt;
    my $body=shift;&lt;br /&gt;
    use Mail::Sendmail;&lt;br /&gt;
&lt;br /&gt;
    # prepare message&lt;br /&gt;
    my %mail = (&lt;br /&gt;
        To      =&amp;gt; &#039;dave@johncompanies.com&#039;,&lt;br /&gt;
        From    =&amp;gt; &#039;support@johncompanies.com&#039;,&lt;br /&gt;
        Subject =&amp;gt; $subj,&lt;br /&gt;
        Message =&amp;gt; $body,&lt;br /&gt;
        smtp    =&amp;gt; &#039;mail.johncompanies.com&#039;,&lt;br /&gt;
    );&lt;br /&gt;
    sendmail(%mail) || warn &amp;quot;Error: $Mail::Sendmail::error&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub debug {&lt;br /&gt;
    my $message = shift;&lt;br /&gt;
    if ($debug) {&lt;br /&gt;
        print &amp;quot;$message\n&amp;quot;;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
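The import loop above acts only on files ending in `.done`, deletes each file once its sql has been applied cleanly, and collects failures for the support email. The same skeleton in shell, with plain `cat` standing in for the real `bzcat file | mysql` pipeline:

```shell
#!/bin/sh
# Sketch of the processsql.pl loop: act on *.done files, delete on
# success, remember failures. "cat" is a stand-in for bzcat|mysql,
# and the directory/file names are made up for illustration.
sqldir=$(mktemp -d)
echo "INSERT INTO t VALUES (1);" > "$sqldir/batch1.sql.done"
touch "$sqldir/skipme.sql"        # no .done suffix: must be ignored

errors=""
for f in "$sqldir"/*.done; do
    if cat "$f" > /dev/null; then
        rm "$f"                    # imported cleanly: drop the file
    else
        errors="$errors $f"        # keep it and report, like @err
    fi
done

remaining=$(ls "$sqldir" | wc -l | tr -d ' ')
echo "remaining=$remaining errors=[$errors]"
```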
&lt;br /&gt;
 chmod 0700 /usr/home/flowbin/processsql.pl&lt;br /&gt;
&lt;br /&gt;
Make sure bwdb is reachable from the outside only by bwdb2:&lt;br /&gt;
&lt;br /&gt;
On nat, add to &amp;lt;tt&amp;gt;/etc/ipnat.rules&amp;lt;/tt&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# bwdb&lt;br /&gt;
bimap fxp0 10.1.4.203/32 -&amp;gt; 69.55.233.199/32&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reload:&lt;br /&gt;
 ipnat -C -F -f /etc/ipnat.rules&lt;br /&gt;
&lt;br /&gt;
Setup firewall rule on firewall:&lt;br /&gt;
 ipfw add 00094 allow ip from 66.181.18.5 to 69.55.233.199 22&lt;br /&gt;
 ipfw add 00094 deny ip from any to 69.55.233.199&lt;br /&gt;
&lt;br /&gt;
Setup firewall on bwdb to restrict access now that it&#039;s nat&#039;d:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;gt;&amp;gt; /usr/local/etc/rc.d/boot.sh&lt;br /&gt;
ipfw add 1 allow tcp from any to any established&lt;br /&gt;
ipfw add 2 allow ip from 10.1.4.0/24,66.181.18.5,69.55.233.195 to me 22&lt;br /&gt;
ipfw add 3 allow ip from 10.1.4.5 to me 3306&lt;br /&gt;
ipfw add 4 allow ip from 69.55.225.225 53 to me &lt;br /&gt;
ipfw add 5 allow ip from 69.55.230.2 25 to me &lt;br /&gt;
ipfw add 6 allow ip from me to me 4444&lt;br /&gt;
ipfw add 7 allow icmp from any to me&lt;br /&gt;
ipfw add 8 allow udp from 10.1.4.203 to 10.1.4.203 dst-port 4444&lt;br /&gt;
ipfw add 9 allow udp from 10.1.4.5 to me 161&lt;br /&gt;
ipfw add 100 deny ip from any to me&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
 chmod 0700 /usr/local/etc/rc.d/boot.sh&lt;br /&gt;
&lt;br /&gt;
From bwdb2, add ssh key:&lt;br /&gt;
 cat /root/.ssh/id_dsa.pub | ssh 69.55.233.199 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
&lt;br /&gt;
Confirm key-based access works without a password prompt:&lt;br /&gt;
 ssh 69.55.233.199 hostname&lt;br /&gt;
&lt;br /&gt;
= bwdb2 =&lt;br /&gt;
== Summary ==&lt;br /&gt;
This machine tracks and stores network traffic (netflow) at i2b. It is our means to monitor customer bandwidth usage.&lt;br /&gt;
&lt;br /&gt;
* Location: i2b, cab6&lt;br /&gt;
* OS: FreeBSD 6.4 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.2.4. There are 2 onboard nics, one of which is the &amp;quot;listener&amp;quot;.&lt;br /&gt;
* Hardware: Custom 2U. Single power supply. &lt;br /&gt;
* Drives: one 150 GB RAID1 array (2 x 150GB) running on a 3ware 7006 RAID card.&lt;br /&gt;
&lt;br /&gt;
== Services Provided ==&lt;br /&gt;
* netflow&lt;br /&gt;
* bigbrother &lt;br /&gt;
&lt;br /&gt;
== netflow ==&lt;br /&gt;
&lt;br /&gt;
The main function of this server is to run netflow on an ethernet device in promiscuous mode so it hears everything happening on its switch port (all network traffic is mirrored to that port by the cisco switch). Every 15 minutes it creates a flow file under &amp;lt;tt&amp;gt;/usr/home/flows/&amp;lt;/tt&amp;gt; (organized by date); each flow file contains all traffic data for one 15-minute interval.&lt;br /&gt;
&lt;br /&gt;
A cronjob moves that flow file (or files, if a delay has left more than one)&lt;br /&gt;
 1,16,31,46 * * * * /usr/home/flowbin/queue.pl&lt;br /&gt;
&lt;br /&gt;
into a processing queue:&lt;br /&gt;
&amp;lt;tt&amp;gt;/usr/home/working&amp;lt;/tt&amp;gt;&lt;br /&gt;
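queue.pl itself isn't reproduced on this page; per the description above, its job is essentially to sweep finished flow files into the working directory. A sketch of that step, with temporary directories standing in for /usr/home/flows and /usr/home/working:

```shell
#!/bin/sh
# Sketch of the queueing step: move completed flow files from the
# collection area into the working queue. Paths and file names are
# illustrative stand-ins.
flows=$(mktemp -d)
working=$(mktemp -d)

# Fake completed flow files in flow-tools naming style
touch "$flows/ft-v05.2005-03-28.084500-0800"
touch "$flows/ft-v05.2005-03-28.090000-0800"

for f in "$flows"/ft-v05.*; do
    mv "$f" "$working/"
done

queued=$(ls "$working" | wc -l | tr -d ' ')
echo "queued=$queued"
```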
&lt;br /&gt;
A separate cronjob then processes whatever flow files it finds there and builds sql files ready for insertion into the traffic database:&lt;br /&gt;
 2,17,32,47 * * * * /usr/home/flowbin/processflows-sql.pl&lt;br /&gt;
&lt;br /&gt;
Then yet another process copies the sql files to the traffic database server for processing and insertion into the mysql database:&lt;br /&gt;
 8,23,38,53 * * * * /usr/home/flowbin/sendsql.pl&lt;br /&gt;
&lt;br /&gt;
== Regular maintenance ==&lt;br /&gt;
*[[Routine_Maintenance#Adaptec_Controllers|Check RAID array]]&lt;br /&gt;
&lt;br /&gt;
* If space becomes tight, move the sql files and flow files (both located in &amp;lt;tt&amp;gt;/usr/home/flowbin/archive&amp;lt;/tt&amp;gt;) to the backup server.&lt;br /&gt;
&lt;br /&gt;
= firewall (newgateway) =&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
This machine is the primary (only) firewall for the entire network at castle. &lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-8&lt;br /&gt;
* OS: FreeBSD 4.11 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.223, Pub IPs: 69.55.233.164 (external), 69.55.233.156 (internal). It has 3 network connections (2 onboard, 1 PCI) connecting to the external, internal and private networks. Looking at the back of the server, the internal-network-facing nic (em1) is on the right, and the external-network-facing nic (em0, connected to the 3750) is on the left.&lt;br /&gt;
* Hardware: 6 SCSI SCA drive bays (2 columns of 3, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: 36 GB (2 x 36GB) RAID1 array running on an Adaptec 2120S PCI RAID card.&lt;br /&gt;
&lt;br /&gt;
== Services Provided ==&lt;br /&gt;
* firewall (ipfw)&lt;br /&gt;
* snmp&lt;br /&gt;
* bigbrother&lt;br /&gt;
&lt;br /&gt;
== Firewall Rule Configuration ==&lt;br /&gt;
&lt;br /&gt;
See [[FreeBSD_Reference#Firewall_Rule_Configuration|Firewall Rule Configuration]] for more discussion on how to actually manipulate firewall rules.&lt;br /&gt;
&lt;br /&gt;
== Disaster Recovery ==&lt;br /&gt;
&lt;br /&gt;
If there is ever an outage with the firewall, the old firewall &amp;quot;gate&amp;quot; is located just below it and is running with the proper network configuration, but with no firewall rules in place (to maintain good throughput). Have castle move the left cable on the current firewall to the left port on the old firewall, and the right cable to the right port.&lt;br /&gt;
&lt;br /&gt;
Here&#039;s what you need to put in /etc/rc.conf to get a firewall going (as far as routes and IPs)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;hostname=&amp;quot;newgateway.johncompanies.com&amp;quot;&lt;br /&gt;
firewall_script=&amp;quot;/etc/firewall.sh&amp;quot;&lt;br /&gt;
firewall_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
sendmail_enable=&amp;quot;NONE&amp;quot;&lt;br /&gt;
sshd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
xntpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
snmpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
#snmpd_flags=&amp;quot;-as -p /var/run/snmpd.pid&amp;quot;&lt;br /&gt;
#ipnat_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
#ipnat_rules=&amp;quot;/etc/ipnat.rules&amp;quot;&lt;br /&gt;
gateway_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
defaultrouter=&amp;quot;69.55.233.161&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ifconfig_xl0=&amp;quot;inet 10.1.4.223 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em0=&amp;quot;inet 69.55.233.164 netmask 255.255.255.248&amp;quot;&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
# Original JohnCompanies 69.55.224.0/20&lt;br /&gt;
#&lt;br /&gt;
ifconfig_em1=&amp;quot;inet 69.55.233.156 netmask 255.255.255.248&amp;quot;&lt;br /&gt;
&lt;br /&gt;
static_routes=&amp;quot;route1 route2 route3 route4 route5 route6 route7 route8 route9 route10 route11 route12 route13 route14 route15 route16 route17 route18&amp;quot;&lt;br /&gt;
&lt;br /&gt;
route_route1=&amp;quot;-net 69.55.224.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route2=&amp;quot;-net 69.55.225.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route3=&amp;quot;-net 69.55.226.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route4=&amp;quot;-net 69.55.227.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route5=&amp;quot;-net 69.55.228.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route6=&amp;quot;-net 69.55.229.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route7=&amp;quot;-net 69.55.230.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route8=&amp;quot;-net 69.55.231.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route9=&amp;quot;-net 69.55.232.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route10=&amp;quot;-net 69.55.233.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route11=&amp;quot;-net 69.55.234.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route12=&amp;quot;-net 69.55.235.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route13=&amp;quot;-net 69.55.236.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route14=&amp;quot;-net 69.55.237.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route15=&amp;quot;-net 69.55.238.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route16=&amp;quot;-net 69.55.239.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route17=&amp;quot;-net 10.1.5.0 10.1.4.2&amp;quot;&lt;br /&gt;
route_route18=&amp;quot;-net 10.1.6.0 10.1.4.2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#In case of 3750 failure:&lt;br /&gt;
#defaultrouter=&amp;quot;69.43.128.81&amp;quot;&lt;br /&gt;
#ifconfig_em0=&amp;quot;inet 69.43.129.84 netmask 255.255.255.248&amp;quot;&lt;br /&gt;
&lt;br /&gt;
#bind .1&#039;s here:&lt;br /&gt;
#ifconfig_em1=&amp;quot;inet 69.55.224.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias0=&amp;quot;inet 69.55.225.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias1=&amp;quot;inet 69.55.226.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias2=&amp;quot;inet 69.55.227.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias3=&amp;quot;inet 69.55.228.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias4=&amp;quot;inet 69.55.229.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias5=&amp;quot;inet 69.55.230.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias6=&amp;quot;inet 69.55.231.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias7=&amp;quot;inet 69.55.232.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias8=&amp;quot;inet 69.55.233.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias9=&amp;quot;inet 69.55.234.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias10=&amp;quot;inet 69.55.235.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias11=&amp;quot;inet 69.55.236.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias12=&amp;quot;inet 69.55.237.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias13=&amp;quot;inet 69.55.238.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias14=&amp;quot;inet 69.55.239.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
#bulk:&lt;br /&gt;
# reassign 69.55.231.1 to the int iface on the firewall&lt;br /&gt;
# set the DG on the firewall to 69.43.138.9&lt;br /&gt;
# set the ext firewall IP to 69.43.138.12, NM: 255.255.255.248&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Cronjobs ==&lt;br /&gt;
 1 0 * * * /usr/local/etc/rsync.backup&lt;br /&gt;
Backup to backup1&lt;br /&gt;
&lt;br /&gt;
 0 0 1 * * /sbin/ipfw zero&lt;br /&gt;
 0 0 1 * * /sbin/ipfw del 3  4 5 17331&lt;br /&gt;
Reset counters and remove pipe rules on the 1st of the month. Be careful not to number a permanent rule (one that isn&#039;t a temporary traffic cap) as 3, 4 or 5, since those rule numbers are deleted monthly.&lt;br /&gt;
&lt;br /&gt;
Inside &amp;lt;tt&amp;gt;/etc/daily.local&amp;lt;/tt&amp;gt; you will see a call to &amp;lt;tt&amp;gt;/etc/makepiperules.pl&amp;lt;/tt&amp;gt;&lt;br /&gt;
This script will create &amp;lt;tt&amp;gt;/etc/firewall.sh&amp;lt;/tt&amp;gt; which contains all the firewall and pipe rules in place at the time the script was run.&lt;br /&gt;
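The general idea behind persisting the live ruleset can be sketched as below: turn ipfw list output (rule number followed by the rule body) back into an executable script of ipfw add commands. This is an illustration, not the actual makepiperules.pl, and the sample rules are made up:

```shell
#!/bin/sh
# Illustrative sketch: transform "ipfw list"-style output into
# "/sbin/ipfw add <num> <body>" commands, producing a script that
# recreates the ruleset at boot. Not the actual makepiperules.pl.
SAMPLE='00100 allow ip from any to any via lo0
00200 deny ip from any to 127.0.0.0/8
65535 allow ip from any to any'

# first field is the rule number; the rest is the rule body
echo "$SAMPLE" | awk '{ n = $1; $1 = ""; printf "/sbin/ipfw add %s%s\n", n, $0 }' > /tmp/firewall-demo.sh
```

On the real firewall the input would come from `/sbin/ipfw list` rather than a sample string.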
&lt;br /&gt;
== DOS attacks ==&lt;br /&gt;
&lt;br /&gt;
See [[FreeBSD_Reference#Handling_a_DoS_attack|Handling a DoS attack]] regarding how to handle a DOS attack.&lt;br /&gt;
&lt;br /&gt;
There&#039;s a background process (running from the &#039;user&#039; account&#039;s shell) that monitors the firewall for incoming UDP DoS attacks. When it notices packets above a certain level it will:&lt;br /&gt;
# enter a rule that allows all UDP to go through&lt;br /&gt;
# send an emergency email to support indicating an attack is in progress&lt;br /&gt;
# send an email to castle (nocstaff@castleaccess.com and jcsupport@castleaccess.com) telling them to investigate and put up a null route if warranted&lt;br /&gt;
# wait a couple of minutes to see if the attack subsides; if so, it removes the pass-all UDP rule, if not, it repeats the process from step 1&lt;br /&gt;
The script lives at /usr/home/user/doswatch.pl&lt;br /&gt;
To run:&lt;br /&gt;
 cd /usr/home/user&lt;br /&gt;
 ./doswatch.pl &amp;amp;&lt;br /&gt;
&lt;br /&gt;
To kill:&lt;br /&gt;
 fg&lt;br /&gt;
 ^C&lt;br /&gt;
&lt;br /&gt;
It writes its findings to /usr/home/user/doswatch.log&lt;br /&gt;
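The trigger decision in the monitoring loop above can be sketched as follows. This is an illustration only (the real script is /usr/home/user/doswatch.pl, written in Perl), and the threshold value here is a made-up assumption, not the production level:

```shell
#!/bin/sh
# Sketch of the doswatch trigger logic (illustration only; the real
# script is /usr/home/user/doswatch.pl). THRESHOLD is a hypothetical
# packets-per-interval level, not the production value.
THRESHOLD=50000

udp_flood_detected() {
    # $1 = UDP packets counted during the sampling interval
    [ "$1" -gt "$THRESHOLD" ]
}

if udp_flood_detected 120000; then
    # real script: insert pass-all UDP rule, email support and castle,
    # then re-check after a couple of minutes
    echo "attack in progress"
else
    echo "normal traffic"
fi
```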
&lt;br /&gt;
= backup1 =&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
This machine acts as the primary backup location for all VPS-based customers. No customer directly accesses this server to perform their backups. We also store cancelled customers on this server. &lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-8&lt;br /&gt;
* OS: Ubuntu 8.04.1 server x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.8, Pub IP: 69.55.230.11 (firewalled from all but JC infrastructure @ i2b)&lt;br /&gt;
* Hardware: 6 SATA drive bays (2 columns of 3, drive 0 top left, drive 1 just below) all hot-swap. Single power supply. &lt;br /&gt;
* Drives: 4.5 TB (6 x 1TB) RAID5 array running on a 3ware 9650SE-8LPML (8-port) card&lt;br /&gt;
&lt;br /&gt;
== Services provided ==&lt;br /&gt;
* backup via rsync&lt;br /&gt;
* mysql - traffic data&lt;br /&gt;
* nfs server - for backups&lt;br /&gt;
* snmp client - for big brother&lt;br /&gt;
* bigbrother client&lt;br /&gt;
&lt;br /&gt;
== Usage and Notes ==&lt;br /&gt;
* all data is stored under /data&lt;br /&gt;
* virtually all jc infrastructure, and all VPS machines are setup to mount to backup1 via nfs (mountpoint: &amp;lt;tt&amp;gt;/backup1&amp;lt;/tt&amp;gt;), and they all have their ssh keys setup to allow passwordless rsync&#039;s&lt;br /&gt;
* each virt or jail backs up each evening to backup1. Each server has its own directory (named for the server). Under those directories are 7 daily snapshots (0-6)&lt;br /&gt;
* at the time of writing, the mysql server running here is replicating from (slave to) the mysql instance on bwdb. Requests for customer bandwidth usage data (coming from management, account manager, and accounting scripts running on mail) are all directed to the database &amp;quot;traffic&amp;quot; running on this server.&lt;br /&gt;
* cancelled customer systems are compressed and stored under &amp;lt;tt&amp;gt;/data/deprecated&amp;lt;/tt&amp;gt;&lt;br /&gt;
* archived bwdb2 flow files are stored under &amp;lt;tt&amp;gt;/data/bwdb2&amp;lt;/tt&amp;gt;&lt;br /&gt;
* critical files from backup2 are stored under &amp;lt;tt&amp;gt;/data/backup2&amp;lt;/tt&amp;gt;&lt;br /&gt;
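The seven-deep daily snapshot scheme noted above (0 = newest, 6 = oldest) amounts to a rotate-and-recreate loop each night. A sketch using a /tmp demo directory; the real logic lives in snapshot_rotate and is not reproduced here:

```shell
#!/bin/sh
# Sketch of 7-deep snapshot rotation (0 = newest, 6 = oldest);
# an illustrative stand-in for /usr/local/sbin/snapshot_rotate.
DIR=/tmp/snapdemo
rm -rf "$DIR"
for i in 0 1 2 3 4 5 6; do
    mkdir -p "$DIR/$i"
    touch "$DIR/$i/marker.$i"    # tag each snapshot so we can watch it move
done

rm -rf "$DIR/6"                  # drop the oldest snapshot
for i in 5 4 3 2 1 0; do         # shift every snapshot down one slot
    mv "$DIR/$i" "$DIR/$((i + 1))"
done
mkdir "$DIR/0"                   # slot 0 receives tonight's rsync
```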
&lt;br /&gt;
== Cronjobs ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
00 5 * * * /usr/local/sbin/backupwatch.pl 2&amp;gt;&amp;amp;1 &amp;gt; /dev/null&lt;br /&gt;
35 5 * * * /usr/local/sbin/usage_check; /usr/local/sbin/snapshot_archive; /usr/local/sbin/snapshot_rotate  /data/backuplog.log&amp;lt;/pre&amp;gt;&lt;br /&gt;
this runs daily scripts that report how much disk space each customer system occupies and how long its backups took. Then it rotates backups for each system, removing the oldest backup. It will email support@johncompanies.com at its conclusion. This email can be deleted; however, note when the job begins to take significantly longer to complete (i.e. runs past 22:00), as this usually indicates a problem on the backup server.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;10,25,40,55 * * * * /usr/local/sbin/processsql.pl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
this processes prepared SQL command files sent by bwdb2 (at i2b) and imports them into the traffic database.&lt;br /&gt;
&amp;lt;pre&amp;gt;0 0 * * * /usr/local/sbin/3wraidchk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
checks the health of the RAID array&lt;br /&gt;
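A health check in this spirit boils down to parsing the controller CLI&#039;s unit status column and flagging anything that isn&#039;t healthy. An illustrative sketch (not the actual 3wraidchk script; the sample tw_cli output line and column positions are assumptions):

```shell
#!/bin/sh
# Illustrative RAID health check: inspect a unit line as printed by
# "tw_cli /c0 show" and decide whether the unit needs attention.
# Not the real 3wraidchk; column layout here is an assumption.
check_unit() {
    # $1 = one "uN ..." unit line; column 3 is the unit status
    status=$(echo "$1" | awk '{ print $3 }')
    case "$status" in
        OK|VERIFYING) echo "healthy" ;;
        *)            echo "DEGRADED: $status" ;;   # real script would email support
    esac
}

check_unit 'u0    RAID-5    REBUILDING    45%    -    4656.51    ON    OFF'
```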
&lt;br /&gt;
== Regular maintenance ==&lt;br /&gt;
*[[Routine_Maintenance#Free_up_space_on_backup1|Remove old backups]]&lt;br /&gt;
*[[Routine_Maintenance#3ware|Check on auto-verify]]&lt;br /&gt;
&lt;br /&gt;
== build ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;Set up RAID5 (5.45 TB total) with a 12 GB boot volume:&lt;br /&gt;
12 GB boot&lt;br /&gt;
4664 GB data&lt;br /&gt;
&lt;br /&gt;
Install ubuntu 8.04&lt;br /&gt;
&lt;br /&gt;
Swap 4G&lt;br /&gt;
&lt;br /&gt;
Don’t format data drive&lt;br /&gt;
&lt;br /&gt;
http://www.unixgods.org/~tilo/linux_larger_2TB.html&lt;br /&gt;
&lt;br /&gt;
parted /dev/sdb&lt;br /&gt;
print&lt;br /&gt;
mklabel gpt&lt;br /&gt;
print&lt;br /&gt;
&lt;br /&gt;
#Disk /dev/sdb: 4987GB&lt;br /&gt;
#Sector size (logical/physical): 512B/512B&lt;br /&gt;
#Partition Table: gpt&lt;br /&gt;
&lt;br /&gt;
#Number  Start  End  Size  File system  Name  Flags&lt;br /&gt;
&lt;br /&gt;
mkpart primary ext3 0 4987GB&lt;br /&gt;
print&lt;br /&gt;
&lt;br /&gt;
#Disk /dev/sdb: 4987GB&lt;br /&gt;
#Sector size (logical/physical): 512B/512B&lt;br /&gt;
#Partition Table: gpt&lt;br /&gt;
&lt;br /&gt;
#Number  Start   End     Size    File system  Name     Flags&lt;br /&gt;
# 1      17.4kB  4987GB  4987GB               primary&lt;br /&gt;
&lt;br /&gt;
quit&lt;br /&gt;
&lt;br /&gt;
mkfs.ext3 /dev/sdb1&lt;br /&gt;
#mke2fs 1.40.8 (13-Mar-2008)&lt;br /&gt;
#Filesystem label=&lt;br /&gt;
#OS type: Linux&lt;br /&gt;
#Block size=4096 (log=2)&lt;br /&gt;
#Fragment size=4096 (log=2)&lt;br /&gt;
#304390144 inodes, 1217544183 blocks&lt;br /&gt;
#60877209 blocks (5.00%) reserved for the super user&lt;br /&gt;
#First data block=0&lt;br /&gt;
#Maximum filesystem blocks=0&lt;br /&gt;
#37157 block groups&lt;br /&gt;
#32768 blocks per group, 32768 fragments per group&lt;br /&gt;
#8192 inodes per group&lt;br /&gt;
#Superblock backups stored on blocks:&lt;br /&gt;
#        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,&lt;br /&gt;
#        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,&lt;br /&gt;
#        102400000, 214990848, 512000000, 550731776, 644972544&lt;br /&gt;
#&lt;br /&gt;
#Writing inode tables:   967/37157&lt;br /&gt;
&lt;br /&gt;
mkdir /data&lt;br /&gt;
&lt;br /&gt;
#root@backup1:~# df -h&lt;br /&gt;
#Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
#/dev/sda2             8.3G  540M  7.3G   7% /&lt;br /&gt;
#varrun               1013M   40K 1013M   1% /var/run&lt;br /&gt;
#varlock              1013M     0 1013M   0% /var/lock&lt;br /&gt;
#udev                 1013M   56K 1013M   1% /dev&lt;br /&gt;
#devshm               1013M     0 1013M   0% /dev/shm&lt;br /&gt;
#/dev/sdb1             4.5T  192M  4.3T   1% /data&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
apt-get update&lt;br /&gt;
apt-get upgrade&lt;br /&gt;
apt-get install snmp snmpd ntp nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;\&amp;quot;\e[5~\&amp;quot;: history-search-backward&amp;quot; &amp;gt;&amp;gt; ~/.inputrc&lt;br /&gt;
echo &amp;quot;\&amp;quot;\e[6~\&amp;quot;: history-search-forward&amp;quot; &amp;gt;&amp;gt; ~/.inputrc&lt;br /&gt;
&lt;br /&gt;
vi /etc/ntp.conf&lt;br /&gt;
server 10.1.4.5&lt;br /&gt;
&lt;br /&gt;
scp root@10.1.4.3:/root/.ssh/authorized_keys /root/.ssh/&lt;br /&gt;
cd /root/&lt;br /&gt;
ssh-keygen -t dsa&lt;br /&gt;
echo &amp;quot;10.1.4.3        backup2&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
&lt;br /&gt;
cat .ssh/id_dsa.pub | ssh backup2 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
&lt;br /&gt;
ssh backup2&lt;br /&gt;
&lt;br /&gt;
vi /root/.bashrc&lt;br /&gt;
alias h=&#039;history&#039;&lt;br /&gt;
alias vi=&#039;vim&#039;&lt;br /&gt;
alias j=&#039;jobs&#039;&lt;br /&gt;
export PS1=&amp;quot;[\u@\h \w]# &amp;quot;&lt;br /&gt;
alias dr=&#039;screen -dr&#039;&lt;br /&gt;
alias dr=&#039;screen -dr&#039;&lt;br /&gt;
export EDITOR=vim&lt;br /&gt;
export GREP_OPTIONS=&#039;--color=auto&#039;&lt;br /&gt;
export HISTFILESIZE=1000&lt;br /&gt;
&lt;br /&gt;
source /root/.bashrc&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;# ttyS0 - getty &lt;br /&gt;
# &lt;br /&gt;
# This service maintains a getty on ttyS0 from the point the system is&lt;br /&gt;
# started until it is shut down again.&lt;br /&gt;
&lt;br /&gt;
start on runlevel 2&lt;br /&gt;
start on runlevel 3&lt;br /&gt;
start on runlevel 4&lt;br /&gt;
start on runlevel 5&lt;br /&gt;
&lt;br /&gt;
stop on runlevel 0&lt;br /&gt;
stop on runlevel 1&lt;br /&gt;
stop on runlevel 6&lt;br /&gt;
&lt;br /&gt;
respawn&lt;br /&gt;
exec /sbin/getty 38400 ttyS0&amp;quot; &amp;gt; /etc/event.d/ttyS0 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vi /boot/grub/menu.lst&lt;br /&gt;
&lt;br /&gt;
serial --unit=0 --speed=38400 --word=8 --parity=no --stop=1&lt;br /&gt;
terminal --timeout=15 serial console&lt;br /&gt;
&lt;br /&gt;
append to kernel lines: &lt;br /&gt;
console=tty0 console=ttyS0,38400n8&lt;br /&gt;
&lt;br /&gt;
show menu:&lt;br /&gt;
#hiddenmenu&lt;br /&gt;
&lt;br /&gt;
echo &#039;rocommunity  jcread 10.1.4.5&lt;br /&gt;
rocommunity  jcread 10.1.4.3&lt;br /&gt;
agentaddress 10.1.4.8:161&#039; &amp;gt; /etc/snmp/snmpd.conf&lt;br /&gt;
&lt;br /&gt;
# to see which iface it is, on backup2:&lt;br /&gt;
&lt;br /&gt;
snmpwalk -v 1 -c jcread 10.1.4.8 interface&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;bb:x:1984:1984:Big Brother:/home/bb:/bin/bash&amp;quot; &amp;gt;&amp;gt; /etc/passwd&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;bb:x:1984:&amp;quot; &amp;gt;&amp;gt; /etc/group&lt;br /&gt;
&lt;br /&gt;
pwconv&lt;br /&gt;
&lt;br /&gt;
mkdir /home/bb&lt;br /&gt;
chown bb.bb /home/bb&lt;br /&gt;
&lt;br /&gt;
cd ~bb&lt;br /&gt;
scp backup2:/mnt/data4/build/bb/bb-linux.tar .&lt;br /&gt;
&lt;br /&gt;
tar xf bb-linux.tar&lt;br /&gt;
&lt;br /&gt;
cd /home/bb/bbc1.9e-btf/etc&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;10.1.4.5 mail.johncompanies.com # BBPAGER BBNET BBDISPLAY smtp ssh&amp;quot; &amp;gt; /home/bb/bbc1.9e-btf/etc/bb-hosts&lt;br /&gt;
echo &amp;quot;10.1.4.8 backup1.johncompanies.com # ssh&amp;quot; &amp;gt;&amp;gt; /home/bb/bbc1.9e-btf/etc/bb-hosts&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;/:90:95&lt;br /&gt;
/var:90:95&lt;br /&gt;
/data:85:99&amp;quot; &amp;gt; /home/bb/bbc1.9e-btf/etc/bb-dftab&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vi /home/bb/bbc1.9e-btf/bin/bb-disk.sh&lt;br /&gt;
(remove all | SORT xxxx)&lt;br /&gt;
&lt;br /&gt;
chmod +r /var/log/messages&lt;br /&gt;
&lt;br /&gt;
./bbchkcfg.sh &lt;br /&gt;
#(y to questions)&lt;br /&gt;
./bbchkhosts.sh&lt;br /&gt;
#(ignore ssh errors)&lt;br /&gt;
cd ../..&lt;br /&gt;
chown -R bb .&lt;br /&gt;
su bb&lt;br /&gt;
cd&lt;br /&gt;
cd bbc1.9e-btf/src&lt;br /&gt;
&lt;br /&gt;
#make; make install&lt;br /&gt;
cd ..&lt;br /&gt;
./runbb.sh start&lt;br /&gt;
more BBOUT &lt;br /&gt;
(look for errors)&lt;br /&gt;
exit&lt;br /&gt;
&lt;br /&gt;
vi /etc/rc.local&lt;br /&gt;
su - bb -c &amp;quot;cd /home/bb/bbc1.9e-btf; ./runbb.sh start&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
echo &#039;/data 10.1.4.0/24(rw,no_root_squash,async,no_subtree_check)&#039; &amp;gt;&amp;gt; /etc/exports&lt;br /&gt;
&lt;br /&gt;
/etc/init.d/nfs-kernel-server restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
echo &#039;chmod o+r /var/log/messages&#039; &amp;gt;&amp;gt; /etc/cron.weekly/sysklogd&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
echo &#039;10.1.4.8                backup1&#039; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &#039;/dev/sdb1	/data  ext3  rw,noatime  0  0&#039; &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
to install digi drivers:&lt;br /&gt;
&lt;br /&gt;
wget http://ftp1.digi.com/support/driver/40002086_n.tgz&lt;br /&gt;
apt-get install linux-image-2.6.24-19-server&lt;br /&gt;
apt-get install linux-source-2.6.24 (not needed?)&lt;br /&gt;
apt-get install linux-headers-2.6.24-19-server &lt;br /&gt;
apt-get install make&lt;br /&gt;
apt-get install gcc&lt;br /&gt;
apt-get install g++	&lt;br /&gt;
apt-get install libncurses5-dev&lt;br /&gt;
apt-get install expect&lt;br /&gt;
apt-get install libdbi-perl libdate-calc-perl libdbd-mysql-perl&lt;br /&gt;
&lt;br /&gt;
cd /usr/src; ln -s linux-headers-2.6.24-19-server linux&lt;br /&gt;
# unpack the digi tarball and cd into the extracted dgrp directory, then:&lt;br /&gt;
./configure&lt;br /&gt;
make all&lt;br /&gt;
make install&lt;br /&gt;
make postinstall&lt;br /&gt;
&lt;br /&gt;
/usr/bin/dgrp_cfg_node -v -v init el 65.116.11.2 8&lt;br /&gt;
&lt;br /&gt;
apt-get install mysql-server&lt;br /&gt;
&lt;br /&gt;
mkdir /data/mysql&lt;br /&gt;
chown mysql:mysql /data/mysql&lt;br /&gt;
/etc/init.d/mysql stop&lt;br /&gt;
mv /var/lib/mysql/* /data/mysql/&lt;br /&gt;
mv /data/mysql/ib_* /var/lib/mysql/&lt;br /&gt;
vi /etc/mysql/my.cnf&lt;br /&gt;
(change datadir to /data/mysql)&lt;br /&gt;
vi /etc/apparmor.d/usr.sbin.mysqld&lt;br /&gt;
add:&lt;br /&gt;
  /data/mysql/ r,&lt;br /&gt;
  /data/mysql/** rwk,&lt;br /&gt;
Comment out:&lt;br /&gt;
#  /var/lib/mysql/ r,&lt;br /&gt;
#  /var/lib/mysql/** rwk,&lt;br /&gt;
&lt;br /&gt;
/etc/init.d/apparmor restart&lt;br /&gt;
/etc/init.d/mysql start&lt;br /&gt;
&lt;br /&gt;
tw_cli /c0/u0 set ignoreECC=on&lt;br /&gt;
tw_cli /c0/u0 set storsave=balance&lt;br /&gt;
tw_cli /c0/u0 set cache=on&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
0 0 * * * /usr/local/sbin/3wraidchk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= backup2 =&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
This machine is used for archiving data and is a backup server for colo customers. It was the primary backup location for all VPS-based customers before backup1 was installed. Only dedicated customers directly access this server to perform their backups. NOTE: the power button is broken, so the reset button (paper clip) was rewired to be the power button.&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-7&lt;br /&gt;
* OS: FreeBSD 6.1 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.3, Pub IP: 69.55.230.10 (firewalled from all but JC infrastructure @ i2b)&lt;br /&gt;
* Hardware: 16 IDE drive bays (4 columns of 4, drive 0-0 top left, drive 0-1 just to the right TODO) all hot-swap. Triple power supply. &lt;br /&gt;
* Drives: &lt;br /&gt;
**3ware 7500-8:&lt;br /&gt;
***200 GB JBOD (1 x 200G) labeled 0-0&lt;br /&gt;
***500 GB RAID5 (3 x 250G) 0-1 thru 0-3&lt;br /&gt;
***700 GB RAID5 (4 x 250G) 0-4 thru 0-7&lt;br /&gt;
**3ware 7500-8:&lt;br /&gt;
***700 GB RAID5 (4 x 250G) 1-0 thru 1-3&lt;br /&gt;
***700 GB RAID5 (4 x 250G) 1-4 thru 1-7&lt;br /&gt;
&lt;br /&gt;
All drives MUST be Western Digital IDE drives. Other brands will not fit.&lt;br /&gt;
&lt;br /&gt;
In case of an outage, nfs will hang on all connected servers until the nfs service returns. If you can&#039;t get backup2 back online, you can get nfs running elsewhere and fake backup2&#039;s MACs: priv: 00:0e:0c:59:c1:a6, pub: 00:07:e9:5b:c6:45&lt;br /&gt;
&lt;br /&gt;
To configure:&lt;br /&gt;
 ifconfig fxp0 link 00:90:27:f9:0a:d9&lt;br /&gt;
&lt;br /&gt;
== Services provided ==&lt;br /&gt;
* backup via rsync and nfs&lt;br /&gt;
* samba&lt;br /&gt;
* nfs&lt;br /&gt;
* snmp&lt;br /&gt;
* bigbrother&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
* all data is stored under 4 mount points, corresponding to the 4 large RAID5 arrays: &amp;lt;tt&amp;gt;/mnt/data1 /mnt/data2 /mnt/data3 /mnt/data4&amp;lt;/tt&amp;gt;&lt;br /&gt;
* iso images provided for customers wanting to mount an ISO as a CDROM via the IPKVM are provided via samba on this server. Images live under &amp;lt;tt&amp;gt;/mnt/data2/iso&amp;lt;/tt&amp;gt;&lt;br /&gt;
* this used to be our primary backup server, so you will see old backups from virts and jails around; these are missing customer data, though, and contain just the machine&#039;s own data&lt;br /&gt;
* this server serves as an archive for exported db data from bwdb and old flow files.&lt;br /&gt;
* isys backs up here&lt;br /&gt;
* customers are nfs-mounted under /mnt/data3/customers as file-backed md devices&lt;br /&gt;
* in &amp;lt;tt&amp;gt;/mnt/data4&amp;lt;/tt&amp;gt; there are lots of useful things used for building our vps servers, customer servers, and management scripts:&lt;br /&gt;
** &amp;lt;tt&amp;gt;/bin&amp;lt;/tt&amp;gt;: the master repository of scripts and custom binaries we use on jails and virts. Each night every virt and jail rsync&#039;s what&#039;s in here to update the local files. So any global updates to scripts would need to be made here (or will be overwritten with what&#039;s in here)&lt;br /&gt;
** &amp;lt;tt&amp;gt;/build&amp;lt;/tt&amp;gt;: files we use for setting up big brother, 3ware cli and scripts for colo&#039;s, vzcp customized setup files and so on&lt;br /&gt;
** &amp;lt;tt&amp;gt;/vzrpms&amp;lt;/tt&amp;gt;: contains the OS templates for most of the OSes we offer on vz systems&lt;br /&gt;
&lt;br /&gt;
== Cronjobs ==&lt;br /&gt;
* backs itself up nightly to nfs-mounted backup1 (mountpoint: &amp;lt;tt&amp;gt;/backup2&amp;lt;/tt&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
== Regular maintenance ==&lt;br /&gt;
*[[Routine_Maintenance#3ware|Check on health]]&lt;br /&gt;
&lt;br /&gt;
= backup3 =&lt;br /&gt;
== Summary ==&lt;br /&gt;
This machine is used for archiving data, is a backup server for colo customers, runs a samba server to make ISOs available to the IPKVMs, and allows us to connect to the digi serial multiplexer at i2b. Only dedicated customers directly access this server to perform their backups. &lt;br /&gt;
&lt;br /&gt;
* Location: i2b, cab 6&lt;br /&gt;
* OS: Ubuntu 10.04.1 server amd64&lt;br /&gt;
* Networking: Priv IP: 10.1.2.3, Pub IPs: 69.55.229.4 AND 69.55.231.2&lt;br /&gt;
* Hardware: 16 drive SATA bays (4 columns of 4, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: 5 TB (6 x 1TB) RAID5 array running on an Areca Technology Corp. ARC-1160 16-Port&lt;br /&gt;
&lt;br /&gt;
== Services provided ==&lt;br /&gt;
* backup via rsync and nfs&lt;br /&gt;
* samba&lt;br /&gt;
* nfs&lt;br /&gt;
* digi realport&lt;br /&gt;
* snmp&lt;br /&gt;
* bigbrother&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
* all data is stored under /data&lt;br /&gt;
* iso images provided for customers wanting to mount an ISO as a CDROM via the IPKVM are provided via samba on this server. Images live under &amp;lt;tt&amp;gt;/data/iso&amp;lt;/tt&amp;gt;&lt;br /&gt;
* this server serves as an archive for exported db data from bwdb and old flow files.&lt;br /&gt;
* infrastructure machines at i2b back up here&lt;br /&gt;
* customers are nfs-mounted under /data/customers as file-backed loopback devices&lt;br /&gt;
&lt;br /&gt;
== management scripts ==&lt;br /&gt;
* mkbackups&lt;br /&gt;
&lt;br /&gt;
mkbackup &amp;lt;cid&amp;gt; GB &amp;lt;ip&amp;gt;&lt;br /&gt;
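Since customer backups here are file-backed loopback devices under /data/customers, a mkbackup-style script presumably creates an image file, puts a filesystem on it, and mounts and exports it. Below is a dry-run sketch that only prints the commands it would run; all names, sizes and flags are assumptions for illustration, not the real mkbackups:

```shell
#!/bin/sh
# Dry-run sketch of what a "mkbackup <cid> <GB> <ip>" style script
# might do; commands are printed, not executed. All names and flags
# below are assumptions, not the real mkbackups script.
CID=12345          # example customer id
SIZE_GB=10         # example size in GB
IP=69.55.229.99    # example customer IP

IMG="/data/customers/$CID.img"

echo "dd if=/dev/zero of=$IMG bs=1M count=0 seek=$((SIZE_GB * 1024))"   # sparse image file
echo "mkfs.ext3 -F $IMG"                                                # filesystem on the image
echo "mkdir -p /data/customers/$CID"
echo "mount -o loop $IMG /data/customers/$CID"                          # file-backed loopback mount
echo "echo '/data/customers/$CID $IP(rw,no_root_squash,async)' >> /etc/exports"
echo "exportfs -ra"
```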
&lt;br /&gt;
== Cronjobs ==&lt;br /&gt;
 0 0 * * * /usr/local/sbin/arecaraidchk&lt;br /&gt;
RAID checks&lt;br /&gt;
&lt;br /&gt;
 35 4 * * * /usr/local/sbin/snapshot_archive&lt;br /&gt;
Rotate daily snapshots for infrastructure machine backups&lt;br /&gt;
&lt;br /&gt;
== Regular maintenance ==&lt;br /&gt;
*[[Routine_Maintenance#Areca|Check on RAID health]]&lt;br /&gt;
&lt;br /&gt;
== Build ==&lt;br /&gt;
&lt;br /&gt;
=== BIOS Config ===&lt;br /&gt;
disable quiet boot&lt;br /&gt;
&lt;br /&gt;
set to last state after power loss&lt;br /&gt;
&lt;br /&gt;
set date/time to GMT&lt;br /&gt;
&lt;br /&gt;
enable serial console output (baud rate 115200)&lt;br /&gt;
&lt;br /&gt;
=== Install OS ===&lt;br /&gt;
&amp;lt;pre&amp;gt;Ubuntu 10.04.1 amd64 (couldn&#039;t get 12.04 to load because the hardware was incompatible)&lt;br /&gt;
10G / ext3&lt;br /&gt;
2G swap&lt;br /&gt;
~ /data ext4&lt;br /&gt;
&lt;br /&gt;
Install packages:&lt;br /&gt;
openssh&lt;br /&gt;
samba&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== DNS and private IP ===&lt;br /&gt;
&lt;br /&gt;
 echo &amp;quot;nameserver 69.55.225.225&amp;quot; &amp;gt;&amp;gt; /etc/resolv.conf&lt;br /&gt;
&lt;br /&gt;
Add a second IP to eth0 and set up the private net&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/network/interfaces&lt;br /&gt;
&lt;br /&gt;
auto eth0&lt;br /&gt;
iface eth0 inet static&lt;br /&gt;
        address 69.55.229.4&lt;br /&gt;
        netmask 255.255.255.0&lt;br /&gt;
        network 69.55.229.0&lt;br /&gt;
        broadcast 69.55.229.255&lt;br /&gt;
        gateway 69.55.229.1&lt;br /&gt;
        # dns-* options are implemented by the resolvconf package, if installed&lt;br /&gt;
        dns-nameservers 69.55.229.3 66.181.0.2&lt;br /&gt;
        dns-search johncompanies.com&lt;br /&gt;
&lt;br /&gt;
auto eth0:1&lt;br /&gt;
iface eth0:1 inet static&lt;br /&gt;
        address 69.55.231.2&lt;br /&gt;
        netmask 255.255.255.0&lt;br /&gt;
        network 69.55.231.0&lt;br /&gt;
        broadcast 69.55.231.255&lt;br /&gt;
&lt;br /&gt;
auto eth1&lt;br /&gt;
iface eth1 inet static&lt;br /&gt;
        address 10.1.2.3&lt;br /&gt;
        netmask 255.255.255.0&lt;br /&gt;
        network 10.1.2.0&lt;br /&gt;
        broadcast 10.1.2.255&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Install packages ===&lt;br /&gt;
&amp;lt;pre&amp;gt;apt-get update&lt;br /&gt;
apt-get upgrade&lt;br /&gt;
apt-get install gcc&lt;br /&gt;
apt-get install libssl-dev&lt;br /&gt;
apt-get install libncurses5-dev&lt;br /&gt;
apt-get install cu&lt;br /&gt;
apt-get install unzip&lt;br /&gt;
apt-get install snmp snmpd ntp nfs-kernel-server&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== tweak grub, enable serial ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/default/grub&lt;br /&gt;
#GRUB_HIDDEN_TIMEOUT=0&lt;br /&gt;
GRUB_CMDLINE_LINUX_DEFAULT=&amp;quot;max_loop=64&amp;quot;&lt;br /&gt;
GRUB_CMDLINE_LINUX=&amp;quot;console=tty0 console=ttyS0&amp;quot;&lt;br /&gt;
update-grub&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;start on stopped rc RUNLEVEL=[2345]&lt;br /&gt;
stop on runlevel [!2345]&lt;br /&gt;
respawn&lt;br /&gt;
exec /sbin/getty -L ttyS0 38400 vt102&amp;quot; &amp;gt; /etc/init/ttyS0.conf&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== install realport (digi) driver ===&lt;br /&gt;
&lt;br /&gt;
give the digi an ip with DgIpServ.exe&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/&lt;br /&gt;
wget ftp://ftp1.digi.com/support/beta/linux/dgrp/dgrp-1.9.tgz&lt;br /&gt;
tar xzf dgrp-1.9.tgz &lt;br /&gt;
cd dgrp-1.9/&lt;br /&gt;
./configure&lt;br /&gt;
make&lt;br /&gt;
make install&lt;br /&gt;
make postinstall&lt;br /&gt;
update-rc.d dgrp_daemon defaults&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
configure ports:&lt;br /&gt;
 dgrp_cfg_node init el 10.1.2.10 16&lt;br /&gt;
&lt;br /&gt;
try connecting with:&lt;br /&gt;
 cu -l /dev/ttyel00 -s 38400&lt;br /&gt;
&lt;br /&gt;
=== shell, ntp, ssh key, hosts ===&lt;br /&gt;
&lt;br /&gt;
Shell autocompletion search:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;\&amp;quot;\e[5~\&amp;quot;: history-search-backward&amp;quot; &amp;gt;&amp;gt; ~/.inputrc&lt;br /&gt;
echo &amp;quot;\&amp;quot;\e[6~\&amp;quot;: history-search-forward&amp;quot; &amp;gt;&amp;gt; ~/.inputrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Setup ntp:&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/ntp.conf&lt;br /&gt;
server 10.1.2.1&lt;br /&gt;
server ntp.ubuntu.com&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Generate ssh keys:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /root/&lt;br /&gt;
ssh-keygen -t dsa&amp;lt;/pre&amp;gt;&lt;br /&gt;
Defaults, no password&lt;br /&gt;
&lt;br /&gt;
Setup hosts:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;69.55.230.10 backup2&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;69.55.230.11 backup1&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;10.1.2.4 bwdb2&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;10.1.2.3 backup3&amp;quot; &amp;gt;&amp;gt; /etc/hosts&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy keys to servers where we need passwordless login:&lt;br /&gt;
&amp;lt;pre&amp;gt;cat .ssh/id_dsa.pub | ssh backup2 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
cat .ssh/id_dsa.pub | ssh backup1 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039;&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Setup shell:&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /root/.bashrc&lt;br /&gt;
(add to bottom)&lt;br /&gt;
alias h=&#039;history&#039;&lt;br /&gt;
alias vi=&#039;vim&#039;&lt;br /&gt;
alias j=&#039;jobs&#039;&lt;br /&gt;
export PS1=&amp;quot;[\u@\h \w]# &amp;quot;&lt;br /&gt;
alias dr=&#039;screen -dr&#039;&lt;br /&gt;
export EDITOR=vim&lt;br /&gt;
export GREP_OPTIONS=&#039;--color=auto&#039;&lt;br /&gt;
export HISTFILESIZE=1000&lt;br /&gt;
&lt;br /&gt;
alias tip-switch-p20=&#039;cu -l ttyel00 -s 9600&#039;&lt;br /&gt;
alias tip-switch-p21=&#039;cu -l ttyel15 -s 9600&#039;&lt;br /&gt;
alias tip-switch-p22=&#039;cu -l ttyel14 -s 9600&#039;&lt;br /&gt;
alias tip-switch-p23=&#039;cu -l ttyel05 -s 9600&#039;&lt;br /&gt;
alias tip-switch-p24=&#039;cu -l ttyel06 -s 9600&#039;&lt;br /&gt;
alias tip-switch-p25=&#039;cu -l ttyel09 -s 9600&#039;&lt;br /&gt;
alias tip-switch-p26=&#039;cu -l ttyel07 -s 9600&#039;&lt;br /&gt;
alias tip-switch-p27=&#039;cu -l ttyel08 -s 9600&#039;&lt;br /&gt;
alias tip-firewall2=&#039;cu -l ttyel01 -s 115200&#039;&lt;br /&gt;
alias tip-nat2=&#039;cu -l /dev/ttyel02 -s 115200&#039;&lt;br /&gt;
alias tip-backup3=&#039;cu -l ttyel04 -s 38400&#039;&lt;br /&gt;
alias tip-bwdb2=&#039;cu -l ttyel03 -s 115200&#039;&lt;br /&gt;
alias tip-backup4=&#039;cu -l ttyel13 -s 115200&#039;&lt;br /&gt;
alias tip-jail3=&#039;cu -l ttyel11 -s 115200&#039;&lt;br /&gt;
&lt;br /&gt;
Load new shell:&lt;br /&gt;
 source /root/.bashrc&lt;br /&gt;
&lt;br /&gt;
Setup snmpd (this is only valid for a server at castle):&lt;br /&gt;
echo &#039;rocommunity  jcread 10.1.4.5&lt;br /&gt;
rocommunity  jcread 10.1.4.3&lt;br /&gt;
agentaddress 10.1.4.8:161&#039; &amp;gt; /etc/snmp/snmpd.conf&lt;br /&gt;
&lt;br /&gt;
To see which interface is which, run this from backup2:&lt;br /&gt;
&lt;br /&gt;
snmpwalk -v 1 -c jcread 10.1.4.8 interfaces&lt;br /&gt;
&lt;br /&gt;
=== nfs ===&lt;br /&gt;
&lt;br /&gt;
Allow mounts from private net:&lt;br /&gt;
 echo &#039;/data 10.1.2.0/24(rw,no_root_squash,async,no_subtree_check)&#039; &amp;gt;&amp;gt; /etc/exports&lt;br /&gt;
&lt;br /&gt;
Restart nfsd:&lt;br /&gt;
 /etc/init.d/nfs-kernel-server restart&lt;br /&gt;
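The mkbackup script later on this page appends per-customer export lines in the same format; a sketch of the string it builds (cid and ip are made-up example values):&lt;br /&gt;

```shell
# Compose the /etc/exports line used for one customer backup mount.
# cid and ip are hypothetical example values.
cid=cust1
ip=10.1.2.50
line="/data/customers/$cid $ip/32(rw,no_root_squash,async,no_subtree_check)"
echo "$line"
```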
&lt;br /&gt;
=== bb ===&lt;br /&gt;
&lt;br /&gt;
Add user, group:&lt;br /&gt;
 echo &amp;quot;bb:x:1984:1984:Big Brother:/home/bb:/bin/bash&amp;quot; &amp;gt;&amp;gt; /etc/passwd&lt;br /&gt;
 echo &amp;quot;bb:x:1984:&amp;quot; &amp;gt;&amp;gt; /etc/group&lt;br /&gt;
 pwconv&lt;br /&gt;
&lt;br /&gt;
Create home:&lt;br /&gt;
 mkdir /home/bb&lt;br /&gt;
 chown bb.bb /home/bb&lt;br /&gt;
 cd ~bb&lt;br /&gt;
&lt;br /&gt;
Copy over and install files:&lt;br /&gt;
&amp;lt;pre&amp;gt;scp backup2:/mnt/data4/build/bb/bb-linux.tar .&lt;br /&gt;
tar xf bb-linux.tar&lt;br /&gt;
cd /home/bb/bbc1.9e-btf/etc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Configure main bb server:&lt;br /&gt;
 echo &amp;quot;69.55.230.2 mail.johncompanies.com # BBPAGER BBNET BBDISPLAY smtp ssh&amp;quot; &amp;gt; /home/bb/bbc1.9e-btf/etc/bb-hosts&lt;br /&gt;
 echo &amp;quot;10.1.2.3 backup3.johncompanies.com # ssh&amp;quot; &amp;gt;&amp;gt; /home/bb/bbc1.9e-btf/etc/bb-hosts&lt;br /&gt;
&lt;br /&gt;
Configure low disk alerts:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;/:90:95&lt;br /&gt;
/var:90:95&lt;br /&gt;
/data:85:99&amp;quot; &amp;gt; /home/bb/bbc1.9e-btf/etc/bb-dftab&amp;lt;/pre&amp;gt;&lt;br /&gt;
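Each bb-dftab line is mountpoint:warn%:panic%. A minimal sketch of the comparison bb performs against df usage (illustrative only, not bb&#039;s actual code):&lt;br /&gt;

```shell
# Hypothetical severity check mirroring the mountpoint:warn:panic format.
check_df() {  # args: usage%  warn%  panic%
  if [ "$1" -ge "$3" ]; then echo panic
  elif [ "$1" -ge "$2" ]; then echo warn
  else echo ok
  fi
}
check_df 92 90 95   # prints: warn
check_df 99 90 95   # prints: panic
```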
&lt;br /&gt;
 vi /home/bb/bbc1.9e-btf/bin/bb-disk.sh&lt;br /&gt;
(remove all &amp;quot;| SORT xxxx&amp;quot; pipes, since SORT is broken)&lt;br /&gt;
&lt;br /&gt;
 chmod +r /var/log/messages&lt;br /&gt;
&lt;br /&gt;
 ./bbchkcfg.sh &lt;br /&gt;
(y to questions)&lt;br /&gt;
 ./bbchkhosts.sh&lt;br /&gt;
(ignore ssh errors)&lt;br /&gt;
&amp;lt;pre&amp;gt;cd ../..&lt;br /&gt;
chown -R bb .&lt;br /&gt;
su bb&lt;br /&gt;
cd&lt;br /&gt;
cd bbc1.9e-btf/src&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;make; make install&lt;br /&gt;
cd ..&lt;br /&gt;
./runbb.sh start&lt;br /&gt;
more BBOUT&amp;lt;/pre&amp;gt; &lt;br /&gt;
(look for errors)&lt;br /&gt;
 exit&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/rc.local&lt;br /&gt;
su - bb -c &amp;quot;cd /home/bb/bbc1.9e-btf; ./runbb.sh start&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(before the exit 0)&lt;br /&gt;
&lt;br /&gt;
 echo &#039;chmod o+r /var/log/messages&#039; &amp;gt;&amp;gt; /etc/cron.weekly/sysklogd&lt;br /&gt;
&lt;br /&gt;
Add f/w rule:&lt;br /&gt;
 ipfw add 00096 allow ip from { 69.55.229.4 or 69.55.229.3 } to 69.55.230.2 1984&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vi ~bb/bbc1.9e-btf/etc/bbdef-client.sh&lt;br /&gt;
DFWARN=199&lt;br /&gt;
DFPANIC=199&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== raid check ===&lt;br /&gt;
&lt;br /&gt;
==== 3ware ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp backup1:/usr/local/sbin/tw_cli /usr/local/sbin/tw_cli&lt;br /&gt;
scp backup1:/usr/local/sbin/checkraid.sh /usr/local/sbin/checkraid.sh&lt;br /&gt;
scp backup1:/usr/local/sbin/3wraidchk /usr/local/sbin/3wraidchk&lt;br /&gt;
vi /usr/local/sbin/checkraid.sh&lt;br /&gt;
:%s/c0/c2/g&lt;br /&gt;
&lt;br /&gt;
crontab -e&lt;br /&gt;
0 0 * * * /usr/local/sbin/3wraidchk&amp;lt;/pre&amp;gt;&lt;br /&gt;
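The :%s/c0/c2/g edit above retargets the script at this box&#039;s controller number; the same substitution can be done non-interactively (the tw_cli line below is just a sample):&lt;br /&gt;

```shell
# Non-interactive equivalent of vi's :%s/c0/c2/g, shown on a sample line.
echo 'tw_cli /c0 show' | sed 's/c0/c2/g'   # prints: tw_cli /c2 show
```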
&lt;br /&gt;
==== areca ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /tmp&lt;br /&gt;
wget http://www.areca.us/support/s_linux/cli/linuxcli_V1.10.0_120815.zip&lt;br /&gt;
unzip linuxcli_V1.10.0_120815.zip&lt;br /&gt;
cp linuxcli_V1.10.0_120815/x86_64/cli64 /usr/local/sbin/&lt;br /&gt;
chmod 0700 /usr/local/sbin/cli64&lt;br /&gt;
cli64 rsf info&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;scp backup2:/data4/bin/arecaraidchk /usr/local/sbin&lt;br /&gt;
scp backup1:/usr/local/sbin/Sendmail.pm /usr/local/sbin&lt;br /&gt;
&lt;br /&gt;
crontab -e&lt;br /&gt;
0 0 * * * /usr/local/sbin/arecaraidchk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cat &amp;gt; /root/verify.sh&lt;br /&gt;
cli64 vsf info&lt;br /&gt;
cli64 rsf info&lt;br /&gt;
cli64 disk info&lt;br /&gt;
cli64 event info&lt;br /&gt;
echo press enter when ready to run verify ; read x&lt;br /&gt;
&lt;br /&gt;
cli64 vsf check vol=1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== misc binaries ===&lt;br /&gt;
&lt;br /&gt;
 scp backup1:/usr/local/sbin/snapshot_archive /usr/local/sbin/snapshot_archive&lt;br /&gt;
 vi /usr/local/sbin/snapshot_archive&lt;br /&gt;
(remove entries)&lt;br /&gt;
&lt;br /&gt;
 crontab -e&lt;br /&gt;
 35 4 * * * /usr/local/sbin/snapshot_archive&lt;br /&gt;
&lt;br /&gt;
 scp backup1:/usr/local/sbin/pagedave /usr/local/sbin/pagedave&lt;br /&gt;
 scp backup1:/usr/local/sbin/taskdone /usr/local/sbin/taskdone&lt;br /&gt;
&lt;br /&gt;
Since installing /bin/mail requires all sorts of packages (lame), we write a simple replacement here; it can only mail johncompanies.com addresses unless you add relaying for this host:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;gt; /bin/mail&lt;br /&gt;
#!/usr/bin/perl&lt;br /&gt;
use strict;&lt;br /&gt;
use warnings;&lt;br /&gt;
&lt;br /&gt;
use lib &#039;/usr/local/sbin&#039;;&lt;br /&gt;
use Sendmail qw(sendmail);&lt;br /&gt;
&lt;br /&gt;
my $sub = $ARGV[1];&lt;br /&gt;
my $to = $ARGV[2];&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
my %mail = (&lt;br /&gt;
   To      =&amp;gt; $to,&lt;br /&gt;
   From    =&amp;gt; $to,&lt;br /&gt;
   Subject =&amp;gt; $sub,&lt;br /&gt;
   Message =&amp;gt; &#039;&#039;,&lt;br /&gt;
   smtp    =&amp;gt; &#039;mail.johncompanies.com&#039;&lt;br /&gt;
);&lt;br /&gt;
sendmail(%mail) || print &amp;quot;Error: $Sendmail::error&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 chmod 0700 /bin/mail&lt;br /&gt;
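The wrapper assumes callers invoke it as mail -s subject address, so Perl&#039;s @ARGV slots are [0]=&#039;-s&#039;, [1]=subject, [2]=address. A quick shell illustration of that argument layout (values are made up):&lt;br /&gt;

```shell
# Simulate the argv layout seen by /bin/mail when invoked as
# "mail -s <subject> <address>"; $1/$2/$3 correspond to Perl's ARGV[0..2].
set -- -s 'disk full on backup3' root@johncompanies.com
echo "subject=$2 to=$3"
```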
&lt;br /&gt;
=== mkbackup ===&lt;br /&gt;
&lt;br /&gt;
 mkdir /data/customers&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cat &amp;gt; /usr/local/sbin/mkbackup&lt;br /&gt;
#!/bin/sh&lt;br /&gt;
&lt;br /&gt;
if test $1; then&lt;br /&gt;
  cid=$1&lt;br /&gt;
else&lt;br /&gt;
  echo &amp;quot;ERROR: Usage: mkbackup cid GB ip  Terminating.&amp;quot;&lt;br /&gt;
  exit&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if test $2; then&lt;br /&gt;
  gb=$2&lt;br /&gt;
else&lt;br /&gt;
  echo &amp;quot;ERROR: Usage: mkbackup cid GB ip  Terminating.&amp;quot;&lt;br /&gt;
  exit&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if test $3; then&lt;br /&gt;
  ip=$3&lt;br /&gt;
else&lt;br /&gt;
  echo &amp;quot;ERROR: Usage: mkbackup cid GB ip  Terminating.&amp;quot;&lt;br /&gt;
  exit&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
if test -e /data/customers/${cid}-file; then&lt;br /&gt;
  echo &amp;quot;ERROR: /data/customers/${cid}-file exists&amp;quot;&lt;br /&gt;
  exit&lt;br /&gt;
else&lt;br /&gt;
  echo &amp;quot;touch /data/customers/${cid}-file&amp;quot;&lt;br /&gt;
  touch /data/customers/${cid}-file&lt;br /&gt;
  count=`echo $gb|awk &#039;{print $1*1000}&#039;`&lt;br /&gt;
  echo &amp;quot;dd if=/dev/zero of=/data/customers/${cid}-file bs=1024K count=$count&amp;quot;&lt;br /&gt;
  dd if=/dev/zero of=/data/customers/${cid}-file bs=1024K count=$count&lt;br /&gt;
  echo &amp;quot;/sbin/mkfs -t ext3 -F -j -q /data/customers/${cid}-file&amp;quot;&lt;br /&gt;
  /sbin/mkfs -t ext3 -F -j -q /data/customers/${cid}-file&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if test -e /data/customers/$cid; then&lt;br /&gt;
  echo &amp;quot;ERROR: /data/customers/$cid exists&amp;quot;&lt;br /&gt;
  exit&lt;br /&gt;
else&lt;br /&gt;
  echo &amp;quot;mkdir /data/customers/${cid}&amp;quot;&lt;br /&gt;
  mkdir /data/customers/${cid}&lt;br /&gt;
  echo &amp;quot;mount -o loop /data/customers/${cid}-file /data/customers/$cid&amp;quot;&lt;br /&gt;
  mount -o loop /data/customers/${cid}-file /data/customers/$cid&lt;br /&gt;
  df -h /data/customers/$cid&lt;br /&gt;
&lt;br /&gt;
  echo &amp;quot;fsck -y /data/customers/${cid}-file&amp;quot; &amp;gt;&amp;gt; /etc/nfs_backup_mounts.sh&lt;br /&gt;
  echo &amp;quot;mount -o loop /data/customers/${cid}-file /data/customers/$cid&amp;quot; &amp;gt;&amp;gt; /etc/nfs_backup_mounts.sh&lt;br /&gt;
  echo &amp;quot;&amp;quot; &amp;gt;&amp;gt; /etc/nfs_backup_mounts.sh&lt;br /&gt;
&lt;br /&gt;
  echo &amp;quot;/data/customers/$cid $ip/32(rw,no_root_squash,async,no_subtree_check)&amp;quot; &amp;gt;&amp;gt; /etc/exports&lt;br /&gt;
  /etc/init.d/nfs-kernel-server restart&lt;br /&gt;
  tail /var/log/messages&lt;br /&gt;
fi&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 chmod 0700 /usr/local/sbin/mkbackup&lt;br /&gt;
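A note on the size arithmetic in mkbackup: with bs=1024K and count=GB*1000, each &amp;quot;GB&amp;quot; is really 1000 MiB. A quick check of the numbers (5 is an example size):&lt;br /&gt;

```shell
# Reproduce mkbackup's image-size calculation for a hypothetical 5 "GB".
gb=5
count=$(echo "$gb" | awk '{print $1*1000}')
echo "$count"                      # prints: 5000 (blocks of 1024K)
echo $((count * 1024 * 1024))      # total bytes: 5242880000
```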
&lt;br /&gt;
 vi /etc/rc.local&lt;br /&gt;
add:&lt;br /&gt;
 /etc/nfs_backup_mounts.sh&lt;br /&gt;
&lt;br /&gt;
=== samba ===&lt;br /&gt;
&lt;br /&gt;
 apt-get install samba&lt;br /&gt;
&lt;br /&gt;
 vi /etc/samba/smb.conf&lt;br /&gt;
&lt;br /&gt;
; comment out any mounts, add:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[data]&lt;br /&gt;
   read only = yes&lt;br /&gt;
   locking = no&lt;br /&gt;
   path = /data/iso&lt;br /&gt;
   guest ok = yes&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 /etc/init.d/smbd restart&lt;br /&gt;
&lt;br /&gt;
 mkdir /data/iso&lt;br /&gt;
&lt;br /&gt;
Bring over some stuff from backup2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /data/iso&lt;br /&gt;
scp backup2:/d2/iso/3wfirmware.iso .&lt;br /&gt;
scp backup2:/d2/iso/MD5SUMS .&lt;br /&gt;
scp backup2:/d2/iso/bootimg.iso .&lt;br /&gt;
scp backup2:/d2/iso/systemrescuecd-x86-0.2.19.iso .&lt;br /&gt;
scp backup2:/d2/iso/win98bootcd.iso .&lt;br /&gt;
scp backup2:/d2/iso/acronis_bootdisk.iso .&lt;br /&gt;
scp backup2:/d2/iso/memtest86-3.2.iso .&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Moving from one server to another ===&lt;br /&gt;
&lt;br /&gt;
Here are the steps you would take to move settings and data from one server to a new backup server:&lt;br /&gt;
&lt;br /&gt;
* rsync over all of /data/customers (we copy only the *-file images first; otherwise rsync would copy both the image files AND the data visible in the mountpoints)&lt;br /&gt;
 rsync -av --progress --ignore-times *-file root@10.1.2.33:/data/customers/&lt;br /&gt;
after unmounting all the customer filesystems, copy over the (empty) directories separately:&lt;br /&gt;
 for f in `find .  -type d`; do rsync -av $f root@69.55.229.25:/data/customers; done&lt;br /&gt;
&lt;br /&gt;
* copy mount script&lt;br /&gt;
 [root@backup3 /data/customers]# scp /etc/nfs_backup_mounts.sh root@69.55.229.25:/etc/nfs_backup_mounts.sh&lt;br /&gt;
&lt;br /&gt;
* copy rc.local&lt;br /&gt;
 [root@backup3 /data/customers]# scp /etc/rc.local root@69.55.229.25:/etc/rc.local&lt;br /&gt;
&lt;br /&gt;
* copy /etc/exports&lt;br /&gt;
 [root@backup3 /data/customers]# scp /etc/exports root@69.55.229.25:/etc/exports&lt;br /&gt;
&lt;br /&gt;
* edit /etc/hostname on both machines (set current to oldbackup3)&lt;br /&gt;
&lt;br /&gt;
* edit /etc/network/interfaces (swap IPs). &lt;br /&gt;
&lt;br /&gt;
* stop the mount script from running on both the old and new servers so it doesn&#039;t kick in right away at reboot:&lt;br /&gt;
 chmod 000 /etc/nfs_backup_mounts.sh&lt;br /&gt;
&lt;br /&gt;
* reboot both servers @ same time&lt;br /&gt;
&lt;br /&gt;
* check everything out&lt;br /&gt;
&lt;br /&gt;
* run /etc/nfs_backup_mounts.sh on new server&lt;br /&gt;
&lt;br /&gt;
* if switch port changed update mrtg to reflect correct port pub nic is on (on p20):&lt;br /&gt;
 vi /usr/local/www/mgmt/mrtg/mrtg1.cfg&lt;br /&gt;
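The copy order above (image files first, then the empty mountpoint directories after unmounting) can be simulated locally; src/dst below are throwaway stand-ins for /data/customers on the old and new servers:&lt;br /&gt;

```shell
# Local dry run of the data-move ordering; src/dst stand in for the old
# and new servers' /data/customers directories.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir "$src/cust1"
dd if=/dev/zero of="$src/cust1-file" bs=1k count=1 2>/dev/null
# 1) copy the *-file images (rsync over ssh on the real servers)
cp "$src"/*-file "$dst"/
# 2) after unmounting, recreate the (empty) customer directories
for f in $(cd "$src" && find . -mindepth 1 -maxdepth 1 -type d); do
  mkdir -p "$dst/$f"
done
ls "$dst"
```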
&lt;br /&gt;
= backup4 =&lt;br /&gt;
== Summary ==&lt;br /&gt;
This machine is used for archiving data, is a backup server for colo customers, and runs a samba server to make ISOs available to the IPKVMs.  Only FreeBSD virt customers access this server directly to perform their backups. &lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-7&lt;br /&gt;
* OS: FreeNAS 9.3 (FreeBSD 9.3)&lt;br /&gt;
* Networking: Priv IP: 10.1.2.9/24 AND 10.1.7.9/24,  Pub IPs: 69.55.230.6/24 &lt;br /&gt;
* Hardware: JC-08014&lt;br /&gt;
            Intel S5000VSA Motherboard&lt;br /&gt;
            1 x Intel Xeon E5410  @ 2.33GHz CPU&lt;br /&gt;
            3ware 9690SA-8I RAID Card w BBU&lt;br /&gt;
            16GB RAM&lt;br /&gt;
            Dual power supply.&lt;br /&gt;
* Drives: 7 TB usable (6 x 2TB) ZFS RAIDZ2 array running on JBOD;&lt;br /&gt;
       1 x 128 GB SSD system drive and 6 SATA drive bays (3 columns of 2, drive 0 top left, drive 1 just below), all hot-swap.&lt;br /&gt;
&lt;br /&gt;
* GUI management at http://backup4.johncompanies.com&lt;br /&gt;
&lt;br /&gt;
== Services provided ==&lt;br /&gt;
* backup via rsync and nfs&lt;br /&gt;
* samba&lt;br /&gt;
* nfs&lt;br /&gt;
* snmp?&lt;br /&gt;
* bigbrother?&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
* all data is stored under /data&lt;br /&gt;
* ISO images for customers wanting to mount an ISO as a CDROM via the IPKVM are served via samba on this server. Images live under &amp;lt;tt&amp;gt;/data/iso&amp;lt;/tt&amp;gt;  ??&lt;br /&gt;
* this server serves as an archive for exported db data from bwdb and old flow files. ??&lt;br /&gt;
* customers are nfs-mounted under /data/users (/mnt/zfs/users) as zfs ?&lt;br /&gt;
&lt;br /&gt;
== management scripts ==&lt;br /&gt;
* mkbackups?&lt;br /&gt;
&lt;br /&gt;
mkbackup &amp;lt;cid&amp;gt; GB &amp;lt;ip&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Cronjobs ==&lt;br /&gt;
 0 0 * * * /usr/local/sbin/arecaraidchk  &lt;br /&gt;
RAID checks ?&lt;br /&gt;
&lt;br /&gt;
 35 4 * * * /usr/local/sbin/snapshot_archive&lt;br /&gt;
Rotate daily snapshots for infrastructure machine backups&lt;br /&gt;
&lt;br /&gt;
 00 15 * * * /usr/local/sbin/snapshot_rotate&lt;br /&gt;
Rotate daily snapshots for customer machine backups&lt;br /&gt;
&lt;br /&gt;
== Regular maintenance ==&lt;br /&gt;
*[[Routine_Maintenance#A|Check on RAID health]]&lt;br /&gt;
&lt;br /&gt;
== Build ==&lt;br /&gt;
&lt;br /&gt;
= console = &lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
This box&#039;s only purpose is to serve as a means to connect to the digi serial multiplexer boxes at castle. Connect to it using the blue (cisco) ribbon cable with the beige RJ-45 to serial connector, 9600 8N1.&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-8&lt;br /&gt;
* OS: SunOS 5.8 (solaris)&lt;br /&gt;
* Networking: Priv IP: 10.1.4.4&lt;br /&gt;
* Hardware: Sun Netra&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To connect to consoles, ssh in as user &#039;console&#039; and use the &amp;lt;tt&amp;gt;tip&amp;lt;/tt&amp;gt; command to connect to devices listed in &amp;lt;tt&amp;gt;/etc/remote&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
i.e.&lt;br /&gt;
 tip switch-p1&lt;br /&gt;
 tip jail1&lt;br /&gt;
&lt;br /&gt;
== Configuring digi/ports == &lt;br /&gt;
&lt;br /&gt;
=== /etc/remote ===&lt;br /&gt;
This file holds the configuration mapping ports to the custom names we use with the tip command to connect to various ports on the digi switches.&lt;br /&gt;
&lt;br /&gt;
We have 2 digi&#039;s at castle we connect to:&lt;br /&gt;
&lt;br /&gt;
 #3-7 10.1.4.10&lt;br /&gt;
 virt15:dv=/dev/dty/CO001s:br#38400:el=^C^S^Q^U^D:ie=%$:oe=^D:&lt;br /&gt;
 virt13:dv=/dev/dty/CO002s:br#115200:el=^C^S^Q^U^D:ie=%$:oe=^D:hf:&lt;br /&gt;
&lt;br /&gt;
and &lt;br /&gt;
&lt;br /&gt;
 #3-6 10.1.4.11&lt;br /&gt;
 jail4:dv=/dev/dty/CP001s:br#9600:el=^C^S^Q^U^D:ie=%$:oe=^D:hf:&lt;br /&gt;
 jail16:dv=/dev/dty/CP002s:br#9600:el=^C^S^Q^U^D:ie=%$:oe=^D:hf:&lt;br /&gt;
&lt;br /&gt;
The only things you need to edit are the name at the start of each line (i.e. &amp;lt;tt&amp;gt;jail4&amp;lt;/tt&amp;gt;) and the speed (i.e. &amp;lt;tt&amp;gt;9600&amp;lt;/tt&amp;gt;). You can tell which port on which digi each line corresponds to from the device name: &amp;lt;tt&amp;gt;CO001s&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CP001s&amp;lt;/tt&amp;gt; are port 1 on the 10.1.4.10 and 10.1.4.11 digi respectively, &amp;lt;tt&amp;gt;CO002s&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CP002s&amp;lt;/tt&amp;gt; port 2, and so on.&lt;br /&gt;
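A tiny helper illustrating that naming scheme (hypothetical, not an existing tool here): it maps a /etc/remote device path to the digi IP and port number, following the CO/CP listings above:&lt;br /&gt;

```shell
# Hypothetical decoder for /etc/remote device paths: per the listings
# above, CO lines live on the 10.1.4.10 digi (#3-7), CP lines on
# 10.1.4.11 (#3-6); the trailing number is the port.
decode_port() {
  case "$1" in
    *CO*) digi=10.1.4.10 ;;
    *CP*) digi=10.1.4.11 ;;
  esac
  port=$(expr "$1" : '.*C[OP]0*\([1-9][0-9]*\)s')
  echo "$digi port $port"
}
decode_port /dev/dty/CO002s   # prints: 10.1.4.10 port 2
decode_port /dev/dty/CP015s   # prints: 10.1.4.11 port 15
```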
&lt;br /&gt;
=== drpadmin ===&lt;br /&gt;
The tool used to attach a device to a digi box is drpadmin:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;bash-2.03$ su&lt;br /&gt;
Password:&lt;br /&gt;
# drpadmin&lt;br /&gt;
&lt;br /&gt;
Please select an option (a)dd (d)elete (s)how (r)eset (q)uit : s&lt;br /&gt;
0       10.1.4.10       32      CO      771     never   1027&lt;br /&gt;
1       10.1.4.11       32      CP      771     never   1027&lt;br /&gt;
2       65.116.11.2     8       el      771     never   1027&lt;br /&gt;
&lt;br /&gt;
Please select an option (a)dd (d)elete (s)how (r)eset (q)uit :&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Use those commands above to modify the devices available.&lt;br /&gt;
&lt;br /&gt;
== Switching IP/hostname ==&lt;br /&gt;
&lt;br /&gt;
Edit:&lt;br /&gt;
 /etc/defaultrouter&lt;br /&gt;
 /etc/hosts&lt;br /&gt;
 /etc/hostname.hme0&lt;br /&gt;
 /etc/nodename&lt;br /&gt;
 You may also need to run: # ifconfig hme0 10.1.4.4 up&lt;br /&gt;
&lt;br /&gt;
= devweb =&lt;br /&gt;
&lt;br /&gt;
We do web development on devweb.johncompanies.com&lt;br /&gt;
&lt;br /&gt;
Currently this is a jail running on jail17 / 69.55.230.8&lt;br /&gt;
&lt;br /&gt;
If the jail is restarted, you will need to manually restart the web service with:&lt;br /&gt;
 httpsdctl restart&lt;br /&gt;
&lt;br /&gt;
All website development work should be done here first. It works exactly like and is setup like our [[Management_System_/_Public_Website_/_Signup_/_Account_Manager|main site]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= firewall2 =&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
This machine is the primary firewall for the entire network at i2b.  firewall3 is a hot standby replacement for &lt;br /&gt;
firewall2.  firewall2 and firewall3 must never be connected at the same time, since they use the same internal&lt;br /&gt;
and external IP addresses.&lt;br /&gt;
&lt;br /&gt;
* Location: i2b, cab 6&lt;br /&gt;
* OS: FreeBSD 6.4 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.2.2, Pub IPs: 66.181.18.3 (external), 69.55.229.1 &amp;amp; 69.55.231.1 (internal). It has 3 network connections (2 onboard, 1 PCI) connecting to the external, internal and private networks. TODO: describe NIC location/orientation&lt;br /&gt;
&lt;br /&gt;
* Hardware: 2 SCSI SCA drive bays (2 columns of 3, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: 73 GB (2 x 73GB) RAID1 array running on an LSI MegaRAID SCSI 320 PCI RAID card.&lt;br /&gt;
&lt;br /&gt;
== Services Provided ==&lt;br /&gt;
* firewall (ipfw)&lt;br /&gt;
* bigbrother for customer machines&lt;br /&gt;
&lt;br /&gt;
== Firewall Rule Configuration ==&lt;br /&gt;
&lt;br /&gt;
See [[FreeBSD_Reference#Firewall_Rule_Configuration|Firewall Rule Configuration]] for more discussion on how to actually manipulate firewall rules.&lt;br /&gt;
&lt;br /&gt;
== Disaster Recovery ==&lt;br /&gt;
&lt;br /&gt;
TODO: need backup f/w and instructions on how to move cables.&lt;br /&gt;
&lt;br /&gt;
Here&#039;s what you need to put in /etc/rc.conf to get a firewall going (as far as routes and IPs)&lt;br /&gt;
&lt;br /&gt;
TODO&lt;br /&gt;
&lt;br /&gt;
Here&#039;s the config on the live firewall:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;kern_securelevel_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
portmap_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
sendmail_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
usbd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
gateway_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
xntpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_client_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_reserved_port_only=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_flags=&amp;quot;-wW -a 10.1.2.2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
fsck_y_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
background_fsck=&amp;quot;NO&amp;quot;&lt;br /&gt;
&lt;br /&gt;
defaultrouter=&amp;quot;66.181.18.2&amp;quot;&lt;br /&gt;
hostname=&amp;quot;firewall2.johncompanies.com&amp;quot;&lt;br /&gt;
ifconfig_bge0=&amp;quot;inet 66.181.18.3  netmask 255.255.255.224&amp;quot;&lt;br /&gt;
ifconfig_bge1=&amp;quot;inet 69.55.229.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_bge1_alias0=&amp;quot;inet 69.55.231.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_bge1_alias1=&amp;quot;inet 65.50.228.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_bge1_alias2=&amp;quot;inet 65.50.229.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_bge1_alias3=&amp;quot;inet 65.50.230.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_bge1_alias4=&amp;quot;inet 65.50.231.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_bge1_alias5=&amp;quot;inet 65.50.232.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_bge1_alias6=&amp;quot;inet 65.50.233.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_bge1_alias7=&amp;quot;inet 65.50.234.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_bge1_alias8=&amp;quot;inet 65.50.235.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_fxp0=&amp;quot;inet 10.1.2.2 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
sshd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
usbd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Cronjobs ==&lt;br /&gt;
30 3 * * * /usr/local/etc/rsync.backup&lt;br /&gt;
Backup to backup3&lt;br /&gt;
&lt;br /&gt;
 0 0 1 * * /sbin/ipfw zero&lt;br /&gt;
 0 0 1 * * /sbin/ipfw del 3 4 5&lt;br /&gt;
Reset counters and remove pipe rules on the 1st of the month. Be careful when setting up a rule numbered 3, 4 or 5 that is not a temporary traffic cap: those rule numbers are deleted monthly. &lt;br /&gt;
&lt;br /&gt;
 59 23 30 * * /sbin/ipfw show &amp;gt; /tmp/ipfw_count&lt;br /&gt;
 3 0 30 * * /sbin/ipfw show &amp;gt; /tmp/ipfw_count&lt;br /&gt;
Capture counts periodically&lt;br /&gt;
&lt;br /&gt;
 0 3 * * * /etc/makefwrules.pl; /etc/makepiperules.pl;&lt;br /&gt;
This script will create &amp;lt;tt&amp;gt;/etc/firewall.sh&amp;lt;/tt&amp;gt; which contains all the firewall and pipe rules in place at the time the script was run.&lt;br /&gt;
&lt;br /&gt;
 */5 * * * * /usr/local/sbin/lsiraidchk&lt;br /&gt;
Checking the health of the RAID array&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== DOS attacks ==&lt;br /&gt;
&lt;br /&gt;
See [[FreeBSD_Reference#Handling_a_DoS_attack|Handling a DoS attack]] regarding how to handle a DOS attack.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== build ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;partition map:&lt;br /&gt;
/ 58g&lt;br /&gt;
swap 4g&lt;br /&gt;
/var 512m&lt;br /&gt;
/tmp 512m&lt;br /&gt;
/usr 5.5g&lt;br /&gt;
&lt;br /&gt;
4. edit /etc/make.conf &lt;br /&gt;
echo &amp;quot;WITHOUT_X11=yes \&lt;br /&gt;
KERNCONF=firewall2 \&lt;br /&gt;
BOOT_COMCONSOLE_SPEED=115200&amp;quot; &amp;gt;&amp;gt; /etc/make.conf&lt;br /&gt;
&lt;br /&gt;
5. add settings to /boot/loader.conf and /boot.config&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;-Dh&amp;quot; &amp;gt;&amp;gt; /boot.config&lt;br /&gt;
&lt;br /&gt;
echo &#039;console=&amp;quot;comconsole,vidconsole&amp;quot; \&lt;br /&gt;
boot_multicons=&amp;quot;YES&amp;quot; \&lt;br /&gt;
boot_serial=&amp;quot;YES&amp;quot; \&lt;br /&gt;
comconsole_speed=&amp;quot;115200&amp;quot;&#039; &amp;gt;&amp;gt; /boot/loader.conf&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. turn off all ttyv&#039;s except 0 and 1 in /etc/ttys&lt;br /&gt;
also turn on ttyd0, change type to vt100:&lt;br /&gt;
vi /etc/ttys&lt;br /&gt;
&lt;br /&gt;
ttyv2   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv3   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv4   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv5   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv6   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv7   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
# Serial terminals&lt;br /&gt;
# The &#039;dialup&#039; keyword identifies dialin lines to login, fingerd etc.&lt;br /&gt;
ttyd0   &amp;quot;/usr/libexec/getty std.9600&amp;quot;   vt100   on  secure&lt;br /&gt;
&lt;br /&gt;
kill -1 1&lt;br /&gt;
&lt;br /&gt;
on console server:&lt;br /&gt;
vi /etc/remote&lt;br /&gt;
(rename the port entry, e.g. to jail8, depending on which digi and port it is plugged into)&lt;br /&gt;
test serial console&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. populate hosts&lt;br /&gt;
echo &amp;quot;69.55.230.10 backup2&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;69.55.230.11 backup1&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;10.1.2.3 backup3&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
&lt;br /&gt;
8. put key in authorized_keys on backup3&lt;br /&gt;
cd&lt;br /&gt;
ssh-keygen -t dsa -b 1024 &lt;br /&gt;
(default location, leave password blank)&lt;br /&gt;
&lt;br /&gt;
Punch a hole in firewall1 to allow traffic to backup servers @ castle:&lt;br /&gt;
&lt;br /&gt;
ipfw add 99 allow ip from 66.181.18.0/27 to 69.55.230.10 22&lt;br /&gt;
ipfw add 95 allow ip from 66.181.18.0/27 to 69.55.230.11 22&lt;br /&gt;
&lt;br /&gt;
cat /root/.ssh/id_dsa.pub | ssh backup3 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
cat /root/.ssh/id_dsa.pub | ssh backup2 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
cat /root/.ssh/id_dsa.pub | ssh backup1 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
&lt;br /&gt;
confirm that you can ssh to backup3, backup2 and backup1 without getting a password prompt&lt;br /&gt;
&lt;br /&gt;
ssh backup3 hostname&lt;br /&gt;
&lt;br /&gt;
ssh backup2 hostname&lt;br /&gt;
&lt;br /&gt;
ssh backup1 hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. edit root&#039;s path and login script:&lt;br /&gt;
vi /root/.cshrc&lt;br /&gt;
&lt;br /&gt;
Change alias entries (add G):&lt;br /&gt;
alias la        ls -aG&lt;br /&gt;
alias lf        ls -FAG&lt;br /&gt;
alias ll        ls -lAG&lt;br /&gt;
alias ls        ls -AG&lt;br /&gt;
alias mbm       mb mount&lt;br /&gt;
alias mbu       mb umount&lt;br /&gt;
&lt;br /&gt;
and alter the prompt, set the following:&lt;br /&gt;
set prompt = &amp;quot;`/bin/hostname -s` %/# &amp;quot;&lt;br /&gt;
&lt;br /&gt;
11. install cvsup&lt;br /&gt;
cd /usr/ports/net/cvsup-without-gui &lt;br /&gt;
make install clean; rehash; mail -s &#039;cvs installed&#039; dave.boodman@vtext.com &amp;lt; /dev/null&lt;br /&gt;
&lt;br /&gt;
12. get latest sources for this release:&lt;br /&gt;
cd /usr/src &lt;br /&gt;
echo &amp;quot;*default host=cvsup4.freebsd.org\&lt;br /&gt;
*default base=/usr\&lt;br /&gt;
*default prefix=/usr\&lt;br /&gt;
*default release=cvs tag=RELENG_6_4\&lt;br /&gt;
*default delete use-rel-suffix\&lt;br /&gt;
*default compress\&lt;br /&gt;
src-all&amp;quot; &amp;gt; sup&lt;br /&gt;
&lt;br /&gt;
cvsup sup ; mail -s &#039;cvs sup done&#039; dave.boodman@vtext.com &amp;lt; /dev/null&lt;br /&gt;
&lt;br /&gt;
13. configure new kernel. &lt;br /&gt;
&lt;br /&gt;
cd /usr/src/sys/i386/conf &lt;br /&gt;
scp backup2:/mnt/data4/build/freebsd/firewall2-6.4 ./firewall2&lt;br /&gt;
&lt;br /&gt;
15. build, install kernel and world&lt;br /&gt;
&lt;br /&gt;
cd /boot&lt;br /&gt;
&lt;br /&gt;
mv kernel kernel.GENERIC&lt;br /&gt;
cd /usr/src&lt;br /&gt;
make buildkernel installkernel&lt;br /&gt;
&lt;br /&gt;
make buildworld ; mail -s &#039;buildworld done&#039; dave.boodman@vtext.com &amp;lt; /dev/null&lt;br /&gt;
(2450: 1:56min, supermicro: 59mins, 2950: 38mins)&lt;br /&gt;
make installworld &lt;br /&gt;
(2450: 3min, supermicro: 1min, 2950: :34)&lt;br /&gt;
mergemaster -i&lt;br /&gt;
&lt;br /&gt;
17. populate /etc/rc.conf with IPs and NFS settings&lt;br /&gt;
vi /etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
kern_securelevel_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
portmap_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
sendmail_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
usbd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
gateway_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
xntpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_client_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_reserved_port_only=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_flags=&amp;quot;-wW -a 10.1.2.2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ifconfig_bce1=&amp;quot;inet 10.1.2.2 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
fsck_y_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
background_fsck=&amp;quot;NO&amp;quot;&lt;br /&gt;
&lt;br /&gt;
defaultrouter=&amp;quot;66.181.18.2&amp;quot;&lt;br /&gt;
hostname=&amp;quot;firewall2.johncompanies.com&amp;quot;&lt;br /&gt;
ifconfig_bge0=&amp;quot;inet 66.181.18.3  netmask 255.255.255.224&amp;quot;&lt;br /&gt;
ifconfig_bge1=&amp;quot;inet 69.55.229.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_fxp0=&amp;quot;inet 10.1.2.2 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
sshd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
usbd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
20. reboot. Confirm new kernel is loaded&lt;br /&gt;
&lt;br /&gt;
uname -a&lt;br /&gt;
&lt;br /&gt;
21. update ports:&lt;br /&gt;
cd /usr/ports&lt;br /&gt;
echo &amp;quot;*default host=cvsup4.FreeBSD.org\&lt;br /&gt;
*default base=/usr\&lt;br /&gt;
*default prefix=/usr\&lt;br /&gt;
*default release=cvs tag=RELENG_6_4\&lt;br /&gt;
*default delete use-rel-suffix\&lt;br /&gt;
*default compress\&lt;br /&gt;
ports-all tag=.&amp;quot; &amp;gt; sup&lt;br /&gt;
&lt;br /&gt;
cvsup sup; mail -s &#039;cvs sup ports done&#039; dave.boodman@vtext.com &amp;lt; /dev/null&lt;br /&gt;
&lt;br /&gt;
22. Install raid mgmt tool&lt;br /&gt;
&lt;br /&gt;
# linux base&lt;br /&gt;
cd /usr/ports/devel/libtool22&lt;br /&gt;
make install base&lt;br /&gt;
&lt;br /&gt;
cd /usr/ports/emulators/linux_base-fc4&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
#linux-megamgr-5.20&lt;br /&gt;
cd /usr/ports/sysutils/linux-megamgr&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
# megarc-1.51&lt;br /&gt;
cd /usr/ports/sysutils/megarc&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
Test:&lt;br /&gt;
rehash; megarc -ldInfo -a0 -l0&lt;br /&gt;
&lt;br /&gt;
23. install rsync from ports&lt;br /&gt;
cd /usr/ports/net/rsync&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
choose default options&lt;br /&gt;
&lt;br /&gt;
25. install bb client&lt;br /&gt;
adduser&lt;br /&gt;
Username: bb&lt;br /&gt;
Full name: bb&lt;br /&gt;
Uid (Leave empty for default): 1984&lt;br /&gt;
Login group [bb]:&lt;br /&gt;
Login group is bb. Invite bb into other groups? []:&lt;br /&gt;
Login class [default]:&lt;br /&gt;
Shell (sh csh tcsh nologin) [sh]: &lt;br /&gt;
Home directory [/home/bb]:&lt;br /&gt;
Use password-based authentication? [yes]:&lt;br /&gt;
Use an empty password? (yes/no) [no]:&lt;br /&gt;
Use a random password? (yes/no) [no]: yes&lt;br /&gt;
Lock out the account after creation? [no]:&lt;br /&gt;
Username   : bb&lt;br /&gt;
Password   : &amp;lt;random&amp;gt;&lt;br /&gt;
Full Name  : bb&lt;br /&gt;
Uid        : 1984&lt;br /&gt;
Class      :&lt;br /&gt;
Groups     : bb&lt;br /&gt;
Home       : /home/bb&lt;br /&gt;
Shell      : /bin/sh&lt;br /&gt;
Locked     : no&lt;br /&gt;
OK? (yes/no): yes&lt;br /&gt;
&lt;br /&gt;
cd /usr/home/bb&lt;br /&gt;
scp backup2:/mnt/data4/build/bb/bb-freebsd.tar .&lt;br /&gt;
tar xvf bb-freebsd.tar&lt;br /&gt;
&lt;br /&gt;
edit /home/bb/bbc1.9e-btf/etc/bb-hosts with something like:&lt;br /&gt;
echo &amp;quot;69.55.230.2 mail.johncompanies.com # BBPAGER BBNET BBDISPLAY smtp ssh \&lt;br /&gt;
10.1.2.1 firewall2.johncompanies.com # ssh&amp;quot; &amp;gt; /home/bb/bbc1.9e-btf/etc/bb-hosts&lt;br /&gt;
&lt;br /&gt;
vi /home/bb/bbc1.9e-btf/ext/openfiles &lt;br /&gt;
MACHINE=&amp;quot;firewall2,johncompanies,com&amp;quot;      # HAS TO BE IN A,B,C FORM&lt;br /&gt;
&lt;br /&gt;
cd /usr/home/bb/bbc1.9e-btf/etc&lt;br /&gt;
./bbchkcfg.sh &lt;br /&gt;
(y to questions)&lt;br /&gt;
./bbchkhosts.sh&lt;br /&gt;
(ignore ssh errors)&lt;br /&gt;
cd ../..&lt;br /&gt;
chown -R bb .&lt;br /&gt;
su bb&lt;br /&gt;
cd&lt;br /&gt;
cd bbc1.9e-btf/src&lt;br /&gt;
make; make install&lt;br /&gt;
cd ..&lt;br /&gt;
&lt;br /&gt;
vi /usr/home/bb/bbc1.9e-btf/bin/bb-cpu.sh&lt;br /&gt;
        $1 $TOPARGS &amp;gt; $BBTMP/TOP.$$&lt;br /&gt;
#        /usr/local/jail/bin/jtop &amp;gt; $BBTMP/TOP.$$&lt;br /&gt;
&lt;br /&gt;
./runbb.sh start&lt;br /&gt;
more BBOUT &lt;br /&gt;
(look for errors)&lt;br /&gt;
exit&lt;br /&gt;
&lt;br /&gt;
echo &#039;su - bb -c &amp;quot;cd /home/bb/bbc1.9e-btf; ./runbb.sh start&amp;quot;&#039; &amp;gt; /usr/local/etc/rc.d/bb.sh&lt;br /&gt;
chmod +x /usr/local/etc/rc.d/bb.sh&lt;br /&gt;
&lt;br /&gt;
Punch a hole in the firewall to allow it to communicate with bb monitor:&lt;br /&gt;
&lt;br /&gt;
ipfw add 00096 allow ip from 66.181.18.0/27 to 69.55.230.2&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
27. configure bb on mail:&lt;br /&gt;
vi /usr/home/bb/bbsrc/bb1.9e-btf/etc/bb-hosts&lt;br /&gt;
66.181.18.3 firewall2.johncompanies.com # ssh&lt;br /&gt;
&lt;br /&gt;
su bb&lt;br /&gt;
cd&lt;br /&gt;
bbsrc/bb/runbb.sh restart ; exit&lt;br /&gt;
&lt;br /&gt;
29. configure ntp&lt;br /&gt;
echo &amp;quot;server 10.1.2.1&amp;quot; &amp;gt; /etc/ntp.conf&lt;br /&gt;
&lt;br /&gt;
/usr/sbin/ntpd -p /var/run/ntpd.pid&lt;br /&gt;
sleep 2; ntpq -p&lt;br /&gt;
(confirm it’s able to reach our time server)&lt;br /&gt;
&lt;br /&gt;
echo &#039;/usr/sbin/ntpd -c /etc/ntp.conf -p /var/run/ntpd.pid -f /var/db/ntpd.drift&#039; &amp;gt; /usr/local/etc/rc.d/ntp.sh&lt;br /&gt;
chmod 0700 /usr/local/etc/rc.d/ntp.sh&lt;br /&gt;
&lt;br /&gt;
30. fwd and reverse lookups on ns1c&lt;br /&gt;
vr johncompanies.com&lt;br /&gt;
 (edit the PTR too)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
33. setup backups&lt;br /&gt;
echo &#039;#\!/bin/sh\&lt;br /&gt;
backupdir=/data/firewall2/current\&lt;br /&gt;
\&lt;br /&gt;
## ENTRY /etc &#039; &amp;gt; /usr/local/etc/backup.config&lt;br /&gt;
&lt;br /&gt;
setup backup dirs on backup3:&lt;br /&gt;
ssh backup3 mkdir -p /data/firewall2/current&lt;br /&gt;
&lt;br /&gt;
on backup3, add the system to the snapshot list:&lt;br /&gt;
vi /usr/local/sbin/snapshot_archive&lt;br /&gt;
&lt;br /&gt;
scp 69.55.230.2:/usr/local/etc/rsync.backup /usr/local/etc/rsync.backup &lt;br /&gt;
vi /usr/local/etc/rsync.backup &lt;br /&gt;
(change backup1 to backup3)&lt;br /&gt;
&lt;br /&gt;
crontab -e&lt;br /&gt;
1 0 * * * /usr/local/etc/rsync.backup&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
34. mkdir /root/logs&lt;br /&gt;
&lt;br /&gt;
35. edit sshd_config for security&lt;br /&gt;
vi /etc/ssh/sshd_config&lt;br /&gt;
ListenAddress 66.181.18.3&lt;br /&gt;
ListenAddress 10.1.2.1&lt;br /&gt;
&lt;br /&gt;
kill -1 `cat /var/run/sshd.pid`&lt;br /&gt;
&lt;br /&gt;
35. raid chk&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; /usr/local/sbin/lsiraidchk&lt;br /&gt;
#!/usr/bin/perl&lt;br /&gt;
&lt;br /&gt;
my @out = split &amp;quot;\n&amp;quot;, `megarc -ldInfo -a0 -Lall|grep Status:`;&lt;br /&gt;
&lt;br /&gt;
foreach (@out) {&lt;br /&gt;
    if ($_ =~ /DEGRADED/) { $date = `date`; chomp $date; `echo &amp;quot;$date: RAID ARRAY DEGRADED&amp;quot; &amp;gt;&amp;gt; /var/log/messages`; }&lt;br /&gt;
#print $_;&lt;br /&gt;
}&lt;br /&gt;
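A plain-sh sketch of the same check (a hypothetical helper, not part of the build; it assumes megarc prints one Status: line per logical drive):&lt;br /&gt;

```shell
# Sketch of the same check as a shell function; it reads the megarc output
# on stdin so it can be exercised without the controller present.
raidchk() {
    if grep 'Status:' | grep -q DEGRADED; then
        echo "$(date): RAID ARRAY DEGRADED"
    fi
}
# On the firewall it would be driven as:
#   megarc -ldInfo -a0 -Lall | raidchk >> /var/log/messages
```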
&lt;br /&gt;
36. add crontab entries&lt;br /&gt;
crontab -e&lt;br /&gt;
30 3 * * * /usr/local/etc/rsync.backup&lt;br /&gt;
0 0 1 * * /sbin/ipfw zero&lt;br /&gt;
0 0 1 * * /sbin/ipfw del 3 4 5&lt;br /&gt;
59 23 30 * * /sbin/ipfw show &amp;gt; /tmp/ipfw_count&lt;br /&gt;
3 0 30 * * /sbin/ipfw show &amp;gt; /tmp/ipfw_count&lt;br /&gt;
0 3 * * * /etc/makefwrules.pl; /etc/makepiperules.pl; &lt;br /&gt;
*/5 * * * * /usr/local/sbin/lsiraidchk&lt;br /&gt;
&lt;br /&gt;
#10 0 * * * rm /var/spool/clientmqueue/*&lt;br /&gt;
&lt;br /&gt;
scp /etc/makefwrules.pl user@64.163.14.48:~&lt;br /&gt;
scp /etc/makepiperules.pl user@64.163.14.48:~&lt;br /&gt;
mv /home/user/makefwrules.pl /etc&lt;br /&gt;
mv /home/user/makepiperules.pl /etc&lt;br /&gt;
touch /etc/firewall.sh&lt;br /&gt;
mkdir /etc/oldrules/&lt;br /&gt;
&lt;br /&gt;
other binaries&lt;br /&gt;
	&lt;br /&gt;
scp /usr/local/bin/rulemaker user@64.163.14.48:~&lt;br /&gt;
mv ~user/rulemaker /usr/local/sbin&lt;br /&gt;
scp ~user/Sendmail.pm user@64.163.14.48:~&lt;br /&gt;
scp ~user/doswatch.pl user@64.163.14.48:~&lt;br /&gt;
&lt;br /&gt;
Setup basic ruleset&lt;br /&gt;
&lt;br /&gt;
ipfw add 00009 count udp from any to any&lt;br /&gt;
ipfw add 00010 allow tcp from any to any established&lt;br /&gt;
ipfw add 00012 deny tcp from any to any tcpflags syn tcpoptions !mss&lt;br /&gt;
ipfw add 00012 deny icmp from any to any icmptypes 4,5,9,10,12,13,14,15,16,17,18&lt;br /&gt;
ipfw add 00012 deny tcp from any to any tcpflags syn,fin&lt;br /&gt;
ipfw add 00012 deny tcp from any to any tcpflags fin,psh,rst,urg&lt;br /&gt;
ipfw add 00012 allow icmp from any to any&lt;br /&gt;
ipfw add 00014 deny tcp from any to any dst-port 135&lt;br /&gt;
ipfw add 00150 skipto 65535 ip from any to any via em1 in&lt;br /&gt;
&lt;br /&gt;
IPKVM3:&lt;br /&gt;
00098 allow ip from { 69.55.230.6 or 69.55.230.7 } to 69.55.230.10 dst-port 139&lt;br /&gt;
00098 deny ip from any to 69.55.230.10 dst-port 139&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= firewall3 =&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
This machine is the backup firewall for the network at i2b. &lt;br /&gt;
&lt;br /&gt;
* Location: i2b, cab ?&lt;br /&gt;
* OS: FreeBSD 9.1 amd64&lt;br /&gt;
* Networking: Priv IP: 10.1.2.5, Pub IPs: 66.181.18.3 (external), 69.55.229.1 &amp;amp; 69.55.231.1 (internal). It has 3 network connections (2 onboard, 1 PCI) connecting to the external, internal and private networks. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The internal network NIC is the left one on the motherboard (69.55.229.1/24, ...).&lt;br /&gt;
The external network NIC is the right one on the motherboard (66.181.18.3/28).&lt;br /&gt;
The PCI ethernet card is connected to our private network (10.1.2.5/24).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Hardware: 6 hot-swap SCSI SCA drive bays (2 columns of 3; drive 0 top left, drive 1 just below it). Dual power supply. &lt;br /&gt;
* Drives: 160 GB (2 x 160GB) RAID1 array running on an LSI MegaRAID SCSI 320 PCI RAID card.&lt;br /&gt;
&lt;br /&gt;
== Services Provided ==&lt;br /&gt;
* firewall (ipfw)&lt;br /&gt;
* bigbrother&lt;br /&gt;
&lt;br /&gt;
== Firewall Rule Configuration ==&lt;br /&gt;
&lt;br /&gt;
See [[FreeBSD_Reference#Firewall_Rule_Configuration|Firewall Rule Configuration]] for more discussion on how to actually manipulate firewall rules.&lt;br /&gt;
&lt;br /&gt;
== Disaster Recovery ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To put the backup firewall3 into service:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Move the internal cable (to our networks) from firewall2 to em1 which is the left most ethernet port (69.55.229.1).&lt;br /&gt;
Move the external cable (to outside world) from firewall2 to em0 which is the port to the right on the motherboard (66.181.18.3).&lt;br /&gt;
The PCI ethernet port (fxp0) should already be connected to private network (10.1.2.5).&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here&#039;s what you need to put in /etc/rc.conf to get a firewall going (as far as routes and IPs)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
kern_securelevel_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
portmap_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
sendmail_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
usbd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
gateway_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
xntpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_client_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_reserved_port_only=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_flags=&amp;quot;-wW -a 10.1.2.5&amp;quot;&lt;br /&gt;
&lt;br /&gt;
fsck_y_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
background_fsck=&amp;quot;NO&amp;quot;&lt;br /&gt;
&lt;br /&gt;
defaultrouter=&amp;quot;66.181.18.2&amp;quot;&lt;br /&gt;
hostname=&amp;quot;firewall3.johncompanies.com&amp;quot;&lt;br /&gt;
ifconfig_em0=&amp;quot;inet 66.181.18.3  netmask 255.255.255.224&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ifconfig_em1=&amp;quot;inet 69.55.229.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em1_alias0=&amp;quot;inet 69.55.231.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# ifconfig_em1_alias1=&amp;quot;inet 65.50.228.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
# ifconfig_em1_alias2=&amp;quot;inet 65.50.229.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
# ifconfig_em1_alias3=&amp;quot;inet 65.50.230.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
# ifconfig_em1_alias4=&amp;quot;inet 65.50.231.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
# ifconfig_em1_alias5=&amp;quot;inet 65.50.232.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
# ifconfig_em1_alias6=&amp;quot;inet 65.50.233.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
# ifconfig_em1_alias7=&amp;quot;inet 65.50.234.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
# ifconfig_em1_alias8=&amp;quot;inet 65.50.235.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ifconfig_fxp0=&amp;quot;inet 10.1.2.5 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
sshd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
usbd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Cronjobs ==&lt;br /&gt;
30 3 * * * /usr/local/etc/rsync.backup&lt;br /&gt;
Backup to backup3&lt;br /&gt;
&lt;br /&gt;
 0 0 1 * * /sbin/ipfw zero&lt;br /&gt;
 0 0 1 * * /sbin/ipfw del 3 4 5&lt;br /&gt;
Reset counters and remove pipe rules on the 1st of the month. Rules 3, 4 and 5 are deleted automatically here, so don&#039;t number a rule 3, 4 or 5 unless it really is a temporary traffic cap. &lt;br /&gt;
&lt;br /&gt;
 59 23 30 * * /sbin/ipfw show &amp;gt; /tmp/ipfw_count&lt;br /&gt;
 3 0 30 * * /sbin/ipfw show &amp;gt; /tmp/ipfw_count&lt;br /&gt;
Capture the traffic counts on the 30th, near the end of the month, just before the counters are zeroed on the 1st&lt;br /&gt;
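The saved snapshots can later be mined for per-rule totals. A hypothetical helper, assuming the usual ipfw show column layout (rule number, packet count, byte count, then the rule body):&lt;br /&gt;

```shell
# rule_bytes RULENUM FILE: sum the byte column for one rule number in a
# saved "ipfw show" snapshot (columns: rulenum packets bytes rule...).
rule_bytes() {
    awk -v r="$1" '$1 == r { sum += $3 } END { print sum + 0 }' "$2"
}
```

e.g. rule_bytes 00009 /tmp/ipfw_count gives the month&#039;s UDP byte total (pass the rule number in the same zero-padded form it appears in the file).&lt;br /&gt;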
&lt;br /&gt;
 0 3 * * * /etc/makefwrules.pl; /etc/makepiperules.pl;&lt;br /&gt;
This script will create &amp;lt;tt&amp;gt;/etc/firewall.sh&amp;lt;/tt&amp;gt; which contains all the firewall and pipe rules in place at the time the script was run.&lt;br /&gt;
&lt;br /&gt;
 */5 * * * * /usr/local/sbin/lsiraidchk&lt;br /&gt;
Checking the health of the RAID array&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== DOS attacks ==&lt;br /&gt;
&lt;br /&gt;
See [[FreeBSD_Reference#Handling_a_DoS_attack|Handling a DoS attack]] regarding how to handle a DOS attack.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== build ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;partition map:&lt;br /&gt;
/ 58g&lt;br /&gt;
swap 4g&lt;br /&gt;
/var 512m&lt;br /&gt;
/tmp 512m&lt;br /&gt;
/usr 5.5g&lt;br /&gt;
&lt;br /&gt;
4. edit /etc/make.conf &lt;br /&gt;
echo &amp;quot;WITHOUT_X11=yes \&lt;br /&gt;
KERNCONF=firewall3 \&lt;br /&gt;
BOOT_COMCONSOLE_SPEED=115200&amp;quot; &amp;gt;&amp;gt; /etc/make.conf&lt;br /&gt;
&lt;br /&gt;
5. add settings to /boot/loader.conf and /boot.config&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;-Dh&amp;quot; &amp;gt;&amp;gt; /boot.config&lt;br /&gt;
&lt;br /&gt;
echo &#039;console=&amp;quot;comconsole,vidconsole&amp;quot; \&lt;br /&gt;
boot_multicons=&amp;quot;YES&amp;quot; \&lt;br /&gt;
boot_serial=&amp;quot;YES&amp;quot; \&lt;br /&gt;
comconsole_speed=&amp;quot;115200&amp;quot;&#039; &amp;gt;&amp;gt; /boot/loader.conf&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. turn off all ttyv&#039;s except 0 and 1 in /etc/ttys&lt;br /&gt;
also turn on ttyd0, change type to vt100:&lt;br /&gt;
vi /etc/ttys&lt;br /&gt;
&lt;br /&gt;
ttyv2   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv3   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv4   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv5   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv6   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv7   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
# Serial terminals&lt;br /&gt;
# The &#039;dialup&#039; keyword identifies dialin lines to login, fingerd etc.&lt;br /&gt;
ttyd0   &amp;quot;/usr/libexec/getty std.9600&amp;quot;   vt100   on  secure&lt;br /&gt;
&lt;br /&gt;
kill -1 1&lt;br /&gt;
&lt;br /&gt;
on console server:&lt;br /&gt;
vi /etc/remote&lt;br /&gt;
(rename port to jail8 depending on where and which digi plugged into)&lt;br /&gt;
test serial console&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. populate hosts&lt;br /&gt;
echo &amp;quot;69.55.230.10 backup2&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;69.55.230.11 backup1&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;10.1.2.3 backup3&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
&lt;br /&gt;
8. put key in authorized_keys on backup3&lt;br /&gt;
cd&lt;br /&gt;
ssh-keygen -t dsa -b 1024 &lt;br /&gt;
(default location, leave password blank)&lt;br /&gt;
&lt;br /&gt;
Punch a hole in firewall1 to allow traffic to backup servers @ castle:&lt;br /&gt;
&lt;br /&gt;
ipfw add 99 allow ip from 66.181.18.0/27 to 69.55.230.10 22&lt;br /&gt;
ipfw add 95 allow ip from 66.181.18.0/27 to 69.55.230.11 22&lt;br /&gt;
&lt;br /&gt;
cat /root/.ssh/id_dsa.pub | ssh backup3 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
cat /root/.ssh/id_dsa.pub | ssh backup2 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
cat /root/.ssh/id_dsa.pub | ssh backup1 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
&lt;br /&gt;
confirm that you can ssh to backup1, backup2 and backup3 without getting a password prompt&lt;br /&gt;
&lt;br /&gt;
ssh backup3 hostname&lt;br /&gt;
&lt;br /&gt;
ssh backup2 hostname&lt;br /&gt;
&lt;br /&gt;
ssh backup1 hostname&lt;br /&gt;
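The three checks above can be collapsed into a loop; a sketch that emits one verification command per host (pipe it to sh to run; BatchMode makes ssh fail rather than prompt when key auth is broken):&lt;br /&gt;

```shell
# Emit one verification command per backup host (dry run; pipe to sh to
# execute). BatchMode=yes makes ssh error out instead of prompting.
gen_ssh_checks() {
    for h in "$@"; do
        printf 'ssh -o BatchMode=yes %s hostname\n' "$h"
    done
}
gen_ssh_checks backup1 backup2 backup3
```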
&lt;br /&gt;
&lt;br /&gt;
10. edit root&#039;s path and login script:&lt;br /&gt;
vi /root/.cshrc&lt;br /&gt;
&lt;br /&gt;
Change alias entries (add G):&lt;br /&gt;
alias la        ls -aG&lt;br /&gt;
alias lf        ls -FAG&lt;br /&gt;
alias ll        ls -lAG&lt;br /&gt;
alias ls        ls -AG&lt;br /&gt;
alias mbm       mb mount&lt;br /&gt;
alias mbu       mb umount&lt;br /&gt;
&lt;br /&gt;
and alter the prompt, set the following:&lt;br /&gt;
set prompt = &amp;quot;`/bin/hostname -s` %/# &amp;quot;&lt;br /&gt;
&lt;br /&gt;
11. install cvsup&lt;br /&gt;
cd /usr/ports/net/cvsup-without-gui &lt;br /&gt;
make install clean; rehash; mail -s &#039;cvs installed&#039; 8583619553@vtext.com &amp;lt; /dev/null&lt;br /&gt;
&lt;br /&gt;
12. get latest sources for this release:&lt;br /&gt;
cd /usr/src &lt;br /&gt;
echo &amp;quot;*default host=cvsup4.freebsd.org\&lt;br /&gt;
*default base=/usr\&lt;br /&gt;
*default prefix=/usr\&lt;br /&gt;
*default release=cvs tag=RELENG_9_1\&lt;br /&gt;
*default delete use-rel-suffix\&lt;br /&gt;
*default compress\&lt;br /&gt;
src-all&amp;quot; &amp;gt; sup&lt;br /&gt;
&lt;br /&gt;
cvsup sup ; mail -s &#039;cvs sup done&#039; 8583619553@vtext.com &amp;lt; /dev/null&lt;br /&gt;
&lt;br /&gt;
13. configure new kernel. &lt;br /&gt;
&lt;br /&gt;
cd /usr/src/sys/amd64/conf &lt;br /&gt;
scp backup2:/mnt/data4/build/freebsd/firewall3-9.1 ./firewall3&lt;br /&gt;
&lt;br /&gt;
15. build, install kernel and world&lt;br /&gt;
&lt;br /&gt;
cd /boot&lt;br /&gt;
&lt;br /&gt;
mv kernel kernel.GENERIC&lt;br /&gt;
cd /usr/src&lt;br /&gt;
make buildkernel installkernel&lt;br /&gt;
&lt;br /&gt;
make buildworld ; mail -s &#039;buildworld done&#039; 8583619553@vtext.com &amp;lt; /dev/null&lt;br /&gt;
(supermicro: 2:15 mins, 2950: 38? mins)&lt;br /&gt;
make installworld &lt;br /&gt;
(2450: 3min, supermicro: 1min, 2950: :34)&lt;br /&gt;
mergemaster -i&lt;br /&gt;
&lt;br /&gt;
17. populate /etc/rc.conf with IPs and NFS settings&lt;br /&gt;
vi /etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
kern_securelevel_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
portmap_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
sendmail_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
usbd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
gateway_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
xntpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_client_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_reserved_port_only=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
fsck_y_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
background_fsck=&amp;quot;NO&amp;quot;&lt;br /&gt;
&lt;br /&gt;
hostname=&amp;quot;firewall3.johncompanies.com&amp;quot;&lt;br /&gt;
#  external network&lt;br /&gt;
ifconfig_em0=&amp;quot;inet 66.181.18.3  netmask 255.255.255.224&amp;quot;&lt;br /&gt;
&lt;br /&gt;
#  internal network&lt;br /&gt;
ifconfig_em1=&amp;quot;inet 69.55.229.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em1_alias0=&amp;quot;inet 69.55.231.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ifconfig_em1_alias1=&amp;quot;inet 65.50.228.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em1_alias2=&amp;quot;inet 65.50.229.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em1_alias3=&amp;quot;inet 65.50.230.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em1_alias4=&amp;quot;inet 65.50.231.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em1_alias5=&amp;quot;inet 65.50.232.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em1_alias6=&amp;quot;inet 65.50.233.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em1_alias7=&amp;quot;inet 65.50.234.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em1_alias8=&amp;quot;inet 65.50.235.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
defaultrouter=&amp;quot;66.181.18.2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
#  private network&lt;br /&gt;
ifconfig_fxp0=&amp;quot;inet 10.1.2.5 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
inetd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_flags=&amp;quot;-wW -a 10.1.2.5&amp;quot;&lt;br /&gt;
&lt;br /&gt;
sshd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
usbd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
ntpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
# powerd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
20. reboot. Confirm new kernel is loaded&lt;br /&gt;
&lt;br /&gt;
uname -a&lt;br /&gt;
&lt;br /&gt;
21. update ports:&lt;br /&gt;
cd /usr/ports&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;*default host=cvsup4.FreeBSD.org\&lt;br /&gt;
*default base=/usr\&lt;br /&gt;
*default prefix=/usr\&lt;br /&gt;
*default release=cvs tag=RELENG_9_1\&lt;br /&gt;
*default delete use-rel-suffix\&lt;br /&gt;
*default compress\&lt;br /&gt;
ports-all tag=.&amp;quot; &amp;gt; sup&lt;br /&gt;
&lt;br /&gt;
cvsup sup; mail -s &#039;cvs sup ports done&#039; 8583619553@vtext.com &amp;lt; /dev/null&lt;br /&gt;
&lt;br /&gt;
22. Install raid mgmt tool&lt;br /&gt;
&lt;br /&gt;
# linux base&lt;br /&gt;
cd /usr/ports/devel/libtool22&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
cd /usr/ports/emulators/linux_base-fc4&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
scp backup2:/d4/build/3ware/tw_cli-freebsd-x86_64-9.5.0.1.tgz /usr/local/sbin&lt;br /&gt;
cd /usr/local/sbin &lt;br /&gt;
tar xzvf tw_cli-freebsd-x86_64-9.5.0.1.tgz&lt;br /&gt;
rm tw_cli-freebsd-x86_64-9.5.0.1.tgz&lt;br /&gt;
&lt;br /&gt;
23. install rsync from ports&lt;br /&gt;
cd /usr/ports/net/rsync&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
choose default options&lt;br /&gt;
&lt;br /&gt;
25. install bb client&lt;br /&gt;
adduser&lt;br /&gt;
Username: bb&lt;br /&gt;
Full name: bb&lt;br /&gt;
Uid (Leave empty for default): 1984&lt;br /&gt;
Login group [bb]:&lt;br /&gt;
Login group is bb. Invite bb into other groups? []:&lt;br /&gt;
Login class [default]:&lt;br /&gt;
Shell (sh csh tcsh nologin) [sh]: &lt;br /&gt;
Home directory [/home/bb]:&lt;br /&gt;
Use password-based authentication? [yes]:&lt;br /&gt;
Use an empty password? (yes/no) [no]:&lt;br /&gt;
Use a random password? (yes/no) [no]: yes&lt;br /&gt;
Lock out the account after creation? [no]:&lt;br /&gt;
Username   : bb&lt;br /&gt;
Password   : &amp;lt;random&amp;gt;&lt;br /&gt;
Full Name  : bb&lt;br /&gt;
Uid        : 1984&lt;br /&gt;
Class      :&lt;br /&gt;
Groups     : bb&lt;br /&gt;
Home       : /home/bb&lt;br /&gt;
Shell      : /bin/sh&lt;br /&gt;
Locked     : no&lt;br /&gt;
OK? (yes/no): yes&lt;br /&gt;
&lt;br /&gt;
cd /usr/home/bb&lt;br /&gt;
scp backup2:/mnt/data4/build/bb/bb-freebsd.tar .&lt;br /&gt;
tar xvf bb-freebsd.tar&lt;br /&gt;
&lt;br /&gt;
edit /home/bb/bbc1.9e-btf/etc/bb-hosts with something like:&lt;br /&gt;
echo &amp;quot;69.55.230.2 mail.johncompanies.com # BBPAGER BBNET BBDISPLAY smtp ssh \&lt;br /&gt;
10.1.2.5 firewall3.johncompanies.com # ssh&amp;quot; &amp;gt; /home/bb/bbc1.9e-btf/etc/bb-hosts&lt;br /&gt;
&lt;br /&gt;
vi /home/bb/bbc1.9e-btf/ext/openfiles &lt;br /&gt;
MACHINE=&amp;quot;firewall3,johncompanies,com&amp;quot;      # HAS TO BE IN A,B,C FORM&lt;br /&gt;
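The comma form is just the FQDN with the dots swapped for commas, so it can be derived instead of typed by hand (a sketch; assumes the FQDN is known or comes from hostname):&lt;br /&gt;

```shell
# bb's ext scripts want the FQDN with dots replaced by commas;
# tr does the conversion in one step.
MACHINE=$(echo firewall3.johncompanies.com | tr . ,)
echo "$MACHINE"
```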
&lt;br /&gt;
cd /usr/home/bb/bbc1.9e-btf/etc&lt;br /&gt;
./bbchkcfg.sh &lt;br /&gt;
(y to questions)&lt;br /&gt;
./bbchkhosts.sh&lt;br /&gt;
(ignore ssh errors)&lt;br /&gt;
cd ../..&lt;br /&gt;
chown -R bb .&lt;br /&gt;
su bb&lt;br /&gt;
cd&lt;br /&gt;
cd bbc1.9e-btf/src&lt;br /&gt;
make; make install&lt;br /&gt;
cd ..&lt;br /&gt;
&lt;br /&gt;
vi /usr/home/bb/bbc1.9e-btf/bin/bb-cpu.sh&lt;br /&gt;
        $1 $TOPARGS &amp;gt; $BBTMP/TOP.$$&lt;br /&gt;
#        /usr/local/jail/bin/jtop &amp;gt; $BBTMP/TOP.$$&lt;br /&gt;
&lt;br /&gt;
./runbb.sh start&lt;br /&gt;
more BBOUT &lt;br /&gt;
(look for errors)&lt;br /&gt;
exit&lt;br /&gt;
&lt;br /&gt;
echo &#039;su - bb -c &amp;quot;cd /home/bb/bbc1.9e-btf; ./runbb.sh start&amp;quot;&#039; &amp;gt; /usr/local/etc/rc.d/bb.sh&lt;br /&gt;
chmod +x /usr/local/etc/rc.d/bb.sh&lt;br /&gt;
&lt;br /&gt;
Punch a hole in the firewall to allow it to communicate with bb monitor:&lt;br /&gt;
&lt;br /&gt;
ipfw add 00096 allow ip from 66.181.18.0/27 to 69.55.230.2&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
27. configure bb on mail:&lt;br /&gt;
vi /usr/home/bb/bbsrc/bb1.9e-btf/etc/bb-hosts&lt;br /&gt;
66.181.18.3 firewall3.johncompanies.com # ssh&lt;br /&gt;
&lt;br /&gt;
su bb&lt;br /&gt;
cd&lt;br /&gt;
bbsrc/bb/runbb.sh restart ; exit&lt;br /&gt;
&lt;br /&gt;
29. configure ntp&lt;br /&gt;
echo &amp;quot;server 10.1.2.1&amp;quot; &amp;gt; /etc/ntp.conf&lt;br /&gt;
&lt;br /&gt;
/usr/sbin/ntpd -p /var/run/ntpd.pid&lt;br /&gt;
sleep 2; ntpq -p&lt;br /&gt;
(confirm it’s able to reach our time server)&lt;br /&gt;
&lt;br /&gt;
echo &#039;/usr/sbin/ntpd -c /etc/ntp.conf -p /var/run/ntpd.pid -f /var/db/ntpd.drift&#039; &amp;gt; /usr/local/etc/rc.d/ntp.sh&lt;br /&gt;
chmod 0700 /usr/local/etc/rc.d/ntp.sh&lt;br /&gt;
&lt;br /&gt;
30. fwd and reverse lookups on ns1c&lt;br /&gt;
vr johncompanies.com&lt;br /&gt;
 (edit the PTR too)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
33. setup backups&lt;br /&gt;
echo &#039;#\!/bin/sh\&lt;br /&gt;
backupdir=/data/firewall3/current\&lt;br /&gt;
\&lt;br /&gt;
## ENTRY /etc &#039; &amp;gt; /usr/local/etc/backup.config&lt;br /&gt;
&lt;br /&gt;
setup backup dirs on backup3:&lt;br /&gt;
ssh backup3 mkdir -p /data/firewall3/current&lt;br /&gt;
&lt;br /&gt;
on backup3, add the system to the snapshot list:&lt;br /&gt;
vi /usr/local/sbin/snapshot_archive&lt;br /&gt;
&lt;br /&gt;
scp 69.55.230.2:/usr/local/etc/rsync.backup /usr/local/etc/rsync.backup &lt;br /&gt;
vi /usr/local/etc/rsync.backup &lt;br /&gt;
(change backup1 to backup3)&lt;br /&gt;
&lt;br /&gt;
crontab -e&lt;br /&gt;
1 0 * * * /usr/local/etc/rsync.backup&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
34. mkdir /root/logs&lt;br /&gt;
&lt;br /&gt;
35. edit sshd_config for security&lt;br /&gt;
vi /etc/ssh/sshd_config&lt;br /&gt;
ListenAddress 66.181.18.3&lt;br /&gt;
ListenAddress 10.1.2.5&lt;br /&gt;
&lt;br /&gt;
kill -1 `cat /var/run/sshd.pid`&lt;br /&gt;
&lt;br /&gt;
35. raid chk&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; /usr/local/sbin/lsiraidchk&lt;br /&gt;
#!/usr/bin/perl&lt;br /&gt;
&lt;br /&gt;
my @out = split &amp;quot;\n&amp;quot;, `megarc -ldInfo -a0 -Lall|grep Status:`;&lt;br /&gt;
&lt;br /&gt;
foreach (@out) {&lt;br /&gt;
    if ($_ =~ /DEGRADED/) { $date = `date`; chomp $date; `echo &amp;quot;$date: RAID ARRAY DEGRADED&amp;quot; &amp;gt;&amp;gt; /var/log/messages`; }&lt;br /&gt;
#print $_;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
36. add crontab entries&lt;br /&gt;
crontab -e&lt;br /&gt;
30 3 * * * /usr/local/etc/rsync.backup&lt;br /&gt;
0 0 1 * * /sbin/ipfw zero&lt;br /&gt;
0 0 1 * * /sbin/ipfw del 3 4 5&lt;br /&gt;
59 23 30 * * /sbin/ipfw show &amp;gt; /tmp/ipfw_count&lt;br /&gt;
3 0 30 * * /sbin/ipfw show &amp;gt; /tmp/ipfw_count&lt;br /&gt;
0 3 * * * /etc/makefwrules.pl; /etc/makepiperules.pl; &lt;br /&gt;
*/5 * * * * /usr/local/sbin/lsiraidchk&lt;br /&gt;
&lt;br /&gt;
#10 0 * * * rm /var/spool/clientmqueue/*&lt;br /&gt;
&lt;br /&gt;
scp /etc/makefwrules.pl user@64.163.14.48:~&lt;br /&gt;
scp /etc/makepiperules.pl user@64.163.14.48:~&lt;br /&gt;
mv /home/user/makefwrules.pl /etc&lt;br /&gt;
mv /home/user/makepiperules.pl /etc&lt;br /&gt;
touch /etc/firewall.sh&lt;br /&gt;
mkdir /etc/oldrules/&lt;br /&gt;
&lt;br /&gt;
other binaries&lt;br /&gt;
	&lt;br /&gt;
scp /usr/local/bin/rulemaker user@64.163.14.48:~&lt;br /&gt;
mv ~user/rulemaker /usr/local/sbin&lt;br /&gt;
scp ~user/Sendmail.pm user@64.163.14.48:~&lt;br /&gt;
scp ~user/doswatch.pl user@64.163.14.48:~&lt;br /&gt;
&lt;br /&gt;
Setup basic ruleset&lt;br /&gt;
&lt;br /&gt;
ipfw add 00009 count udp from any to any&lt;br /&gt;
ipfw add 00010 allow tcp from any to any established&lt;br /&gt;
ipfw add 00012 deny tcp from any to any tcpflags syn tcpoptions !mss&lt;br /&gt;
ipfw add 00012 deny icmp from any to any icmptypes 4,5,9,10,12,13,14,15,16,17,18&lt;br /&gt;
ipfw add 00012 deny tcp from any to any tcpflags syn,fin&lt;br /&gt;
ipfw add 00012 deny tcp from any to any tcpflags fin,psh,rst,urg&lt;br /&gt;
ipfw add 00012 allow icmp from any to any&lt;br /&gt;
ipfw add 00014 deny tcp from any to any dst-port 135&lt;br /&gt;
ipfw add 00150 skipto 65535 ip from any to any via em1 in&lt;br /&gt;
&lt;br /&gt;
IPKVM3:&lt;br /&gt;
00098 allow ip from { 69.55.230.6 or 69.55.230.7 } to 69.55.230.10 dst-port 139&lt;br /&gt;
00098 deny ip from any to 69.55.230.10 dst-port 139&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= wiki =&lt;br /&gt;
&lt;br /&gt;
The wiki (mediawiki) runs on nat2 in a jail running off 69.55.229.8&lt;br /&gt;
&lt;br /&gt;
The backup wiki lives on virt13 in CT 5 / 69.55.230.18&lt;br /&gt;
&lt;br /&gt;
== Setup jail ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir /mnt/data1/wiki-dir&lt;br /&gt;
cd /usr/src&lt;br /&gt;
make installworld DESTDIR=/mnt/data1/wiki-dir&lt;br /&gt;
cd etc&lt;br /&gt;
make distribution DESTDIR=/mnt/data1/wiki-dir&lt;br /&gt;
&lt;br /&gt;
mount -t devfs devfs /mnt/data1/wiki-dir/dev&lt;br /&gt;
devfs -m /mnt/data1/wiki-dir/dev rule -s 3 applyset &lt;br /&gt;
&lt;br /&gt;
cd /mnt/data1/wiki-dir&lt;br /&gt;
&lt;br /&gt;
ln -sf dev/null kernel&lt;br /&gt;
&lt;br /&gt;
scp jail9:/usr/local/sbin/jkill /mnt/data1/wiki-dir/sbin&lt;br /&gt;
&lt;br /&gt;
jail /mnt/data1/wiki-dir wiki.johncompanies.com 69.55.229.8 /bin/sh&lt;br /&gt;
csh&lt;br /&gt;
&lt;br /&gt;
touch /etc/fstab&lt;br /&gt;
echo &#039;network_interfaces=&amp;quot;&amp;quot;\&lt;br /&gt;
hostname=&amp;quot;wiki.johncompanies.com&amp;quot;\&lt;br /&gt;
kern_securelevel_enable=&amp;quot;NO&amp;quot;\&lt;br /&gt;
sendmail_enable=&amp;quot;YES&amp;quot;\&lt;br /&gt;
sshd_enable=&amp;quot;YES&amp;quot;&#039; &amp;gt; /etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;nameserver 69.55.229.3\&lt;br /&gt;
nameserver 69.55.225.225&amp;quot; &amp;gt;&amp;gt; /etc/resolv.conf &lt;br /&gt;
&lt;br /&gt;
vi /etc/crontab&lt;br /&gt;
 &lt;br /&gt;
(remove the adjkerntz lines )&lt;br /&gt;
 &lt;br /&gt;
vi /etc/periodic/security/100.chksetuid&lt;br /&gt;
 &lt;br /&gt;
replace: MP=`mount -t ufs | grep -v &amp;quot; nosuid&amp;quot; | awk &#039;{ print $3 }&#039; | sort`&lt;br /&gt;
 with: MP=&#039;/&#039; (use single quotes) &lt;br /&gt;
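The same edit can be scripted; a portable sketch (on FreeBSD, sed -i &#039;&#039; would do it in place; assumes the stock file has a single MP= line):&lt;br /&gt;

```shell
# Rewrite the MP= line so the setuid check only scans /; prints the
# modified file to stdout (redirect or use sed -i '' to apply).
fix_chksetuid() {
    sed "s|^MP=.*|MP='/'|" "$1"
}
```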
&lt;br /&gt;
mkdir -p /usr/compat/linux/dev&lt;br /&gt;
 &lt;br /&gt;
adduser&lt;br /&gt;
&lt;br /&gt;
Username: user&lt;br /&gt;
Full name: user&lt;br /&gt;
Uid (Leave empty for default):&lt;br /&gt;
Login group [user]:&lt;br /&gt;
Login group is user. Invite user into other groups? []: wheel&lt;br /&gt;
Login class [default]:&lt;br /&gt;
Shell (sh csh tcsh nologin) [sh]:&lt;br /&gt;
Home directory [/home/user]:&lt;br /&gt;
Home directory permissions (Leave empty for default):&lt;br /&gt;
Use password-based authentication? [yes]:&lt;br /&gt;
Use an empty password? (yes/no) [no]:&lt;br /&gt;
Use a random password? (yes/no) [no]: y&lt;br /&gt;
Lock out the account after creation? [no]:&lt;br /&gt;
Username   : user&lt;br /&gt;
Password   : &amp;lt;random&amp;gt;&lt;br /&gt;
Full Name  : user&lt;br /&gt;
Uid        : 1001&lt;br /&gt;
Class      :&lt;br /&gt;
Groups     : user&lt;br /&gt;
Home       : /home/user&lt;br /&gt;
Home Mode  :&lt;br /&gt;
Shell      : /bin/sh&lt;br /&gt;
Locked     : no&lt;br /&gt;
OK? (yes/no): y&lt;br /&gt;
adduser: INFO: Successfully added (user) to the user database.&lt;br /&gt;
adduser: INFO: Password for (user) is: 901gmYjO&lt;br /&gt;
Add another user? (yes/no): n&lt;br /&gt;
Goodbye! &lt;br /&gt;
&lt;br /&gt;
vi /usr/home/user/.profile &lt;br /&gt;
TERM=vt100;     export TERM&lt;br /&gt;
 &lt;br /&gt;
tzsetup&lt;br /&gt;
 &lt;br /&gt;
newaliases &lt;br /&gt;
 &lt;br /&gt;
rm /sbin/halt /sbin/reboot&lt;br /&gt;
ln /sbin/jkill /sbin/halt&lt;br /&gt;
ln /sbin/jkill /sbin/reboot&lt;br /&gt;
 &lt;br /&gt;
vi /etc/syslog.conf&lt;br /&gt;
#*.err;kern.warning;auth.notice;mail.crit               /dev/console&lt;br /&gt;
*.err;kern.warning;auth.notice;mail.crit                /var/log/messages &lt;br /&gt;
&lt;br /&gt;
exit&lt;br /&gt;
exit&lt;br /&gt;
 &lt;br /&gt;
cd libexec&lt;br /&gt;
chflags noschg ld-elf32.so.1&lt;br /&gt;
chflags noschg ld-elf.so.1&lt;br /&gt;
mv ld-elf32.so.1 ld-elf32.so.1-orig&lt;br /&gt;
ln ld-elf.so.1 ld-elf32.so.1&lt;br /&gt;
chflags schg ld-elf.so.1&lt;br /&gt;
chflags schg ld-elf32.so.1&lt;br /&gt;
 &lt;br /&gt;
cp -r /usr/ports /mnt/data1/wiki-dir/usr &lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; /usr/local/etc/rc.d/wiki.sh&lt;br /&gt;
mount -t devfs devfs /mnt/data1/wiki-dir/dev/&lt;br /&gt;
devfs -m /mnt/data1/wiki-dir/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/wiki-dir wiki.johncompanies.com 69.55.229.8 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
chmod 0700 /usr/local/etc/rc.d/wiki.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== mediawiki setup ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
cd /usr/ports/net/rsync&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
cd /usr/ports/distfiles/&lt;br /&gt;
fetch http://downloads.mysql.com/archives/mysql-5.5/mysql-5.5.4-m3.tar.gz&lt;br /&gt;
cd /usr/ports/databases/mysql55-server&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
cd /usr/ports/distfiles/&lt;br /&gt;
fetch http://downloads.php.net/johannes/php-5.3.2.tar.bz2&lt;br /&gt;
cd /usr/ports/lang/php52&lt;br /&gt;
make install clean&lt;br /&gt;
(build apache module)&lt;br /&gt;
&lt;br /&gt;
cd /usr/ports/lang/php5-extensions&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
cd /usr/ports/www/apache22&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
cd /usr/local/www/&lt;br /&gt;
fetch http://download.wikimedia.org/mediawiki/1.19/mediawiki-1.19.2.tar.gz&lt;br /&gt;
tar xzf mediawiki-1.19.2.tar.gz&lt;br /&gt;
mv mediawiki-1.19.2 wiki&lt;br /&gt;
&lt;br /&gt;
vi /usr/local/etc/apache22/httpd.conf&lt;br /&gt;
DocumentRoot &amp;quot;/usr/local/www/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Include etc/apache22/extra/vhost-wiki.conf&lt;br /&gt;
Listen 443&lt;br /&gt;
&lt;br /&gt;
&amp;lt;IfModule mod_php5.c&amp;gt;&lt;br /&gt;
    &amp;lt;FilesMatch &amp;quot;\.ph(p3?|tml)$&amp;quot;&amp;gt;&lt;br /&gt;
        SetHandler application/x-httpd-php&lt;br /&gt;
    &amp;lt;/FilesMatch&amp;gt;&lt;br /&gt;
    &amp;lt;FilesMatch &amp;quot;\.phps$&amp;quot;&amp;gt;&lt;br /&gt;
        SetHandler application/x-httpd-php-source&lt;br /&gt;
    &amp;lt;/FilesMatch&amp;gt;&lt;br /&gt;
    # To re-enable php in user directories comment the following lines&lt;br /&gt;
    # (from &amp;lt;IfModule ...&amp;gt; to &amp;lt;/IfModule&amp;gt;.) Do NOT set it to On as it&lt;br /&gt;
    # prevents .htaccess files from disabling it.&lt;br /&gt;
    &amp;lt;IfModule mod_userdir.c&amp;gt;&lt;br /&gt;
        &amp;lt;Directory /home/*/public_html&amp;gt;&lt;br /&gt;
            php_admin_value engine Off&lt;br /&gt;
        &amp;lt;/Directory&amp;gt;&lt;br /&gt;
    &amp;lt;/IfModule&amp;gt;&lt;br /&gt;
&amp;lt;/IfModule&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; /usr/local/etc/apache22/extra/vhost-wiki.conf &lt;br /&gt;
&amp;lt;VirtualHost *:443&amp;gt;&lt;br /&gt;
        ServerAdmin support@johncompanies.com&lt;br /&gt;
&lt;br /&gt;
        DocumentRoot /usr/local/www/wiki&lt;br /&gt;
#        &amp;lt;Directory /&amp;gt;&lt;br /&gt;
#                Options FollowSymLinks&lt;br /&gt;
#                AllowOverride None&lt;br /&gt;
#                Order deny,allow&lt;br /&gt;
#        &amp;lt;/Directory&amp;gt;&lt;br /&gt;
        &amp;lt;Directory /usr/local/www/wiki&amp;gt;&lt;br /&gt;
                Options Indexes FollowSymLinks MultiViews&lt;br /&gt;
                Deny from all&lt;br /&gt;
                AllowOverride AuthConfig&lt;br /&gt;
                Order allow,deny&lt;br /&gt;
                DirectoryIndex index.php&lt;br /&gt;
                #Allow from 69.55.233.195&lt;br /&gt;
                #Allow from boody.dyndns.org&lt;br /&gt;
        &amp;lt;/Directory&amp;gt;&lt;br /&gt;
&lt;br /&gt;
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/&lt;br /&gt;
        &amp;lt;Directory &amp;quot;/usr/lib/cgi-bin&amp;quot;&amp;gt;&lt;br /&gt;
                AllowOverride None&lt;br /&gt;
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch&lt;br /&gt;
                Order allow,deny&lt;br /&gt;
                Allow from all&lt;br /&gt;
        &amp;lt;/Directory&amp;gt;&lt;br /&gt;
&lt;br /&gt;
        ErrorLog /var/log/httpd-error.log&lt;br /&gt;
&lt;br /&gt;
        # Possible values include: debug, info, notice, warn, error, crit,&lt;br /&gt;
        # alert, emerg.&lt;br /&gt;
        LogLevel warn&lt;br /&gt;
&lt;br /&gt;
        CustomLog /var/log/httpd-access.log combined&lt;br /&gt;
&lt;br /&gt;
    Alias /doc/ &amp;quot;/usr/share/doc/&amp;quot;&lt;br /&gt;
    &amp;lt;Directory &amp;quot;/usr/share/doc/&amp;quot;&amp;gt;&lt;br /&gt;
        Options Indexes MultiViews FollowSymLinks&lt;br /&gt;
        AllowOverride None&lt;br /&gt;
        Order deny,allow&lt;br /&gt;
        Deny from all&lt;br /&gt;
        Allow from 127.0.0.0/255.0.0.0 ::1/128&lt;br /&gt;
    &amp;lt;/Directory&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    SSLEngine on&lt;br /&gt;
    SSLCertificateFile /usr/local/etc/apache22/ssl/server.crt&lt;br /&gt;
    SSLCertificateKeyFile /usr/local/etc/apache22/ssl/server.key&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/VirtualHost&amp;gt;&lt;br /&gt;
&lt;br /&gt;
mkdir ssl&lt;br /&gt;
cd ssl&lt;br /&gt;
&lt;br /&gt;
openssl req -days 1999 -new -x509 -nodes -out server.crt -keyout server.key&lt;br /&gt;
US&lt;br /&gt;
CA&lt;br /&gt;
San Diego&lt;br /&gt;
johncompanies.com&lt;br /&gt;
johncompanies.com&lt;br /&gt;
wiki.johncompanies.com&lt;br /&gt;
support@johncompanies.com&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; /usr/local/www/wiki/.htaccess&lt;br /&gt;
AuthType Basic&lt;br /&gt;
AuthUserFile /usr/local/etc/apache22/wiki.passwd&lt;br /&gt;
AuthName wiki&lt;br /&gt;
require valid-user&lt;br /&gt;
satisfy any&lt;br /&gt;
&lt;br /&gt;
cd /usr/local/etc/apache22&lt;br /&gt;
htpasswd -c wiki.passwd admin&lt;br /&gt;
&lt;br /&gt;
https://69.55.229.8/index.php&lt;br /&gt;
&lt;br /&gt;
use mysql (innodb)&lt;br /&gt;
wiki name: JCWiki&lt;br /&gt;
Support / (mail pass) / support@johncompanies.com&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; /usr/local/www/wiki/LocalSettings.php&lt;br /&gt;
&lt;br /&gt;
&amp;lt;?php&lt;br /&gt;
# This file was automatically generated by the MediaWiki 1.19.2&lt;br /&gt;
# installer. If you make manual changes, please keep track in case you&lt;br /&gt;
# need to recreate them later.&lt;br /&gt;
#&lt;br /&gt;
# See includes/DefaultSettings.php for all configurable settings&lt;br /&gt;
# and their default values, but don&#039;t forget to make changes in _this_&lt;br /&gt;
# file, not there.&lt;br /&gt;
#&lt;br /&gt;
# Further documentation for configuration settings may be found at:&lt;br /&gt;
# http://www.mediawiki.org/wiki/Manual:Configuration_settings&lt;br /&gt;
&lt;br /&gt;
# Protect against web entry&lt;br /&gt;
if ( !defined( &#039;MEDIAWIKI&#039; ) ) {&lt;br /&gt;
	exit;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
## Uncomment this to disable output compression&lt;br /&gt;
# $wgDisableOutputCompression = true;&lt;br /&gt;
&lt;br /&gt;
$wgSitename      = &amp;quot;JCWiki&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
## The URL base path to the directory containing the wiki;&lt;br /&gt;
## defaults for all runtime URL paths are based off of this.&lt;br /&gt;
## For more information on customizing the URLs please see:&lt;br /&gt;
## http://www.mediawiki.org/wiki/Manual:Short_URL&lt;br /&gt;
$wgScriptPath       = &amp;quot;&amp;quot;;&lt;br /&gt;
$wgScriptExtension  = &amp;quot;.php&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
## The protocol and server name to use in fully-qualified URLs&lt;br /&gt;
$wgServer           = &amp;quot;https://69.55.229.8&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
## The relative URL path to the skins directory&lt;br /&gt;
$wgStylePath        = &amp;quot;$wgScriptPath/skins&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
## The relative URL path to the logo.  Make sure you change this from the default,&lt;br /&gt;
## or else you&#039;ll overwrite your logo when you upgrade!&lt;br /&gt;
#$wgLogo             = &amp;quot;$wgStylePath/common/images/wiki.png&amp;quot;;&lt;br /&gt;
$wgLogo             = &amp;quot;$wgStylePath/common/images/jclogo.gif&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
## UPO means: this is also a user preference option&lt;br /&gt;
&lt;br /&gt;
$wgEnableEmail      = true;&lt;br /&gt;
$wgEnableUserEmail  = true; # UPO&lt;br /&gt;
&lt;br /&gt;
$wgEmergencyContact = &amp;quot;apache@69.55.229.8&amp;quot;;&lt;br /&gt;
$wgPasswordSender   = &amp;quot;apache@69.55.229.8&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
$wgEnotifUserTalk      = false; # UPO&lt;br /&gt;
$wgEnotifWatchlist     = false; # UPO&lt;br /&gt;
$wgEmailAuthentication = true;&lt;br /&gt;
&lt;br /&gt;
## Database settings&lt;br /&gt;
$wgDBtype           = &amp;quot;mysql&amp;quot;;&lt;br /&gt;
$wgDBserver         = &amp;quot;localhost&amp;quot;;&lt;br /&gt;
$wgDBname           = &amp;quot;my_wiki&amp;quot;;&lt;br /&gt;
$wgDBuser           = &amp;quot;root&amp;quot;;&lt;br /&gt;
$wgDBpassword       = &amp;quot;&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# MySQL specific settings&lt;br /&gt;
$wgDBprefix         = &amp;quot;&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# MySQL table options to use during installation or update&lt;br /&gt;
$wgDBTableOptions   = &amp;quot;ENGINE=InnoDB, DEFAULT CHARSET=binary&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Experimental charset support for MySQL 5.0.&lt;br /&gt;
$wgDBmysql5 = false;&lt;br /&gt;
&lt;br /&gt;
## Shared memory settings&lt;br /&gt;
$wgMainCacheType    = CACHE_NONE;&lt;br /&gt;
$wgMemCachedServers = array();&lt;br /&gt;
&lt;br /&gt;
## To enable image uploads, make sure the &#039;images&#039; directory&lt;br /&gt;
## is writable, then set this to true:&lt;br /&gt;
$wgEnableUploads  = false;&lt;br /&gt;
#$wgUseImageMagick = true;&lt;br /&gt;
#$wgImageMagickConvertCommand = &amp;quot;/usr/bin/convert&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# InstantCommons allows wiki to use images from http://commons.wikimedia.org&lt;br /&gt;
$wgUseInstantCommons  = false;&lt;br /&gt;
&lt;br /&gt;
## If you use ImageMagick (or any other shell command) on a&lt;br /&gt;
## Linux server, this will need to be set to the name of an&lt;br /&gt;
## available UTF-8 locale&lt;br /&gt;
$wgShellLocale = &amp;quot;en_US.utf8&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
## If you want to use image uploads under safe mode,&lt;br /&gt;
## create the directories images/archive, images/thumb and&lt;br /&gt;
## images/temp, and make them all writable. Then uncomment&lt;br /&gt;
## this, if it&#039;s not already uncommented:&lt;br /&gt;
#$wgHashedUploadDirectory = false;&lt;br /&gt;
&lt;br /&gt;
## Set $wgCacheDirectory to a writable directory on the web server&lt;br /&gt;
## to make your wiki go slightly faster. The directory should not&lt;br /&gt;
## be publically accessible from the web.&lt;br /&gt;
#$wgCacheDirectory = &amp;quot;$IP/cache&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Site language code, should be one of the list in ./languages/Names.php&lt;br /&gt;
$wgLanguageCode = &amp;quot;en&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
$wgSecretKey = &amp;quot;abc699ef26890b49b4055430f8ebbd25e84cce21a7e53aeaec4d4313af4c9739&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Site upgrade key. Must be set to a string (default provided) to turn on the&lt;br /&gt;
# web installer while LocalSettings.php is in place&lt;br /&gt;
$wgUpgradeKey = &amp;quot;3196710f4a7d7332&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
## Default skin: you can change the default skin. Use the internal symbolic&lt;br /&gt;
## names, ie &#039;standard&#039;, &#039;nostalgia&#039;, &#039;cologneblue&#039;, &#039;monobook&#039;, &#039;vector&#039;:&lt;br /&gt;
$wgDefaultSkin = &amp;quot;vector&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
## For attaching licensing metadata to pages, and displaying an&lt;br /&gt;
## appropriate copyright notice / icon. GNU Free Documentation&lt;br /&gt;
## License and Creative Commons licenses are supported so far.&lt;br /&gt;
$wgRightsPage = &amp;quot;&amp;quot;; # Set to the title of a wiki page that describes your license/copyright&lt;br /&gt;
$wgRightsUrl  = &amp;quot;&amp;quot;;&lt;br /&gt;
$wgRightsText = &amp;quot;&amp;quot;;&lt;br /&gt;
$wgRightsIcon = &amp;quot;&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Path to the GNU diff3 utility. Used for conflict resolution.&lt;br /&gt;
$wgDiff3 = &amp;quot;/usr/bin/diff3&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Query string length limit for ResourceLoader. You should only set this if&lt;br /&gt;
# your web server has a query string length limit (then set it to that limit),&lt;br /&gt;
# or if you have suhosin.get.max_value_length set in php.ini (then set it to&lt;br /&gt;
# that value)&lt;br /&gt;
$wgResourceLoaderMaxQueryLength = -1;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# End of automatically generated settings.&lt;br /&gt;
# Add more configuration options below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
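The openssl step earlier in this section answers the certificate DN prompts interactively (US / CA / San Diego / ...). The same certificate can be produced non-interactively with -subj, which is handy when rebuilding the host; a sketch using the values from above:&lt;br /&gt;

```shell
# Self-signed cert/key matching the interactive answers above;
# -nodes leaves the key unencrypted so Apache can start unattended.
openssl req -days 1999 -new -x509 -nodes \
  -subj '/C=US/ST=CA/L=San Diego/O=johncompanies.com/CN=wiki.johncompanies.com' \
  -keyout server.key -out server.crt
```

Run from /usr/local/etc/apache22/ssl so the output paths match the SSLCertificateFile/SSLCertificateKeyFile directives in the vhost.&lt;br /&gt;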
== copy/backup wiki ==&lt;br /&gt;
on main/primary wiki:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# stop mysql on both machines so the raw datadir copy is consistent&lt;br /&gt;
/usr/local/etc/rc.d/mysql-server stop&lt;br /&gt;
ssh 69.55.230.18 &amp;quot;/etc/init.d/mysql stop&amp;quot;&lt;br /&gt;
# copy the wiki database plus the shared InnoDB data/log files&lt;br /&gt;
rsync -av /var/db/mysql/my_wiki/ 69.55.230.18:/var/lib/mysql/my_wiki/&lt;br /&gt;
rsync -av /var/db/mysql/ib* 69.55.230.18:/var/lib/mysql/&lt;br /&gt;
# restart both servers&lt;br /&gt;
/usr/local/etc/rc.d/mysql-server start&lt;br /&gt;
ssh 69.55.230.18 &amp;quot;/etc/init.d/mysql start&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=Infrastructure_Machines&amp;diff=2212</id>
		<title>Infrastructure Machines</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=Infrastructure_Machines&amp;diff=2212"/>
		<updated>2020-03-17T18:24:23Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* virt11 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= jails =&lt;br /&gt;
&lt;br /&gt;
== jail1 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, SHUTDOWN&lt;br /&gt;
* OS: FreeBSD 6.2 i386&lt;br /&gt;
* Networking: Priv IP: 10.1.4.101 (PCI nic), Pub IP: 69.55.230.107 (onboard)&lt;br /&gt;
* Hardware: Dell 2450. 4 SCSI SCA drive bays (2 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 74 GB (4 x 74GB) RAID1 arrays running on an Adaptec-based, Dell-branded (perc) RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Disk accounting: gvinum&lt;br /&gt;
&lt;br /&gt;
== jail2 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 6-16&lt;br /&gt;
* OS: FreeBSD 7.2 amd64&lt;br /&gt;
* Networking: Priv IP: 10.1.4.102, Pub IP: 69.55.228.53 (2 onboard nics)&lt;br /&gt;
* Hardware: Dell 2950. 6 SATA/SAS drive bays (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: one 146 GB (2 x 146GB) RAID1 array, two 300 GB (4 x 300GB) RAID1 arrays running on an LSI-based, Dell-branded (PERC 6/i) RAID card.&lt;br /&gt;
* Remote management: [[DRAC/RMM|DRAC]] @ 10.1.4.232 &lt;br /&gt;
* Disk accounting: md&lt;br /&gt;
&lt;br /&gt;
== jail3 ==&lt;br /&gt;
* Location: I2b  SHUTDOWN&lt;br /&gt;
* OS: FreeBSD 8.3 amd64&lt;br /&gt;
* Networking: Priv IP: 10.1.2.103, Pub IP: 69.55.229.7 (2 onboard nics)&lt;br /&gt;
* Hardware: Supermicro (custom build). 6 SATA drive bays (2 columns of 3, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 300 GB (2 x 300GB) RAID1 array running on a 3ware 8006-2LP RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Disk accounting: md&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
* &#039;&#039;&#039;We should not add users to this server since it is at I2B&#039;&#039;&#039;&lt;br /&gt;
* must be ssh&#039;d to from nat2&lt;br /&gt;
* is a super jail for customer col01737&lt;br /&gt;
&lt;br /&gt;
== jail4 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 6-17&lt;br /&gt;
* OS: FreeBSD 9.1 x86_64&lt;br /&gt;
* Networking: Priv IP: 10.1.4.104, Pub IP: 69.55.228.104 (2 onboard nics) &lt;br /&gt;
* Hardware: Dell 2850. 6 x 300GB SCSI drives (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply.&lt;br /&gt;
* CPU:  2 x Intel(R) Xeon(TM) CPU 2.80GHz (8 virtual CPUs)&lt;br /&gt;
* RAM:  16 GB ( 4 x 4GB Reg ECC )&lt;br /&gt;
* Drives: one 1.4 TB RAID 5 array (6 x 300GB SCSI) on an LSI-based, Dell-branded (PERC 4e) megarc RAID card.&lt;br /&gt;
* Remote management: None &lt;br /&gt;
* Disk accounting: md&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
The only FreeBSD 9.1 jail host.&lt;br /&gt;
Not upgraded to FreeBSD 9.2 or 9.3 because too many libraries were modified (the upgrade would require customers to rebuild their apps).&lt;br /&gt;
&lt;br /&gt;
== jail5 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-6&lt;br /&gt;
* OS: FreeBSD 10.1 x86_64&lt;br /&gt;
* Networking: Priv IP: 10.1.4.105, Pub IP: 69.55.230.105 (2 onboard nics) &lt;br /&gt;
* Hardware: Supermicro JC-14004 - Intel S1200BTL motherboard - 6 SATA/SAS drive bays (2 columns of 3), Dual power supply.&lt;br /&gt;
* CPU:  1 x Intel(R) Xeon(TM) E3-1230 V2 CPU 3.30 GHz (8 virtual CPUs)&lt;br /&gt;
* RAM:  32 GB ( 4 x 8GB ECC )&lt;br /&gt;
* Drives: 1x80 GB SATA SSD on motherboard + one 2.6 TB RAID 5 array 4x1 TB + 3ware 9650 RAID card.&lt;br /&gt;
* Remote management: Intel RMM 4 - 10.1.4.235&lt;br /&gt;
* Disk accounting: md&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
The only FreeBSD 10.1 jail host; used for bhyve virtuals.&lt;br /&gt;
&lt;br /&gt;
Use ~+Ctrl-D to disconnect from console (vm attach colXXXXX).&lt;br /&gt;
&lt;br /&gt;
== jail6 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 6-16&lt;br /&gt;
* OS: FreeBSD 10.3 x86_64&lt;br /&gt;
* Networking: Priv IP: 10.1.4.106, Pub IP: 69.55.230.106 (2 onboard nics) &lt;br /&gt;
* Hardware: Supermicro JC-14004 - Intel S1200BTL motherboard - 6 SATA/SAS drive bays (2 columns of 3), Dual power supply.&lt;br /&gt;
* CPU:  1 x Intel(R) Xeon(TM) E3-1230 V2 CPU 3.30 GHz (8 virtual CPUs)&lt;br /&gt;
* RAM:  32 GB ( 4 x 8GB ECC )&lt;br /&gt;
* Drives: one 2.7 TB ZFS RAID 6 array 5x1 TB &lt;br /&gt;
* Remote management: Intel RMM 4 - 10.1.4.236&lt;br /&gt;
* Disk accounting: zfs&lt;br /&gt;
&lt;br /&gt;
== jail7 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-5  SCHEDULED SHUTDOWN 9/30/19&lt;br /&gt;
* OS: FreeBSD 6.3 i386&lt;br /&gt;
* Networking: Priv IP: 10.1.4.107, Pub IP: 69.55.230.108 (2 onboard nics)&lt;br /&gt;
* Hardware: Dell 2950. 6 SATA/SAS drive bays (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 146 GB (4 x 146GB) RAID1 arrays, one 74 GB (2 x 74GB) RAID1 array running on an LSI-based, Dell-branded (PERC 6/i) RAID card.&lt;br /&gt;
* Remote management: [[DRAC/RMM|DRAC]] @ 10.1.4.237&lt;br /&gt;
* Disk accounting: gvinum&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
Do not run an array verify while the OS/jails are running; it will crash the machine.&lt;br /&gt;
&lt;br /&gt;
== jail8 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-6&lt;br /&gt;
* OS: FreeBSD 8.0 amd64&lt;br /&gt;
* Networking: Priv IP: 10.1.4.108, Pub IP: 69.55.234.2 (2 onboard nics)&lt;br /&gt;
* Hardware: Dell 2950. 6 SATA/SAS drive bays (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: one 146 GB (2 x 146GB) RAID1 array, one 300 GB (2 x 300GB) RAID1 array, one 400 GB (2 x 400GB) RAID1 array, running on an LSI-based, Dell-branded (PERC 6/i) RAID card.&lt;br /&gt;
* Remote management: [[DRAC/RMM|DRAC]] @ 10.1.4.238&lt;br /&gt;
* Disk accounting: md&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== jail9 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-6&lt;br /&gt;
* OS: FreeBSD 8.2 amd64&lt;br /&gt;
* Networking: Priv IP: 10.1.4.109, Pub IP: 69.55.232.36 (2 onboard nics)&lt;br /&gt;
* Hardware: Dell 2950. 6 SATA/SAS drive bays (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: one 146 GB (2 x 146GB) RAID1 array, one 400 GB (2 x 300GB) RAID1 array running on an LSI-based, Dell-branded (PERC 5/i) RAID card.&lt;br /&gt;
* Remote management: [[DRAC/RMM|DRAC]] @ 10.1.4.239&lt;br /&gt;
* Disk accounting: md&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== jail11 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-7&lt;br /&gt;
* OS: FreeBSD 4.7 i386&lt;br /&gt;
* Networking: Priv IP: 10.1.4.111 (PCI nic), Pub IP: 69.55.236.92 (onboard)&lt;br /&gt;
* Hardware: Dell 2450. 4 SCSI SCA drive bays (2 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 74 GB (4 x 74GB) RAID1 arrays running on an Adaptec-based, Dell-branded (perc) RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Disk accounting: vinum&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== mx1 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, SHUTDOWN AND SCRAPPED&lt;br /&gt;
* OS: FreeBSD 4.11 i386&lt;br /&gt;
* Networking: Priv IP: 10.1.4.201 (PCI nic), Pub IP: 69.55.237.3 (onboard)&lt;br /&gt;
* Hardware: Dell 2450. 4 SCSI SCA drive bays (2 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: one 36 GB (2 x 36GB) RAID1 array, one 74 GB (2 x 74GB) RAID1 array running on an Adaptec-based, Dell-branded (perc) RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Disk accounting: vinum&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
* is our (old) backup mail/dns vps service host&lt;br /&gt;
&lt;br /&gt;
== mx2 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, SHUTDOWN AND SCRAPPED&lt;br /&gt;
* OS: FreeBSD 7.1 i386&lt;br /&gt;
* Networking: Priv IP: 10.1.4.202 (PCI nic), Pub IP: 69.55.237.90 (onboard)&lt;br /&gt;
* Hardware: Dell 2450. 4 SCSI SCA drive bays (2 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 74 GB (4 x 74GB) RAID1 arrays running on an Adaptec-based, Dell-branded (perc) RAID card.&lt;br /&gt;
* Disk accounting: gvinum&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
* is our latest backup mail/dns vps service host&lt;br /&gt;
&lt;br /&gt;
== jail17 ==&lt;br /&gt;
* Location: castle, cab 3-7&lt;br /&gt;
* OS: FreeBSD 4.10 i386&lt;br /&gt;
* Networking: Priv IP: 10.1.4.117 (PCI nic), Pub IP: 69.55.228.2 (onboard nics)&lt;br /&gt;
* Hardware: Supermicro (custom build). 6 SCA SCSI drive bays (2 columns of 3, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 74 GB (4 x 74GB) RAID1 arrays, one 146 GB (2 x 146GB) RAID1 array, running on an Adaptec 2120S  RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Disk accounting: vinum&lt;br /&gt;
* Host of devweb.johncompanies.com and www.utopian.com/mail.utopian.com&lt;br /&gt;
&lt;br /&gt;
== jail18 ==&lt;br /&gt;
* Location: castle, cab 3-5  SCHEDULED SHUTDOWN 9/30/19&lt;br /&gt;
* OS: FreeBSD 4.10 i386&lt;br /&gt;
* Networking: Priv IP: 10.1.4.118 (PCI nic), Pub IP: 69.55.228.2 (onboard nics)&lt;br /&gt;
* Hardware: Supermicro (custom build). 6 SCA SCSI drive bays (2 columns of 3, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 74 GB (4 x 74GB) RAID1 arrays, one 146 GB (2 x 146GB) RAID1 array, running on an Adaptec 2120S  RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Disk accounting: vinum&lt;br /&gt;
* Host of ns2c.johncompanies.com (now on ns2c.johncompanies.com on ganeti)&lt;br /&gt;
&lt;br /&gt;
== jail19 ==&lt;br /&gt;
* Location: castle, cab 3-5  SCHEDULED SHUTDOWN 9/30/19&lt;br /&gt;
* OS: FreeBSD 6.1 i386&lt;br /&gt;
* Networking: Priv IP: 10.1.4.119 (PCI nic), Pub IP: 69.55.228.200 (onboard nics)&lt;br /&gt;
* Hardware: Supermicro (custom build). 6 SCA SCSI drive bays (2 columns of 3, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: one 74 GB (2 x 74GB) RAID1 array, one 146 GB (2 x 146GB) RAID1 array, running on an Adaptec 2120S  RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Disk accounting: gvinum&lt;br /&gt;
&lt;br /&gt;
= virts =&lt;br /&gt;
&lt;br /&gt;
== quar1 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, SHUTDOWN AND SCRAPPED&lt;br /&gt;
* OS: RedHat 7.3 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.151 (PCI nic), Pub IP: 69.55.227.2 (onboard nic)&lt;br /&gt;
* Hardware: Dell 2450. 4 SCSI SCA drive bays (2 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: one 36 GB (2 x 36GB) RAID1 array, one 74 GB (2 x 74GB) RAID1 array, running on an Adaptec-based, Dell-branded (perc) RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Virtuozzo version: 2.6.1&lt;br /&gt;
* VZ license: hwid=23C0.C0E1.6FDD.08BA.8971.8E1C.EBD5.1EDC serial=0DE6.903E.E239.E23F.470C.4369.4104.A5A4&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
* used to be the home of customers whose VEs would run out of control or behave badly&lt;br /&gt;
* has a maximum of 10 VEs allowed to run&lt;br /&gt;
&lt;br /&gt;
== virt9 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-7&lt;br /&gt;
* OS: RedHat 7.3 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.59 (PCI nic), Pub IP: 69.55.226.161 (onboard nic)&lt;br /&gt;
* Hardware: Dell 2450. 4 SCSI SCA drive bays (2 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 74 GB (2 x 74GB) RAID1 arrays running on an Adaptec-based, Dell-branded (perc) RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Virtuozzo version: 2.6.1&lt;br /&gt;
* VZ license: hwid=BC15.B4D6.0D25.A5FE.F3BA.D518.E351.AE3F serial=F6AD.B6B4.5650.8869.C97C.73EE.AF65.FA8B&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== virt11 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-6&lt;br /&gt;
* OS: CentOS 5.4 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.61, Pub IP: 69.55.238.3, 2 onboard nics&lt;br /&gt;
* Hardware: Dell 2950. 6 SATA/SAS drive bays (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: one 146 GB (2 x 146GB) RAID1 array, one 400 GB (2 x 400GB) RAID1 array, one 146 GB (2 x 146GB) RAID1 array, running on an LSI-based, Dell-branded (perc 6/i) RAID card.&lt;br /&gt;
* Remote management: [[DRAC/RMM|DRAC]] @ 10.1.4.211&lt;br /&gt;
* Virtuozzo version: 4.0.0&lt;br /&gt;
* VZ license: hwid=&amp;quot;029D.A187.78E1.480F.49E3.E20A.7389.7F79&amp;quot; serial=&amp;quot;163C.F3E2.195F.96B5.2D38.8937.9600.4A05&amp;quot;&lt;br /&gt;
  key_number=&amp;quot;VZ.00172378.0006&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== virt12 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-7&lt;br /&gt;
* OS: CentOS 5.2 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.62, Pub IP: 69.55.227.70, 2 onboard nics&lt;br /&gt;
* Hardware: Dell 2950. 6 SATA/SAS drive bays (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 300 GB (2 x 300GB) RAID1 arrays one 400 GB (2 x 400GB) RAID1 array, running on an LSI-based, Dell-branded (perc 6/i) RAID card.&lt;br /&gt;
* Remote management: [[DRAC/RMM|DRAC]] @ 10.1.4.212&lt;br /&gt;
* Virtuozzo version: 4.0.0&lt;br /&gt;
* VZ license: hwid=&amp;quot;0C53.A413.E095.B4F4.51BC.D740.6919.A77B&amp;quot; serial=&amp;quot;84E5.9498.3759.E683.E24B.2514.CA72.DC31&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== virt13 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 6-17&lt;br /&gt;
* Switch port:  P13-&lt;br /&gt;
* OS: CentOS 6.2 x86_64&lt;br /&gt;
* Networking: Priv IP: 10.1.4.63, Pub IP: 69.55.226.2, 2 onboard nics&lt;br /&gt;
* Hardware: Dell 2950. 6 SATA/SAS drive bays (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* CPU:    2 x Intel(R) Xeon(R) CPU E5420  @ 2.50GHz  (8 virtual cores)&lt;br /&gt;
* RAM:    32 GB (8 x 4GB DDR2 FB-DIMM ECC 667MHz)&lt;br /&gt;
* Drives: one 146 GB (2 x 146GB) RAID1 array, one 600 GB (2 x 600GB) RAID1 array, one 146 GB (2 x 146GB) RAID1 array, running on an LSI-based, Dell-branded (perc 6/i) RAID card.&lt;br /&gt;
* Remote management: [[DRAC/RMM|DRAC]] @ 10.1.4.213&lt;br /&gt;
* Virtuozzo version: 4.7.0&lt;br /&gt;
* VZ license: hwid=&amp;quot;7D07.93BE.0B1F.7D2B.B039.4B5B.48B6.453B&amp;quot; serial=&amp;quot;60A4.A94C.44BB.DCD6.8D03.1778.605B.10FE&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
* home to our latest/current signups&lt;br /&gt;
* currently the only 64bit vz host&lt;br /&gt;
&lt;br /&gt;
== virt14 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 6-16&lt;br /&gt;
* Switch Port: p13-&lt;br /&gt;
* OS: CentOS 6.4 x86_64&lt;br /&gt;
* Networking: Priv IP: 10.1.4.64 Pub IP: 69.55.225.14 2 onboard nics&lt;br /&gt;
* Hardware: Dell 2950. 6 SATA/SAS drive bays (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply.&lt;br /&gt;
* CPU: 2 x Xeon 5140 Dual Core @ 2.33GHz (4 virtual CPUs)&lt;br /&gt;
* RAM: 32 GB  (8 x 4GB Reg ECC)&lt;br /&gt;
* Drives: one 146 GB (2 x 146 GB SAS) RAID1 array, and one 1TB RAID1 array (2 x 1 TB SATA), running on an LSI-based, Dell-branded (perc 5/i) RAID card.&lt;br /&gt;
* Remote management: [[DRAC/RMM|DRAC]] @ 10.1.4.214&lt;br /&gt;
* Virtuozzo version: 4.7.0&lt;br /&gt;
* VZ license: hwid=&amp;quot;&amp;quot; serial=&amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Notes ===&lt;br /&gt;
* our latest virt &lt;br /&gt;
* Temporary server to offload virt13 until we can get a cloud platform going.&lt;br /&gt;
* virt 13 and 14 currently the only 64bit vz hosts&lt;br /&gt;
&lt;br /&gt;
== virt15 ==&lt;br /&gt;
&lt;br /&gt;
* Location: SHUTDOWN&lt;br /&gt;
* OS: RedHat 9 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.65, Pub IP: 69.55.232.160 (2 onboard nics)&lt;br /&gt;
* Hardware: Supermicro (custom build). 6 SCSI SCA drive bays (2 columns of 3, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: one 74 GB (2 x 74GB) RAID1 array, two 146 GB (2 x 146GB) RAID1 arrays, running on an LSI MegaRAID SCSI 320-1 RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Virtuozzo version: 2.6.2&lt;br /&gt;
* VZ license: hwid=A90F.6F48.E723.D8BA.3025.184A.5B73.D11E serial=E94B.5164.C1E6.A67F.67D1.7D96.0B6C.5524&lt;br /&gt;
&lt;br /&gt;
== virt16 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-7&lt;br /&gt;
* OS: Fedora Core 4 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.66, Pub IP: 69.55.232.2 (2 onboard nics)&lt;br /&gt;
* Hardware: Supermicro (custom build). 6 SCSI SCA drive bays (2 columns of 3, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: one 74 GB (2 x 74GB) RAID1 array, one 146 GB (2 x 146GB) RAID1 array, running on an LSI MegaRAID SCSI 320-1 RAID card.&lt;br /&gt;
* Remote management: none&lt;br /&gt;
* Virtuozzo version: 3.0.0&lt;br /&gt;
* VZ license: hwid=DEFA.A325.7230.BBC8.9715.8B52.3FD7.27BE serial=66C0.41EA.3FBB.11D3.9CC6.55C7.09AE.14AB&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== virt17 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-6&lt;br /&gt;
* OS: CentOS 4.4 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.67, Pub IP: 69.55.232.162, 2 onboard nics&lt;br /&gt;
* Hardware: Dell 2950. 6 SATA/SAS drive bays (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 146 GB (2 x 146GB) RAID1 arrays running on an LSI-based, Dell-branded (perc 5/i) RAID card.&lt;br /&gt;
* Remote management: [[DRAC/RMM|DRAC]] @ 10.1.4.217&lt;br /&gt;
* Virtuozzo version: 3.0.0&lt;br /&gt;
* VZ license: hwid=2E14.AED9.70B8.C26E.D99F.B0D3.BCD2.229C serial=2A11.DAD0.61DB.E889.8DF4.9AF7.CF82.3C37&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== virt19 ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-6&lt;br /&gt;
* OS: CentOS 5.2 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.69, Pub IP: 69.55.236.2, 2 onboard nics&lt;br /&gt;
* Hardware: Dell 2950. 6 SATA/SAS drive bays (3 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: one 146 GB (2 x 146GB) RAID1 array, one 300 GB (2 x 300GB) RAID1 array, running on an LSI-based, Dell-branded (perc 5/i) RAID card.&lt;br /&gt;
* Remote management: [[DRAC/RMM|DRAC]] @ 10.1.4.219&lt;br /&gt;
* Virtuozzo version: 3.0.0&lt;br /&gt;
* VZ license: hwid=3968.13F7.B2AC.8952.8E19.13A9.6EF5.5822 serial=061D.84CD.CCE5.B213.15B5.C061.D6A7.B034&lt;br /&gt;
&lt;br /&gt;
= mail =&lt;br /&gt;
== Summary ==&lt;br /&gt;
This machine (mail) is the swiss army knife of the company, playing host to many services and functions. &lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-7&lt;br /&gt;
* OS: FreeBSD 4.10 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.5, Pub IPs: 69.55.230.2, 69.55.225.225 (ns1c jail), 69.55.230.9. 1 onboard and 1 PCI&lt;br /&gt;
* Hardware: Dell 2450. 4 SCSI SCA drive bays (2 columns of 2, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: two 36 GB (2 x 36GB) RAID1 arrays running on an Adaptec-based, Dell-branded (perc) RAID card.&lt;br /&gt;
&lt;br /&gt;
== Services Provided ==&lt;br /&gt;
* mail&lt;br /&gt;
* web&lt;br /&gt;
* mysql&lt;br /&gt;
* bigbrother server/pager&lt;br /&gt;
* snmp&lt;br /&gt;
* named in a jail (ns1c)&lt;br /&gt;
&lt;br /&gt;
== email ==&lt;br /&gt;
This server hosts mail for johncompanies.com (mail.johncompanies.com). Sendmail 8.13.6/8.13.6 is listening on 69.55.230.2 port 25 for incoming mail. Relaying is allowed for the domains listed in /etc/mail/relay-domains.&lt;br /&gt;
Other addresses (aliases) are defined in /etc/mail/aliases.&lt;br /&gt;
&lt;br /&gt;
The following active users have mail hosted on this server:&lt;br /&gt;
* dave&lt;br /&gt;
* linux&lt;br /&gt;
* support&lt;br /&gt;
* payments&lt;br /&gt;
* sales&lt;br /&gt;
* tech1&lt;br /&gt;
* info&lt;br /&gt;
&lt;br /&gt;
Traditionally, mail is checked via shell apps (pine). qpopper (pop3s) is also running to allow mail downloading; however, popping a mailbox causes an opened INBOX in pine to lock read-only. For this reason, we tee incoming mail for support and linux to tech1.&lt;br /&gt;
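The tee itself can be done with alias lists. A hypothetical sketch of what such entries might look like in /etc/mail/aliases (the actual entries on mail may differ):

```
# deliver to the local mailbox (backslash suppresses further aliasing)
# and mirror a copy to tech1 -- hypothetical example entries
support:  \support, tech1
linux:    \linux, tech1
```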
&lt;br /&gt;
Procmail rules are set up to filter spam and send text messages. They are enabled for info, support, linux, tech1, and dave, and can be found in ~/Procmail/, for example:&lt;br /&gt;
&amp;lt;pre&amp;gt;# more ~support/Procmail/rc.emergency&lt;br /&gt;
:0c # use c only if you want to forward a copy and file the original later&lt;br /&gt;
* ^Subject:.*\&amp;lt;emergency\&amp;gt;&lt;br /&gt;
* ! ^Subject:\&amp;lt;re\&amp;gt;&lt;br /&gt;
  {&lt;br /&gt;
   :0h&lt;br /&gt;
   FROMANDSUBJECT=|formail -XFrom: -XSubject:&lt;br /&gt;
&lt;br /&gt;
   :0fwh&lt;br /&gt;
   | /usr/local/bin/formail -I&amp;quot;Subject: &amp;quot; -I&amp;quot;To: pager@johncompanies.com&amp;quot; ; echo $FROMANDSUBJECT ; echo&lt;br /&gt;
&lt;br /&gt;
   :0&lt;br /&gt;
  ! -t&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
control: &amp;lt;tt&amp;gt;cd /etc/mail; make stop&amp;lt;/tt&amp;gt; (stop), &amp;lt;tt&amp;gt;cd /etc/mail; make start&amp;lt;/tt&amp;gt; (start)&lt;br /&gt;
&lt;br /&gt;
The following aliases are also in place:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;debian:         linux&lt;br /&gt;
jobs:   info&lt;br /&gt;
careers:        info&lt;br /&gt;
#reboot:         6128102202@txt.att.net&lt;br /&gt;
#reboot:         8582298897@vtext.com&lt;br /&gt;
reboot:         pager&lt;br /&gt;
#pager: 8582298897@vtext.com&lt;br /&gt;
pager:  4158718324@txt.att.net&lt;br /&gt;
tech1on:  &amp;quot;| /usr/local/sbin/tech1on.sh&amp;quot;&lt;br /&gt;
tech1off:  &amp;quot;| /usr/local/sbin/tech1off.sh&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To change them, edit &amp;lt;tt&amp;gt;/etc/aliases&amp;lt;/tt&amp;gt; and then run &amp;lt;tt&amp;gt;newaliases&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note on tech1: this address was set up as a read-only mirror of all email coming into support and linux, so that support mail can easily be checked via a POP client (popping email locks out the user in pine, so checking support/linux directly via POP was not an option). When responding to email that came into tech1, take care to send it as/under an address other than tech1: tech1 is not monitored by support staff as closely as support/linux, and the tech on call may not be checking it. Lastly, because of the copying, you will sometimes notice certain automated email/notices received twice in support; this is a side effect of the tech1 mirror.&lt;br /&gt;
&lt;br /&gt;
To enable it (on mail, run):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;~support/tech1on.sh&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To disable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;~support/tech1off.sh&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Or via email:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;tech1on@johncompanies.com&lt;br /&gt;
tech1off@johncompanies.com&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== IP Blocking ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
01000 deny ip from 188.92.72.5 to any&lt;br /&gt;
01003 deny ip from any to 122.49.31.50&lt;br /&gt;
01004 deny ip from 122.49.31.50 to any&lt;br /&gt;
01014 deny ip from 74.208.225.225 to any&lt;br /&gt;
01015 deny ip from any to 216.243.118.35&lt;br /&gt;
01016 deny ip from 216.243.118.35 to any&lt;br /&gt;
01017 deny ip from any to 216.243.118.36&lt;br /&gt;
01018 deny ip from 216.243.118.36 to any&lt;br /&gt;
01020 deny ip from 112.215.0.0/18 to any   2014-08-13 Blocked PT Excelcomindo Pratama (Indonesia) for fraudulent credit card attempts &lt;br /&gt;
01020 deny ip from 112.215.64.0/20 to any  2014-08-13 Blocked PT Excelcomindo Pratama (Indonesia) for fraudulent credit card attempts &lt;br /&gt;
01022 deny ip from 120.168.0.0/24 to any   2014-08-13 Blocked Indosat 3G Broadband (Indonesia) for fraudulent credit card attempts&lt;br /&gt;
01022 deny ip from 120.175.213.0/24 to any 2014-08-13 Blocked Indosat 3G Broadband (Indonesia) for fraudulent credit card attempts&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== web == &lt;br /&gt;
&lt;br /&gt;
See [[Management_System_/_Public_Website_/_Signup|Management System / Public Website / Signup]]&lt;br /&gt;
&lt;br /&gt;
== mysql ==&lt;br /&gt;
MySQL 4.1.22 is running on port 3306.&lt;br /&gt;
&lt;br /&gt;
* datadir: &amp;lt;tt&amp;gt;/mnt/data1/db/mysql/&amp;lt;/tt&amp;gt;&lt;br /&gt;
* config: &amp;lt;tt&amp;gt;/etc/my.cnf&amp;lt;/tt&amp;gt;&lt;br /&gt;
* database: &amp;lt;tt&amp;gt;jc&amp;lt;/tt&amp;gt;&lt;br /&gt;
* control: &amp;lt;tt&amp;gt;/usr/local/etc/rc.d/mysql-server.sh stop&amp;lt;/tt&amp;gt; (stop), &amp;lt;tt&amp;gt;/usr/local/etc/rc.d/mysql-server.sh start&amp;lt;/tt&amp;gt; (start)&lt;br /&gt;
&lt;br /&gt;
== bigbrother ==&lt;br /&gt;
There is a client running on mail (which monitors the services running on mail and mail itself), installed under &amp;lt;tt&amp;gt;/usr/home/bb/bbc1.9e-btf&amp;lt;/tt&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
And the big brother pager/server (which displays information gathered from all bb-monitored machines, including mail) is installed under &amp;lt;tt&amp;gt;/usr/home/bb/bbsrc/bb1.9i-btf&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Both are running under the user &amp;lt;tt&amp;gt;bb&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Refer to [[BigBrother]] for more about use.&lt;br /&gt;
&lt;br /&gt;
== DNS (ns1c.johncompanies.com) ==&lt;br /&gt;
ns1c is a jail running on the mail server, whose IP is 69.55.225.225.&lt;br /&gt;
&lt;br /&gt;
It&#039;s running from &amp;lt;tt&amp;gt;/mnt/data1/ns1c-dir&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See [[DNS]] for more details&lt;br /&gt;
&lt;br /&gt;
== Usage and Notes ==&lt;br /&gt;
* backup1 and backup2 exports are always mounted via NFS: &lt;br /&gt;
&amp;lt;pre&amp;gt;backup2:/mnt/data1 on /backup (nfs)&lt;br /&gt;
backup2:/mnt/data2 on /backup2 (nfs)&lt;br /&gt;
backup2:/mnt/data3 on /backup3 (nfs)&lt;br /&gt;
backup2:/mnt/data4 on /backup4 (nfs)&lt;br /&gt;
backup1:/data on /backup1 (nfs)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Cronjobs ==&lt;br /&gt;
 * * * * * /usr/local/www/mgmt/mrtg/mrtg.sh &amp;gt; /dev/null 2&amp;gt;&amp;amp;1&lt;br /&gt;
Gathers up data for our mrtg/load graphs&lt;br /&gt;
&lt;br /&gt;
 */5 * * * * /usr/local/bin/rsync -a root@nat2:/mnt/data1/mrtg/data/ /usr/local/www/mgmt/mrtg/data/&lt;br /&gt;
Gathers up data from i2b servers for our mrtg/load graphs&lt;br /&gt;
&lt;br /&gt;
 40 0 * * * /usr/local/bin/rsync -a root@nat2:&amp;quot;/mnt/data1/mrtg/*.cfg&amp;quot; /usr/local/www/mgmt/mrtg&lt;br /&gt;
Gathers up mrtg configuration (port names) from i2b switches for our mrtg/load graphs&lt;br /&gt;
&lt;br /&gt;
 41 0 * * * for f in `grep -l &amp;quot;mnt\/data1&amp;quot; /usr/local/www/mgmt/mrtg/switch-p*.cfg`; do cat $f | sed s#\/mnt\/data1#\/usr\/local\/www\/mgmt# &amp;gt; $f.new; mv $f.new $f; done&lt;br /&gt;
Gathers up mrtg configuration (port names) from castle switches for our mrtg/load graphs&lt;br /&gt;
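The path rewrite this job performs can be tried safely on a scratch copy first. A minimal sketch (the file name and contents here are made up; the real job operates on /usr/local/www/mgmt/mrtg/switch-p*.cfg):&lt;br /&gt;

```shell
# rewrite /mnt/data1 paths to /usr/local/www/mgmt in any matching .cfg --
# the same substitution the 41 0 * * * job performs on the real files
tmp=$(mktemp -d)
printf 'WorkDir: /mnt/data1/mrtg\n' > "$tmp/switch-p1.cfg"
for f in $(grep -l 'mnt/data1' "$tmp"/switch-p*.cfg); do
    sed 's#/mnt/data1#/usr/local/www/mgmt#' "$f" > "$f.new" && mv "$f.new" "$f"
done
cat "$tmp/switch-p1.cfg"   # prints: WorkDir: /usr/local/www/mgmt/mrtg
rm -r "$tmp"
```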
&lt;br /&gt;
 1 0 1 * * cp /usr/local/www/mgmt/html/top20ip /usr/local/www/mgmt/html/top20ip_last&lt;br /&gt;
 1 0 1 * * cp /usr/local/www/mgmt/html/top20customers /usr/local/www/mgmt/html/top20customers_last&lt;br /&gt;
 2 * * * * /usr/local/www/cronjobs/top20ip.pl &amp;gt; /dev/null 2&amp;gt;&amp;amp;1&lt;br /&gt;
 15 * * * * /usr/local/www/cronjobs/top20customer.pl &amp;gt; /dev/null 2&amp;gt;&amp;amp;1&lt;br /&gt;
 1 0 1 * * rm /usr/local/www/mgmt/html/bandtrack&lt;br /&gt;
Archiving and generation of bandwidth statistics presented in mgmt -&amp;gt; Reference -&amp;gt; Bandwidth&lt;br /&gt;
&lt;br /&gt;
 1 0 * * * /usr/local/etc/rsync.backup&lt;br /&gt;
Nightly backup script&lt;br /&gt;
&lt;br /&gt;
 0 1 * * * /usr/local/www/mgmt/awstats/wwwroot/cgi-bin/awstats.pl -config=jcpub -update&lt;br /&gt;
Public web traffic stats&lt;br /&gt;
&lt;br /&gt;
 15 0 * * * rm /usr/local/www/mgmt/bwgraphs/*.png&lt;br /&gt;
 16 0 * * * rm /usr/local/www/am/bwgraphs/*&lt;br /&gt;
Cleanup for graph-related temp data generated by customers using the bandwidth reports via the AM &lt;br /&gt;
&lt;br /&gt;
 10 0 1 * * /usr/local/www/cronjobs/monthly_bandwidth_report.pl&lt;br /&gt;
Monthly bandwidth overage report&lt;br /&gt;
&lt;br /&gt;
 */3 * * * * /usr/local/www/cronjobs/bbcheck.pl&lt;br /&gt;
Updates mgmt with bb monitoring issues&lt;br /&gt;
&lt;br /&gt;
 5 0 * * * /usr/local/www/cronjobs/shutdownreminder.pl&lt;br /&gt;
Emails customers reminding them of upcoming shutdown date&lt;br /&gt;
&lt;br /&gt;
 7 0 * * * /usr/local/www/cronjobs/invoice_email.pl&lt;br /&gt;
Emails customers who have invoices and are set to auto-email (currently no customer gets these)&lt;br /&gt;
&lt;br /&gt;
 8 */4 * * * /usr/local/www/cronjobs/mysqlrepchk.pl&lt;br /&gt;
Checking that we are properly replicating (mysql) traffic data from bwdb to backup1&lt;br /&gt;
&lt;br /&gt;
 16 0 1 * * /usr/local/www/cronjobs/purge_traffic.pl&lt;br /&gt;
Removes old traffic data from the traffic database (running on backup1)&lt;br /&gt;
&lt;br /&gt;
 */5 * * * * chmod 0700 /usr/local/www/ccard_orders/* &amp;amp;&amp;amp; mv /usr/local/www/ccard_orders/* /usr/local/www/ccard_orders/done&lt;br /&gt;
Secures credit card data: sets orders root-read-only, then moves them into done/ &lt;br /&gt;
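The lock-down-and-archive pattern this cron uses can be sketched with a scratch directory (paths here are made up; the real job operates on /usr/local/www/ccard_orders and its done/ subdirectory):&lt;br /&gt;

```shell
# tighten permissions on a new order file, then move it into done/
tmp=$(mktemp -d)
mkdir "$tmp/done"
touch "$tmp/order1"
chmod 0700 "$tmp/order1" && mv "$tmp/order1" "$tmp/done/"
ls -l "$tmp/done/order1"   # mode is now -rwx------ (root-read-only once owned by root)
rm -r "$tmp"
```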
&lt;br /&gt;
 25 0 * * * /usr/local/www/cronjobs/biller.pl&lt;br /&gt;
Enters service charges in customer billing ledgers&lt;br /&gt;
&lt;br /&gt;
 10 13 * * * /usr/local/www/cronjobs/pfp_batch_gather.pl&lt;br /&gt;
Looks for customers with balance due and active credit card on file, prepares a payflow batch&lt;br /&gt;
&lt;br /&gt;
 10 14 * * * /usr/local/www/cronjobs/pfp_batch_process.pl&lt;br /&gt;
Tries to collect ccard funds for items in payflow batch - communicates with payflow&lt;br /&gt;
&lt;br /&gt;
 15 13 * * * /usr/local/www/cronjobs/pb_batch_gather.pl&lt;br /&gt;
Looks for customers with balance due and active paypal billing agreement on file, prepares a paypal batch&lt;br /&gt;
&lt;br /&gt;
 15 14 * * * /usr/local/www/cronjobs/pb_batch_process.pl&lt;br /&gt;
Tries to collect paypal funds for items in paypal batch - communicates with paypal&lt;br /&gt;
&lt;br /&gt;
 0 7 * * 1 /usr/local/www/cronjobs/email_pmt_reminder.pl&lt;br /&gt;
Emails customers in arrears, reminding them to pay&lt;br /&gt;
&lt;br /&gt;
 0 0 1 * * /usr/bin/mail -s &#039;archive sent mail in pine&#039; support@johncompanies.com &amp;lt; /dev/null&lt;br /&gt;
Reminds us to archive sent mail&lt;br /&gt;
&lt;br /&gt;
 0 3 * * * /usr/local/bin/rsync -a isys.e-monitoring.net:/var/mail /backup2/isys; /usr/local/bin/rsync -a isys.e-monitoring.net:/usr/home /backup2/isys&lt;br /&gt;
Backup data on isys&lt;br /&gt;
&lt;br /&gt;
== Regular maintenance ==&lt;br /&gt;
*[[Routine_Maintenance#Adaptec_Controllers|Check RAID array]]&lt;br /&gt;
&lt;br /&gt;
== Building a new Mail Server ==&lt;br /&gt;
&lt;br /&gt;
=== Installations ===&lt;br /&gt;
&lt;br /&gt;
I used FreeBSD 11.2 &lt;br /&gt;
&lt;br /&gt;
The order is important, especially for the Web Server.&lt;br /&gt;
&lt;br /&gt;
==== Web Server ====&lt;br /&gt;
&lt;br /&gt;
I used FreeBSD 11.2&lt;br /&gt;
  perl 5.26 &lt;br /&gt;
  OpenSSL 1.0.2o-freebsd&lt;br /&gt;
  pcre&lt;br /&gt;
  apache22 &lt;br /&gt;
  mod_perl2&lt;br /&gt;
  PayflowPro&lt;br /&gt;
  mariadb 55 server and client&lt;br /&gt;
&lt;br /&gt;
Installation order is important&lt;br /&gt;
&lt;br /&gt;
install perl 5.26.2 from ports&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /usr/ports/lang/perl5.26/&lt;br /&gt;
make &lt;br /&gt;
[X] PERL_64BITINT  Use 64 bit integers (on i386)&lt;br /&gt;
[X] USE_PERL       Rewrite links in /usr/bin&lt;br /&gt;
(the rest unchecked)&lt;br /&gt;
make install&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
install OpenSSL 1.0.2o-freebsd&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /usr/ports/&lt;br /&gt;
make install&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
install pcre&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /usr/ports/&lt;br /&gt;
make install&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
install Apache22&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /usr/ports/distfiles&lt;br /&gt;
fetch http://archive.apache.org/dist/httpd/httpd-2.2.32.tar.gz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
cd /usr/ports/www/apache22/tmp&lt;br /&gt;
fetch --no-verify-peer http://mirror.nexcess.net/apache//httpd/httpd-2.2.34.tar.gz&lt;br /&gt;
tar xvzf httpd-2.2.34.tar.gz&lt;br /&gt;
./configure --prefix=/usr/local/apache --with-ssl=/usr/local/openssl/ --enable-ssl --enable-so --with-mpm=prefork --enable-threads --enable-mods-shared=&#039;mime alias setenvif dir&#039; --enable-modules=&#039;mime alias setenvif dir&#039; --with-pcre=/usr/local&lt;br /&gt;
make install&lt;br /&gt;
apachectl restart&lt;br /&gt;
&lt;br /&gt;
cd /usr/ports/www/apache22&lt;br /&gt;
echo &amp;quot;DEFAULT_VERSIONS+=apache=2.2&amp;quot; &amp;gt;&amp;gt; /etc/make.conf&lt;br /&gt;
make DISABLE_VULNERABILITIES=yes&lt;br /&gt;
make install&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
install mod_perl2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /usr/ports/www/mod_perl2&lt;br /&gt;
echo &amp;quot;DEFAULT_VERSIONS+=apache=2.2&amp;quot; &amp;gt;&amp;gt; /etc/make.conf&lt;br /&gt;
make DISABLE_VULNERABILITIES=yes&lt;br /&gt;
make install&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
install mariadb&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /usr/ports/databases/mariadb-103-server&lt;br /&gt;
echo &amp;quot;DEFAULT_VERSIONS+=apache=2.2&amp;quot; &amp;gt;&amp;gt; /etc/make.conf&lt;br /&gt;
make DISABLE_VULNERABILITIES=yes&lt;br /&gt;
make install&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Mail Server ====&lt;br /&gt;
&lt;br /&gt;
I used Postfix for email&lt;br /&gt;
&lt;br /&gt;
==== DNS Server (ns1c.johncompanies.com) ====&lt;br /&gt;
&lt;br /&gt;
= ns2c =&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
* Location: castle, on lamphost ganeti cloud&lt;br /&gt;
* OS: FreeBSD 11.2 x86_64&lt;br /&gt;
* Networking: Pub IP: 69.55.230.3   Private access: gnt-instance console ns2c.johncompanies.com&lt;br /&gt;
* Hardware: on ganeti cloud gn6.jcihosting.net  secondary gn1.jcihosting.net&lt;br /&gt;
* CPU:  1&lt;br /&gt;
* RAM: 1 GB&lt;br /&gt;
* Drives: 10 GB&lt;br /&gt;
* Remote management:  gnt-instance console ns2c.johncompanies.com&lt;br /&gt;
&lt;br /&gt;
= nat =&lt;br /&gt;
== Summary ==&lt;br /&gt;
This is the main machine to which we ssh; it runs all our screen sessions. Further, its IP is in a special block which is not routed through the firewall, and is thus somewhat immune to DoS attacks which hobble our firewall. Lastly, it acts as a NAT server for certain/random devices on the private network.&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-7&lt;br /&gt;
* OS: FreeBSD 9.1 i386&lt;br /&gt;
* Networking: Priv IP: 10.1.4.1, Pub IPs: 69.55.233.195, 69.55.233.196, 69.55.233.197, 69.55.233.198, 69.55.233.199. 1 onboard and 1 PCI&lt;br /&gt;
* Hardware: Custom 1U. single power supply. &lt;br /&gt;
* Drives: one 8 GB IDE drive&lt;br /&gt;
&lt;br /&gt;
== Services Provided ==&lt;br /&gt;
* nat&lt;br /&gt;
&lt;br /&gt;
== nat control ==&lt;br /&gt;
All rules are contained in /etc/ipnat.rules and look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cat /etc/ipnat.rules&lt;br /&gt;
# www (was 69.55.230.12)&lt;br /&gt;
# virt19&lt;br /&gt;
#bimap fxp0 10.1.4.209/32 -&amp;gt; 69.55.233.198/32&lt;br /&gt;
# virt18&lt;br /&gt;
#bimap fxp0 10.1.4.208/32 -&amp;gt; 69.55.233.196/32&lt;br /&gt;
# virt13&lt;br /&gt;
#bimap fxp0 10.1.4.213/32 -&amp;gt; 69.55.233.196/32&lt;br /&gt;
# virt12&lt;br /&gt;
#bimap fxp0 10.1.4.212/32 -&amp;gt; 69.55.233.196/32&lt;br /&gt;
# virt17&lt;br /&gt;
bimap fxp0 10.1.4.217/32 -&amp;gt; 69.55.233.196/32&lt;br /&gt;
# virt11&lt;br /&gt;
#bimap fxp0 10.1.4.211/32 -&amp;gt; 69.55.233.196/32&lt;br /&gt;
# ASA&lt;br /&gt;
#bimap fxp0 10.1.4.172/32 -&amp;gt; 69.55.233.196/32&lt;br /&gt;
# P1A&lt;br /&gt;
bimap fxp0 10.1.4.240/32 -&amp;gt; 69.55.233.197/32&lt;br /&gt;
#bimap fxp0 10.1.4.238/32 -&amp;gt; 69.55.233.197/32&lt;br /&gt;
# developer (was 69.55.230.17)&lt;br /&gt;
# jail2&lt;br /&gt;
#bimap fxp0 10.1.4.232/32 -&amp;gt; 69.55.233.198/32&lt;br /&gt;
# jail8&lt;br /&gt;
#bimap fxp0 10.1.4.238/32 -&amp;gt; 69.55.233.198/32&lt;br /&gt;
# jail9&lt;br /&gt;
#bimap fxp0 10.1.4.239/32 -&amp;gt; 69.55.233.198/32&lt;br /&gt;
# POLL&lt;br /&gt;
#BIMAP EM0 10.1.6.134/32 -&amp;gt; 69.55.230.20/32&lt;br /&gt;
# 1U SUN&lt;br /&gt;
#BIMAP EM0 10.1.4.4/32 -&amp;gt; 69.55.227.46/32&lt;br /&gt;
# ??&lt;br /&gt;
#BIMAP EM0 10.1.6.3/32 -&amp;gt; 69.55.230.100/32&lt;br /&gt;
# random machine&lt;br /&gt;
#bimap fxp0 10.1.6.13/32 -&amp;gt; 69.55.233.199/32&lt;br /&gt;
#bimap fxp0 10.1.4.232/32 -&amp;gt; 69.55.233.199/32&lt;br /&gt;
# OFFICE OUTBOUND TRAFFIC&lt;br /&gt;
#map fxp0 10.1.6.0/24 -&amp;gt; 0.0.0.0/32 proxy port ftp ftp/tcp&lt;br /&gt;
#map fxp0 10.1.6.0/24 -&amp;gt; 0.0.0.0/32&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A simple entry looks like:&lt;br /&gt;
 bimap fxp0 10.1.4.240/32 -&amp;gt; 69.55.233.197/32&lt;br /&gt;
This essentially means: make private IP 10.1.4.240 reachable on 69.55.233.197, and allow 10.1.4.240 to communicate with the public internet via 69.55.233.197.&lt;br /&gt;
&lt;br /&gt;
To reload new rule config:&lt;br /&gt;
 ipnat -C -F -f /etc/ipnat.rules&lt;br /&gt;
&lt;br /&gt;
You may want to set up natting, as above, when you need to reach a DRAC card&#039;s web interface, wherein the DRAC card only has a private IP.&lt;br /&gt;
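For example, to reach virt19&#039;s DRAC (10.1.4.219, per its entry above), a hypothetical rule using a spare public IP such as 69.55.233.199 could be appended to /etc/ipnat.rules and then reloaded with the ipnat command above (verify first that the chosen public IP is actually unused):&lt;br /&gt;

```
# virt19 DRAC (hypothetical -- confirm 69.55.233.199 is free before using it)
bimap fxp0 10.1.4.219/32 -> 69.55.233.199/32
```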
&lt;br /&gt;
= nat2 =&lt;br /&gt;
== Summary ==&lt;br /&gt;
This is the main machine to which we ssh at i2b; it runs all our screen sessions there and hosts ns3c (it is to i2b roughly what mail is to castle). Further, its IP is in IP space provided by i2b (66.181.18.1 - 66.181.18.30) which is not routed through the firewall, and is thus somewhat immune to DoS attacks which hobble our firewall. Lastly, it acts as a NAT server for certain/random devices on the private network.&lt;br /&gt;
&lt;br /&gt;
* Location: i2b, cab 6&lt;br /&gt;
* OS: FreeBSD 6.4 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.2.1, Pub IPs: 69.55.229.2, 69.55.229.3, 66.181.18.4, 66.181.18.5, 66.181.18.6, 66.181.18.7, 66.181.18.8, 66.181.18.9, 66.181.18.10, 66.181.18.11, 66.181.18.12, 66.181.18.13, 66.181.18.14. 1 onboard and 1 PCI&lt;br /&gt;
* Hardware: Custom 2U. 6 drive bays, non-hot-swappable. single power supply. &lt;br /&gt;
* Drives: one 150 GB (2 x 150GB) RAID1 array running on a 3ware 8006 RAID card.&lt;br /&gt;
&lt;br /&gt;
== Services Provided ==&lt;br /&gt;
* nat&lt;br /&gt;
* bigbrother&lt;br /&gt;
* ns3c (jail)&lt;br /&gt;
* ntp&lt;br /&gt;
&lt;br /&gt;
== nat config ==&lt;br /&gt;
Here&#039;s what&#039;s currently nat&#039;d on nat2:&lt;br /&gt;
&amp;lt;pre&amp;gt;cat /etc/ipnat.rules&lt;br /&gt;
# sample entry&lt;br /&gt;
#ATS-9&lt;br /&gt;
bimap em0 10.1.2.79/32 -&amp;gt; 66.181.18.14/32&lt;br /&gt;
#ATS-8&lt;br /&gt;
bimap em0 10.1.2.78/32 -&amp;gt; 66.181.18.13/32&lt;br /&gt;
#ATS-7&lt;br /&gt;
bimap em0 10.1.2.77/32 -&amp;gt; 66.181.18.12/32&lt;br /&gt;
#ATS-6&lt;br /&gt;
bimap em0 10.1.2.76/32 -&amp;gt; 66.181.18.6/32&lt;br /&gt;
#ATS-5&lt;br /&gt;
bimap em0 10.1.2.75/32 -&amp;gt; 66.181.18.7/32&lt;br /&gt;
#ATS-4&lt;br /&gt;
bimap em0 10.1.2.74/32 -&amp;gt; 66.181.18.8/32&lt;br /&gt;
#ATS-3&lt;br /&gt;
bimap em0 10.1.2.73/32 -&amp;gt; 66.181.18.9/32&lt;br /&gt;
#ATS-2&lt;br /&gt;
bimap em0 10.1.2.72/32 -&amp;gt; 66.181.18.10/32&lt;br /&gt;
#ATS-1&lt;br /&gt;
bimap em0 10.1.2.71/32 -&amp;gt; 66.181.18.11/32&lt;br /&gt;
#bwdb2&lt;br /&gt;
bimap em0 10.1.2.4/32 -&amp;gt; 66.181.18.5/32&lt;br /&gt;
&lt;br /&gt;
# spare&lt;br /&gt;
&lt;br /&gt;
map em0 10.1.2.0/24 -&amp;gt; 0.0.0.0/32 proxy port ftp ftp/tcp&lt;br /&gt;
&lt;br /&gt;
#bimap fxp0 10.1.6.49/32 -&amp;gt; 10.1.1.2/32&lt;br /&gt;
#bimap fxp0 10.1.4.208/32 -&amp;gt; 69.55.233.196/32&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== build ==&lt;br /&gt;
&lt;br /&gt;
* partition map:&lt;br /&gt;
&amp;lt;pre&amp;gt;/ 512m&lt;br /&gt;
swap 1G&lt;br /&gt;
/var 256m&lt;br /&gt;
/tmp 256m&lt;br /&gt;
/usr 5g&lt;br /&gt;
/mnt/data1 ~&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* edit /etc/make.conf &lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;WITHOUT_X11=yes \&lt;br /&gt;
KERNCONF=nat2 \&lt;br /&gt;
BOOT_COMCONSOLE_SPEED=115200&amp;quot; &amp;gt;&amp;gt; /etc/make.conf&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* add settings to /boot/loader.conf and /boot.config&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;-Dh&amp;quot; &amp;gt;&amp;gt; /boot.config&lt;br /&gt;
&lt;br /&gt;
echo &#039;console=&amp;quot;comconsole,vidconsole&amp;quot; \&lt;br /&gt;
boot_multicons=&amp;quot;YES&amp;quot; \&lt;br /&gt;
boot_serial=&amp;quot;YES&amp;quot; \&lt;br /&gt;
comconsole_speed=&amp;quot;115200&amp;quot;&#039; &amp;gt;&amp;gt; /boot/loader.conf&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* turn off all ttyv&#039;s except 0 and 1 in /etc/ttys&lt;br /&gt;
also turn on ttyd0, change type to vt100:&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/ttys&lt;br /&gt;
&lt;br /&gt;
ttyv2   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv3   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv4   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv5   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv6   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv7   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
# Serial terminals&lt;br /&gt;
# The &#039;dialup&#039; keyword identifies dialin lines to login, fingerd etc.&lt;br /&gt;
ttyd0   &amp;quot;/usr/libexec/getty std.9600&amp;quot;   vt100   on  secure&lt;br /&gt;
&lt;br /&gt;
kill -1 1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
on console server:&lt;br /&gt;
 vi /etc/remote&lt;br /&gt;
(rename the port to jail8, depending on where and which digi it is plugged into)&lt;br /&gt;
test serial console&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* populate hosts&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;69.55.230.10 backup2&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;69.55.230.11 backup1&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;10.1.2.4 bwdb2&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;10.1.2.3 backup3&amp;quot; &amp;gt;&amp;gt; /etc/hosts&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* put key in authorized_keys on backup3&lt;br /&gt;
&amp;lt;pre&amp;gt;cd&lt;br /&gt;
ssh-keygen -t dsa -b 1024&amp;lt;/pre&amp;gt; &lt;br /&gt;
(default location, leave password blank)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cat /root/.ssh/id_dsa.pub | ssh backup3 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
cat /root/.ssh/id_dsa.pub | ssh backup2 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
cat /root/.ssh/id_dsa.pub | ssh backup1 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039;&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Confirm that you can ssh to backup3, backup2 and backup1 without getting a password prompt:&lt;br /&gt;
&lt;br /&gt;
 ssh backup3 hostname&lt;br /&gt;
&lt;br /&gt;
 ssh backup2 hostname&lt;br /&gt;
&lt;br /&gt;
 ssh backup1 hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* edit root&#039;s path and login script:&lt;br /&gt;
 vi /root/.cshrc&lt;br /&gt;
&lt;br /&gt;
Change alias entries (add G):&lt;br /&gt;
&amp;lt;pre&amp;gt;alias la        ls -aG&lt;br /&gt;
alias lf        ls -FAG&lt;br /&gt;
alias ll        ls -lAG&lt;br /&gt;
alias ls        ls -AG&lt;br /&gt;
alias mbm       mb mount&lt;br /&gt;
alias mbu       mb umount&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and alter the prompt, set the following:&lt;br /&gt;
 set prompt = &amp;quot;`/bin/hostname -s` %/# &amp;quot;&lt;br /&gt;
&lt;br /&gt;
* install cvsup&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/ports/net/cvsup-without-gui &lt;br /&gt;
make install clean; rehash; mail -s &#039;cvs installed&#039; dave.boodman@vtext.com &amp;lt; /dev/null&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* get latest sources for this release:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src &lt;br /&gt;
echo &amp;quot;*default host=cvsup4.freebsd.org\&lt;br /&gt;
*default base=/usr\&lt;br /&gt;
*default prefix=/usr\&lt;br /&gt;
*default release=cvs tag=RELENG_6_4\&lt;br /&gt;
*default delete use-rel-suffix\&lt;br /&gt;
*default compress\&lt;br /&gt;
src-all&amp;quot; &amp;gt; sup&lt;br /&gt;
&lt;br /&gt;
cvsup sup ; mail -s &#039;cvs sup done&#039; dave.boodman@vtext.com &amp;lt; /dev/null&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* configure new kernel. &lt;br /&gt;
&lt;br /&gt;
 cd /usr/src/sys/i386/conf &lt;br /&gt;
 scp backup2:/mnt/data4/build/freebsd/nat2-6.4 ./nat2&lt;br /&gt;
&lt;br /&gt;
* build, install kernel and world&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /boot&lt;br /&gt;
&lt;br /&gt;
mv kernel kernel.GENERIC&lt;br /&gt;
cd kernel.GENERIC&lt;br /&gt;
cd /usr/src&lt;br /&gt;
make buildkernel installkernel&lt;br /&gt;
&lt;br /&gt;
make buildworld ; mail -s &#039;buildworld done&#039; support@johncompanies.com &amp;lt; /dev/null&lt;br /&gt;
make installworld &lt;br /&gt;
mergemaster -i&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* populate /etc/rc.conf with IPs and NFS settings&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
hostname=&amp;quot;nat2.johncompanies.com&amp;quot;&lt;br /&gt;
kern_securelevel_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
portmap_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
sendmail_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
usbd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
xntpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
xntpd_flags=&amp;quot;-A -p /var/run/ntpd.pid&amp;quot;&lt;br /&gt;
&lt;br /&gt;
nfs_client_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_reserved_port_only=&amp;quot;YES&amp;quot;&lt;br /&gt;
ifconfig_em0=&amp;quot;inet 10.1.6.50 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em0=&amp;quot;inet 69.55.229.2 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em0_alias0=&amp;quot;inet 69.55.229.229 netmask 255.255.255.255&amp;quot;&lt;br /&gt;
ifconfig_fxp0=&amp;quot;inet 69.55.229.2 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_fxp0_alias0=&amp;quot;inet 69.55.229.3 netmask 255.255.255.255&amp;quot;&lt;br /&gt;
ifconfig_fxp1=&amp;quot;inet 10.1.2.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
defaultrouter=&amp;quot;10.1.6.1&amp;quot;&lt;br /&gt;
#defaultrouter=&amp;quot; 66.181.14.250&amp;quot;&lt;br /&gt;
snmpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
ipnat_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
ipnat_rules=&amp;quot;/etc/ipnat.rules&amp;quot;&lt;br /&gt;
gateway_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
inetd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_flags=&amp;quot;-wW -a 10.1.2.1&amp;quot;&lt;br /&gt;
fsck_y_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
background_fsck=&amp;quot;NO&amp;quot;&lt;br /&gt;
sshd_enable=&amp;quot;YES&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* reboot. Confirm new kernel is loaded&lt;br /&gt;
&lt;br /&gt;
 uname -a&lt;br /&gt;
&lt;br /&gt;
* update ports:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/ports&lt;br /&gt;
echo &amp;quot;*default host=cvsup4.FreeBSD.org\&lt;br /&gt;
*default base=/usr\&lt;br /&gt;
*default prefix=/usr\&lt;br /&gt;
*default release=cvs tag=RELENG_6_4\&lt;br /&gt;
*default delete use-rel-suffix\&lt;br /&gt;
*default compress\&lt;br /&gt;
ports-all tag=.&amp;quot; &amp;gt; sup&lt;br /&gt;
&lt;br /&gt;
cvsup sup; mail -s &#039;cvs sup ports done&#039; dave.boodman@vtext.com &amp;lt; /dev/null&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Install raid mgmt tool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/local/sbin&lt;br /&gt;
fetch http://3ware.com/download/Escalade9690SA-Series/9.5.3/tw_cli-freebsd-x86-9.5.3.tgz&lt;br /&gt;
tar xzf tw_cli-freebsd-x86-9.5.3.tgz&lt;br /&gt;
rm tw_cli-freebsd-x86-9.5.3.tgz&lt;br /&gt;
chmod 0700 tw_cli&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Test:&lt;br /&gt;
 ./tw_cli info c0&lt;br /&gt;
&lt;br /&gt;
* install rsync from ports&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/ports/net/rsync&lt;br /&gt;
make install clean&amp;lt;/pre&amp;gt;&lt;br /&gt;
choose default options&lt;br /&gt;
&lt;br /&gt;
* install perl from ports&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/ports/lang/perl5.8&lt;br /&gt;
make install clean&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* install screen from ports&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/ports/sysutils/screen&lt;br /&gt;
make install clean&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* install bb client&lt;br /&gt;
&amp;lt;pre&amp;gt;adduser&lt;br /&gt;
Username: bb&lt;br /&gt;
Full name: bb&lt;br /&gt;
Uid (Leave empty for default): 1984&lt;br /&gt;
Login group [bb]:&lt;br /&gt;
Login group is bb. Invite bb into other groups? []:&lt;br /&gt;
Login class [default]:&lt;br /&gt;
Shell (sh csh tcsh nologin) [sh]: &lt;br /&gt;
Home directory [/home/bb]:&lt;br /&gt;
Use password-based authentication? [yes]:&lt;br /&gt;
Use an empty password? (yes/no) [no]:&lt;br /&gt;
Use a random password? (yes/no) [no]: yes&lt;br /&gt;
Lock out the account after creation? [no]:&lt;br /&gt;
Username   : bb&lt;br /&gt;
Password   : &amp;lt;random&amp;gt;&lt;br /&gt;
Full Name  : bb&lt;br /&gt;
Uid        : 1984&lt;br /&gt;
Class      :&lt;br /&gt;
Groups     : bb&lt;br /&gt;
Home       : /home/bb&lt;br /&gt;
Shell      : /bin/sh&lt;br /&gt;
Locked     : no&lt;br /&gt;
OK? (yes/no): yes&lt;br /&gt;
&lt;br /&gt;
cd /usr/home/bb&lt;br /&gt;
scp backup2:/mnt/data4/build/bb/bb-freebsd.tar .&lt;br /&gt;
tar xvf bb-freebsd.tar&amp;lt;/pre&amp;gt;&lt;br /&gt;
edit /home/bb/bbc1.9e-btf/etc/bb-hosts with something like:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;69.55.230.2 mail.johncompanies.com # BBPAGER BBNET BBDISPLAY smtp ssh \&lt;br /&gt;
10.1.2.1 nat2.johncompanies.com # ssh&amp;quot; &amp;gt; /home/bb/bbc1.9e-btf/etc/bb-hosts&lt;br /&gt;
vi /home/bb/bbc1.9e-btf/ext/openfiles &lt;br /&gt;
MACHINE=&amp;quot;nat2,johncompanies,com&amp;quot;      # HAS TO BE IN A,B,C FORM&lt;br /&gt;
&lt;br /&gt;
cd /usr/home/bb/bbc1.9e-btf/etc&lt;br /&gt;
./bbchkcfg.sh &lt;br /&gt;
(y to questions)&lt;br /&gt;
./bbchkhosts.sh&lt;br /&gt;
(ignore ssh errors)&lt;br /&gt;
cd ../..&lt;br /&gt;
chown -R bb .&lt;br /&gt;
su bb&lt;br /&gt;
cd&lt;br /&gt;
cd bbc1.9e-btf/src&lt;br /&gt;
make; make install&lt;br /&gt;
cd ..&lt;br /&gt;
&lt;br /&gt;
vi /usr/home/bb/bbc1.9e-btf/bin/bb-cpu.sh&lt;br /&gt;
        $1 $TOPARGS &amp;gt; $BBTMP/TOP.$$&lt;br /&gt;
#        /usr/local/jail/bin/jtop &amp;gt; $BBTMP/TOP.$$&lt;br /&gt;
&lt;br /&gt;
./runbb.sh start&lt;br /&gt;
more BBOUT &lt;br /&gt;
(look for errors)&lt;br /&gt;
exit&lt;br /&gt;
&lt;br /&gt;
echo &#039;su - bb -c &amp;quot;cd /home/bb/bbc1.9e-btf; ./runbb.sh start&amp;quot;&#039; &amp;gt; /usr/local/etc/rc.d/bb.sh&lt;br /&gt;
chmod +x /usr/local/etc/rc.d/bb.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Punch a hole in the firewall to allow it to communicate with bb monitor (probably already exists):&lt;br /&gt;
&lt;br /&gt;
 ipfw add 96 allow ip from 66.181.18.0/27 to 69.55.230.2&lt;br /&gt;
&lt;br /&gt;
* configure bb on mail:&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /usr/home/bb/bbsrc/bb1.9e-btf/etc/bb-hosts&lt;br /&gt;
64.163.14.48 nat2.johncompanies.com # ssh&lt;br /&gt;
&lt;br /&gt;
su bb&lt;br /&gt;
cd&lt;br /&gt;
bbsrc/bb/runbb.sh restart ; exit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* configure ntp&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;server 69.55.230.2&lt;br /&gt;
server 0.pool.ntp.org&lt;br /&gt;
server 1.pool.ntp.org&lt;br /&gt;
server 2.pool.ntp.org&lt;br /&gt;
server 3.pool.ntp.org&lt;br /&gt;
server 66.187.233.4&lt;br /&gt;
server 217.204.76.170&lt;br /&gt;
server 64.112.189.11&lt;br /&gt;
server 66.69.112.130&lt;br /&gt;
server 80.85.129.25&lt;br /&gt;
server 80.237.234.15&lt;br /&gt;
server 130.60.7.44&lt;br /&gt;
server 134.99.176.3&lt;br /&gt;
server 198.144.202.250&lt;br /&gt;
server 202.74.170.194&lt;br /&gt;
server 204.17.42.199&lt;br /&gt;
server 204.87.183.6&lt;br /&gt;
server 213.15.3.1&lt;br /&gt;
server 213.239.178.33&lt;br /&gt;
server 217.114.97.97&lt;br /&gt;
server 69.55.230.2&amp;quot; &amp;gt; /etc/ntp.conf&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/sbin/ntpd -A -p /var/run/ntpd.pid &lt;br /&gt;
sleep 2; ntpq -p&amp;lt;/pre&amp;gt;&lt;br /&gt;
(confirm it&#039;s able to reach our time server)&lt;br /&gt;
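&lt;br /&gt;
A quick way to check the &amp;lt;tt&amp;gt;ntpq -p&amp;lt;/tt&amp;gt; output: ntpq marks the peer it has selected for sync with a leading asterisk. A small helper (the function name is ours, not part of ntp):&lt;br /&gt;

```shell
# ntpq -p marks the currently selected sync peer with a leading "*".
# Returns success if any peer is selected; reads ntpq -p output on stdin.
ntp_synced() {
    grep -q '^\*'
}

# ntpq -p | ntp_synced && echo synced
```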
&lt;br /&gt;
 echo &#039;/usr/sbin/ntpd -c /etc/ntp.conf -p /var/run/ntpd.pid -f /var/db/ntpd.drift&#039; &amp;gt; /usr/local/etc/rc.d/ntp.sh&lt;br /&gt;
 chmod 0700 /usr/local/etc/rc.d/ntp.sh&lt;br /&gt;
&lt;br /&gt;
* fwd and reverse lookups on ns1c&lt;br /&gt;
 vr johncompanies.com&lt;br /&gt;
 (edit the PTR too)&lt;br /&gt;
&lt;br /&gt;
* setup backups, nfs mount&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkdir /backup3&lt;br /&gt;
echo &#039;backup3:/data           /backup3        nfs     rw,bg           0       0&#039; &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
&lt;br /&gt;
echo &#039;#\!/bin/sh\&lt;br /&gt;
backupdir=/data/nat2/current\&lt;br /&gt;
\&lt;br /&gt;
## ENTRY /etc &#039; &amp;gt; /usr/local/etc/backup.config&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
on backup3:&lt;br /&gt;
setup backup dirs:&lt;br /&gt;
 ssh backup3 mkdir -p /data/nat2/current&lt;br /&gt;
&lt;br /&gt;
on backup3, add the system to &lt;br /&gt;
 vi /usr/local/sbin/snapshot_archive&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;scp 69.55.230.2:/usr/local/etc/rsync.backup /usr/local/etc/rsync.backup &lt;br /&gt;
vi /usr/local/etc/rsync.backup &lt;br /&gt;
backup1 &amp;gt; backup3&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 crontab -e&lt;br /&gt;
 1 0 * * * /usr/local/etc/rsync.backup&lt;br /&gt;
&lt;br /&gt;
* edit sshd_config for security&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/ssh/sshd_config&lt;br /&gt;
ListenAddress 66.181.18.1&lt;br /&gt;
ListenAddress 69.55.229.2&lt;br /&gt;
ListenAddress 10.1.2.1&lt;br /&gt;
&lt;br /&gt;
kill -1 `cat /var/run/sshd.pid`&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* raid chk&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cat &amp;gt; /usr/local/sbin/lsiraidchk&lt;br /&gt;
#!/usr/bin/perl&lt;br /&gt;
&lt;br /&gt;
my @out = split &amp;quot;\n&amp;quot;, `megarc -ldInfo -a0 -Lall|grep Status:`;&lt;br /&gt;
&lt;br /&gt;
foreach (@out) {&lt;br /&gt;
    if ($_ =~ /DEGRADED/) { $date = `date`; chomp $date; `echo &amp;quot;$date: RAID ARRAY DEGRADED&amp;quot; &amp;gt;&amp;gt; /var/log/messages`; }&lt;br /&gt;
#print $_;&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* netflow stuff&lt;br /&gt;
add crontab entries&lt;br /&gt;
&amp;lt;pre&amp;gt;crontab -e&lt;br /&gt;
30 3 * * * /usr/local/etc/rsync.backup&lt;br /&gt;
0 0 1 * * /sbin/ipfw zero&lt;br /&gt;
0 0 1 * * /sbin/ipfw del 3 4 5&lt;br /&gt;
59 23 30 * * /sbin/ipfw show &amp;gt; /tmp/ipfw_count&lt;br /&gt;
3 0 30 * * /sbin/ipfw show &amp;gt; /tmp/ipfw_count&lt;br /&gt;
0 3 * * * /etc/makefwrules.pl; /etc/makepiperules.pl; &lt;br /&gt;
*/5 * * * * /usr/local/sbin/lsiraidchk&lt;br /&gt;
&lt;br /&gt;
#10 0 * * * rm /var/spool/clientmqueue/*&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;scp /etc/makefwrules.pl user@64.163.14.48:~&lt;br /&gt;
scp /etc/makepiperules.pl user@64.163.14.48:~&lt;br /&gt;
mv /home/user/makefwrules.pl /etc&lt;br /&gt;
mv /home/user/makepiperules.pl /etc&lt;br /&gt;
touch /etc/firewall.sh&lt;br /&gt;
mkdir /etc/oldrules/&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
other binaries&lt;br /&gt;
	&lt;br /&gt;
&amp;lt;pre&amp;gt;scp /usr/local/bin/rulemaker user@64.163.14.48:~&lt;br /&gt;
mv ~user/rulemaker /usr/local/sbin&lt;br /&gt;
scp ~user/Sendmail.pm user@64.163.14.48:~&lt;br /&gt;
scp ~user/doswatch.pl user@64.163.14.48:~&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* add nat rules&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/ipnat.rules&lt;br /&gt;
# sample entry&lt;br /&gt;
bimap fxp0 10.1.6.70/32 -&amp;gt; 10.1.6.59/32&lt;br /&gt;
#bimap fxp0 10.1.4.208/32 -&amp;gt; 69.55.233.196/32&lt;br /&gt;
&lt;br /&gt;
ipnat -C -f /etc/ipnat.rules&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* shell for user&lt;br /&gt;
&amp;lt;pre&amp;gt;cp /root/.cshrc ~user/&lt;br /&gt;
vi ~user/&amp;lt;/pre&amp;gt;&lt;br /&gt;
change # to $&lt;br /&gt;
&lt;br /&gt;
* mrtg&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/ports/net-mgmt/mrtg&lt;br /&gt;
make install clean&amp;lt;/pre&amp;gt;&lt;br /&gt;
(no FONTCONFIG, v3)&lt;br /&gt;
&lt;br /&gt;
This didn&#039;t work because of a libtool incompatibility, so the files were moved over manually:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;scp /usr/local/bin/cfgmaker user@nat2:/usr/local/bin/cfgmaker&lt;br /&gt;
scp /usr/local/lib/perl5/site_perl/5.6.1/MRTG_lib.pm root@nat2:/usr/local/lib/perl5/site_perl/5.8.9/&lt;br /&gt;
scp /usr/local/lib/perl5/site_perl/5.6.1/SNMP_util.pm root@nat2:/usr/local/lib/perl5/site_perl/5.8.9/&lt;br /&gt;
scp /usr/local/lib/perl5/site_perl/5.6.1/BER.pm root@nat2:/usr/local/lib/perl5/site_perl/5.8.9/&lt;br /&gt;
scp /usr/local/lib/perl5/site_perl/5.6.1/SNMP_Session.pm root@nat2:/usr/local/lib/perl5/site_perl/5.8.9/&lt;br /&gt;
scp /usr/local/bin/mrtg root@nat2:/usr/local/bin/mrtg&lt;br /&gt;
scp /usr/local/lib/perl5/site_perl/5.6.1/locales_mrtg.pm root@nat2:/usr/local/lib/perl5/site_perl/5.8.9/locales_mrtg.pm&lt;br /&gt;
scp /usr/local/bin/rrdtool root@nat2:/usr/local/bin/rrdtool&lt;br /&gt;
scp /usr/local/lib/perl5/site_perl/5.6.1/mach/RRDs.pm root@nat2:/usr/local/lib/perl5/site_perl/5.8.9/mach/RRDs.pm&lt;br /&gt;
rsync -av /usr/local/lib/perl5/site_perl/5.6.1/mach/auto/RRDs/ root@nat2:/usr/local/lib/perl5/site_perl/5.8.9/mach/auto/RRDs/&lt;br /&gt;
scp /usr/lib/libz.so.2 root@nat2:/usr/lib/libz.so.2&lt;br /&gt;
scp /usr/lib/libm.so.2 root@nat2:/usr/lib/libm.so.2&lt;br /&gt;
rsync -av /usr/local/lib/librrd* root@nat2:/usr/local/lib/&lt;br /&gt;
scp /usr/lib/libc.so.4 root@nat2:/usr/lib/libc.so.4&lt;br /&gt;
&lt;br /&gt;
rsync -av /usr/ports/net/rrdtool root@nat2:/usr/ports/net&lt;br /&gt;
cd /usr/ports/net/rrdtool&lt;br /&gt;
make install&lt;br /&gt;
&lt;br /&gt;
mkdir -p /mnt/data1/mrtg/data&lt;br /&gt;
scp /usr/local/www/mgmt/mrtg/template.pl root@nat2:/mnt/data1/mrtg/&lt;br /&gt;
scp /usr/local/www/mgmt/mrtg/host.pl root@nat2:/mnt/data1/mrtg/&lt;br /&gt;
&lt;br /&gt;
cfgmaker --if-template=template.pl --show-op-down --global &amp;quot;options[_]: growright,bits&amp;quot; --global &#039;WorkDir: /mnt/data1/mrtg/data&#039; --global &#039;Interval: 1&#039; --global &#039;LogFormat: rrdtool&#039; --global &#039;PathAdd: /usr/local/bin&#039; --global &#039;LibAdd: /usr/local/lib&#039; --host-template=host.pl jc292401@10.1.2.50 --output=switch-p20.cfg&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; /mnt/data1/mrtg/mrtg.sh&lt;br /&gt;
#!/bin/sh&lt;br /&gt;
/usr/local/bin/mrtg /mnt/data1/mrtg/switch-p20.cfg&lt;br /&gt;
&lt;br /&gt;
chmod 0700 /mnt/data1/mrtg/mrtg.sh&lt;br /&gt;
&lt;br /&gt;
crontab -e&lt;br /&gt;
* * * * * /mnt/data1/mrtg/mrtg.sh 2&amp;gt;&amp;amp;1 &amp;gt; /dev/null&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* snmp firewall block&lt;br /&gt;
&amp;lt;pre&amp;gt;cat &amp;gt; /usr/local/etc/rc.d/boot.sh&lt;br /&gt;
ipfw add 10 allow udp from 69.55.230.2 to any 161&lt;br /&gt;
ipfw add 10 allow udp from 10.1.2.1 to any 161&lt;br /&gt;
ipfw add 11 deny udp from any to any 161&lt;br /&gt;
chmod 0700 /usr/local/etc/rc.d/boot.sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
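&lt;br /&gt;
The &amp;lt;tt&amp;gt;cat&amp;lt;/tt&amp;gt; transcript above relies on typing Ctrl-D before the chmod line. A non-interactive sketch using a here-document, with the same rules and path as above (the target is parameterized here only so it can be exercised safely outside /usr/local/etc):&lt;br /&gt;

```shell
# Write the SNMP firewall rc.d script non-interactively via a quoted heredoc,
# then make it executable, matching the transcript above.
write_snmp_boot_script() {
    target=$1
    cat > "$target" <<'EOF'
ipfw add 10 allow udp from 69.55.230.2 to any 161
ipfw add 10 allow udp from 10.1.2.1 to any 161
ipfw add 11 deny udp from any to any 161
EOF
    chmod 0700 "$target"
}

# On the real box:
#   write_snmp_boot_script /usr/local/etc/rc.d/boot.sh
```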
&lt;br /&gt;
= bwdb =&lt;br /&gt;
== Summary ==&lt;br /&gt;
This machine tracks and stores network traffic (netflow) at castle. It is our means of monitoring customer bandwidth usage.&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-7&lt;br /&gt;
* OS: FreeBSD 4.10 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.203. There are 2 onboard NICs, one of which is the &amp;quot;listener&amp;quot;&lt;br /&gt;
* Hardware: Custom 1U. Single power supply. &lt;br /&gt;
* Drives: one 250 GB RAID1 array (2 x 250 GB drives) running on a Promise IDE RAID card.&lt;br /&gt;
&lt;br /&gt;
== Services Provided ==&lt;br /&gt;
* netflow&lt;br /&gt;
* mysql&lt;br /&gt;
* bigbrother &lt;br /&gt;
* snmp&lt;br /&gt;
&lt;br /&gt;
== netflow ==&lt;br /&gt;
&lt;br /&gt;
The main function of this server is to run netflow on an Ethernet device in promiscuous mode so it can hear everything happening on its switch port (all network traffic is mirrored to that port by the Cisco switch). Every 15 minutes it creates a flow file under &amp;lt;tt&amp;gt;/usr/home/flows/&amp;lt;/tt&amp;gt; (organized by date). Each flow file contains all traffic data for a 15-minute window.&lt;br /&gt;
&lt;br /&gt;
A cronjob moves that flow file (or files if there are multiple due to some delay) &lt;br /&gt;
 1,16,31,46 * * * * /usr/home/flowbin/queue.pl&lt;br /&gt;
&lt;br /&gt;
into a processing queue:&lt;br /&gt;
&amp;lt;tt&amp;gt;/usr/home/working&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then a separate script processes whatever flow files it finds there, inserting the data into the local mysql database:&lt;br /&gt;
 2,17,32,47 * * * * /usr/home/flowbin/processflows.pl&lt;br /&gt;
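&lt;br /&gt;
The two cronjobs form a simple producer/consumer pipeline. A minimal sketch of the queue step, assuming queue.pl&#039;s job is just to move finished flow files into the working directory while skipping the file netflow is still writing (the newest one) -- the real queue.pl may differ:&lt;br /&gt;

```shell
# Sketch: move completed flow files from the flows directory into the
# processing queue, leaving behind the newest file (still being written).
queue_flows() {
    src=$1; dst=$2
    newest=$(ls -t "$src" | head -1)
    for f in "$src"/*; do
        [ "$(basename "$f")" = "$newest" ] && continue   # still open by netflow
        mv "$f" "$dst"/
    done
}

# queue_flows /usr/home/flows /usr/home/working
```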
&lt;br /&gt;
== mysql ==&lt;br /&gt;
&lt;br /&gt;
The database storing all the traffic data is named &amp;lt;tt&amp;gt;traffic&amp;lt;/tt&amp;gt;&lt;br /&gt;
Tables:&lt;br /&gt;
&amp;lt;pre&amp;gt;mysql&amp;gt; show tables;&lt;br /&gt;
+---------------------------+&lt;br /&gt;
| Tables_in_traffic         |&lt;br /&gt;
+---------------------------+&lt;br /&gt;
| dailyIpTotals_69_55_224   |&lt;br /&gt;
| dailyIpTotals_69_55_225   |&lt;br /&gt;
| dailyIpTotals_69_55_226   |&lt;br /&gt;
| dailyIpTotals_69_55_227   |&lt;br /&gt;
| dailyIpTotals_69_55_228   |&lt;br /&gt;
| dailyIpTotals_69_55_229   |&lt;br /&gt;
| dailyIpTotals_69_55_230   |&lt;br /&gt;
| dailyIpTotals_69_55_231   |&lt;br /&gt;
| dailyIpTotals_69_55_232   |&lt;br /&gt;
| dailyIpTotals_69_55_233   |&lt;br /&gt;
| dailyIpTotals_69_55_234   |&lt;br /&gt;
| dailyIpTotals_69_55_235   |&lt;br /&gt;
| dailyIpTotals_69_55_236   |&lt;br /&gt;
| dailyIpTotals_69_55_237   |&lt;br /&gt;
| dailyIpTotals_69_55_238   |&lt;br /&gt;
| dailyIpTotals_69_55_239   |&lt;br /&gt;
| dailyPortTotals_69_55_224 |&lt;br /&gt;
| dailyPortTotals_69_55_225 |&lt;br /&gt;
| dailyPortTotals_69_55_226 |&lt;br /&gt;
| dailyPortTotals_69_55_227 |&lt;br /&gt;
| dailyPortTotals_69_55_228 |&lt;br /&gt;
| dailyPortTotals_69_55_229 |&lt;br /&gt;
| dailyPortTotals_69_55_230 |&lt;br /&gt;
| dailyPortTotals_69_55_231 |&lt;br /&gt;
| dailyPortTotals_69_55_232 |&lt;br /&gt;
| dailyPortTotals_69_55_233 |&lt;br /&gt;
| dailyPortTotals_69_55_234 |&lt;br /&gt;
| dailyPortTotals_69_55_235 |&lt;br /&gt;
| dailyPortTotals_69_55_236 |&lt;br /&gt;
| dailyPortTotals_69_55_237 |&lt;br /&gt;
| dailyPortTotals_69_55_238 |&lt;br /&gt;
| dailyPortTotals_69_55_239 |&lt;br /&gt;
| ipTotals_69_55_224        |&lt;br /&gt;
| ipTotals_69_55_225        |&lt;br /&gt;
| ipTotals_69_55_226        |&lt;br /&gt;
| ipTotals_69_55_227        |&lt;br /&gt;
| ipTotals_69_55_228        |&lt;br /&gt;
| ipTotals_69_55_229        |&lt;br /&gt;
| ipTotals_69_55_230        |&lt;br /&gt;
| ipTotals_69_55_231        |&lt;br /&gt;
| ipTotals_69_55_232        |&lt;br /&gt;
| ipTotals_69_55_233        |&lt;br /&gt;
| ipTotals_69_55_234        |&lt;br /&gt;
| ipTotals_69_55_235        |&lt;br /&gt;
| ipTotals_69_55_236        |&lt;br /&gt;
| ipTotals_69_55_237        |&lt;br /&gt;
| ipTotals_69_55_238        |&lt;br /&gt;
| ipTotals_69_55_239        |&lt;br /&gt;
| portTotals_69_55_224      |&lt;br /&gt;
| portTotals_69_55_225      |&lt;br /&gt;
| portTotals_69_55_226      |&lt;br /&gt;
| portTotals_69_55_227      |&lt;br /&gt;
| portTotals_69_55_228      |&lt;br /&gt;
| portTotals_69_55_229      |&lt;br /&gt;
| portTotals_69_55_230      |&lt;br /&gt;
| portTotals_69_55_231      |&lt;br /&gt;
| portTotals_69_55_232      |&lt;br /&gt;
| portTotals_69_55_233      |&lt;br /&gt;
| portTotals_69_55_234      |&lt;br /&gt;
| portTotals_69_55_235      |&lt;br /&gt;
| portTotals_69_55_236      |&lt;br /&gt;
| portTotals_69_55_237      |&lt;br /&gt;
| portTotals_69_55_238      |&lt;br /&gt;
| portTotals_69_55_239      |&lt;br /&gt;
+---------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, as you can see, we store each class-C block in its own table for efficiency. Further, we store and organize data in 4 ways: &amp;quot;daily&amp;quot; tables and 15-minute granularity tables, and for each of those we track both simple IP traffic and port-specific traffic. The daily tables contain 2 entries (one for each direction) per IP per day. For the current day, the row data is incremented as the day goes on. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mysql&amp;gt; describe dailyIpTotals_69_55_224;&lt;br /&gt;
+-----------+-------------+------+-----+---------+-------+&lt;br /&gt;
| Field     | Type        | Null | Key | Default | Extra |&lt;br /&gt;
+-----------+-------------+------+-----+---------+-------+&lt;br /&gt;
| id        | varchar(23) |      | PRI |         |       |&lt;br /&gt;
| date      | date        | YES  |     | NULL    |       |&lt;br /&gt;
| ip        | varchar(15) | YES  | MUL | NULL    |       |&lt;br /&gt;
| direction | tinyint(1)  | YES  |     | NULL    |       |&lt;br /&gt;
| octets    | bigint(12)  | YES  |     | NULL    |       |&lt;br /&gt;
| packets   | int(11)     | YES  |     | NULL    |       |&lt;br /&gt;
+-----------+-------------+------+-----+---------+-------+&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; select * from dailyIpTotals_69_55_224 limit 1\G&lt;br /&gt;
*************************** 1. row ***************************&lt;br /&gt;
       id: 6955224194-20100917-1&lt;br /&gt;
     date: 2010-09-17&lt;br /&gt;
       ip: 69.55.224.194&lt;br /&gt;
direction: 1&lt;br /&gt;
   octets: 8821&lt;br /&gt;
  packets: 91&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;tt&amp;gt;id&amp;lt;/tt&amp;gt; is a unique identifier (key), &amp;lt;tt&amp;gt;direction&amp;lt;/tt&amp;gt; indicates incoming or outgoing traffic (outbound = 2, inbound = 1), &amp;lt;tt&amp;gt;octets&amp;lt;/tt&amp;gt; is the amount of traffic in kilobytes, and &amp;lt;tt&amp;gt;packets&amp;lt;/tt&amp;gt; is the total number of packets.&lt;br /&gt;
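&lt;br /&gt;
Turning the two per-direction rows for a day into a human-readable total is a one-liner. A sketch, assuming octets are kilobytes as stated above; the mysql invocation in the comment is hypothetical:&lt;br /&gt;

```shell
# Sum a day's in+out octets (kilobytes, per the description above) into MB.
# Input format: "direction octets", one row per direction, e.g. from a
# hypothetical `mysql -N -e 'select direction, octets from ...'` query.
sum_daily_mb() {
    awk '{ total += $2 } END { printf "%.2f MB\n", total / 1024 }'
}

# printf '1 8821\n2 1203\n' | sum_daily_mb
```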
&lt;br /&gt;
The 15-minute table holds similar information, organized in 15-minute increments:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mysql&amp;gt; describe ipTotals_69_55_224;&lt;br /&gt;
+-----------+------------+------+-----+---------+-------+&lt;br /&gt;
| Field     | Type       | Null | Key | Default | Extra |&lt;br /&gt;
+-----------+------------+------+-----+---------+-------+&lt;br /&gt;
| date      | datetime   | YES  |     | NULL    |       |&lt;br /&gt;
| ip        | char(15)   | YES  | MUL | NULL    |       |&lt;br /&gt;
| direction | tinyint(1) | YES  |     | NULL    |       |&lt;br /&gt;
| octets    | bigint(20) | YES  |     | NULL    |       |&lt;br /&gt;
| packets   | int(11)    | YES  |     | NULL    |       |&lt;br /&gt;
+-----------+------------+------+-----+---------+-------+&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; select * from ipTotals_69_55_224 limit 2\G&lt;br /&gt;
*************************** 1. row ***************************&lt;br /&gt;
     date: 2010-01-11 19:30:00&lt;br /&gt;
       ip: 69.55.224.13&lt;br /&gt;
direction: 1&lt;br /&gt;
   octets: 288&lt;br /&gt;
  packets: 6&lt;br /&gt;
*************************** 2. row ***************************&lt;br /&gt;
     date: 2010-01-11 19:30:00&lt;br /&gt;
       ip: 69.55.224.12&lt;br /&gt;
direction: 1&lt;br /&gt;
   octets: 216&lt;br /&gt;
  packets: 4&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So for a given IP there will be 192 rows per day in this table: 4 rows per hour, x 24 hours, x 2 directions. Obviously this table is large, which is why we broke it down into a daily table for quick, easy, daily-summary access.&lt;br /&gt;
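&lt;br /&gt;
The 192-row figure, and how it scales per /24, checks out with shell arithmetic (254 active hosts per /24 is a worst-case assumption of ours; idle IPs produce no rows, so this is an upper bound):&lt;br /&gt;

```shell
# Rows per IP per day in the 15-minute tables:
# 4 samples/hour x 24 hours x 2 directions.
rows_per_ip=$((4 * 24 * 2))
echo "$rows_per_ip"                  # 192

# Upper bound for one /24 over a 30-day month, all 254 hosts active.
echo $((rows_per_ip * 254 * 30))     # 1463040
```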
&lt;br /&gt;
That covers the simple traffic tabulation tables. We also track traffic by port:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mysql&amp;gt; describe dailyPortTotals_69_55_224;&lt;br /&gt;
+-----------+-------------+------+-----+---------+-------+&lt;br /&gt;
| Field     | Type        | Null | Key | Default | Extra |&lt;br /&gt;
+-----------+-------------+------+-----+---------+-------+&lt;br /&gt;
| id        | varchar(28) |      | PRI |         |       |&lt;br /&gt;
| date      | date        | YES  |     | NULL    |       |&lt;br /&gt;
| ip        | varchar(15) | YES  | MUL | NULL    |       |&lt;br /&gt;
| direction | tinyint(1)  | YES  |     | NULL    |       |&lt;br /&gt;
| protocol  | smallint(3) | YES  |     | NULL    |       |&lt;br /&gt;
| port      | int(11)     | YES  |     | NULL    |       |&lt;br /&gt;
| octets    | bigint(11)  | YES  |     | NULL    |       |&lt;br /&gt;
| packets   | int(11)     | YES  |     | NULL    |       |&lt;br /&gt;
+-----------+-------------+------+-----+---------+-------+&lt;br /&gt;
8 rows in set (0.00 sec)&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; select * from dailyPortTotals_69_55_224 limit 1\G&lt;br /&gt;
*************************** 1. row ***************************&lt;br /&gt;
       id: 695522496-20091218-1-6-23&lt;br /&gt;
     date: 2009-12-18&lt;br /&gt;
       ip: 69.55.224.96&lt;br /&gt;
direction: 1&lt;br /&gt;
 protocol: 6&lt;br /&gt;
     port: 23&lt;br /&gt;
   octets: 1796&lt;br /&gt;
  packets: 30&lt;br /&gt;
&lt;br /&gt;
mysql&amp;gt; select * from portTotals_69_55_224 limit 1\G&lt;br /&gt;
*************************** 1. row ***************************&lt;br /&gt;
     date: 2010-09-07 18:45:00&lt;br /&gt;
       ip: 69.55.224.254&lt;br /&gt;
direction: 1&lt;br /&gt;
 protocol: 6&lt;br /&gt;
     port: 99999&lt;br /&gt;
   octets: 144&lt;br /&gt;
  packets: 3&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is largely the same, with two additions: &amp;lt;tt&amp;gt;protocol&amp;lt;/tt&amp;gt; (1=ICMP, 6=TCP, 17=UDP) and &amp;lt;tt&amp;gt;port&amp;lt;/tt&amp;gt;, which we set to 99999 if the traffic is return traffic and the port is above 1024. Obviously the number of rows grows quickly once you add per-port and per-protocol tracking for each IP.&lt;br /&gt;
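&lt;br /&gt;
The 99999 sentinel rule can be sketched as follows (a rough approximation of ours; the actual processflows.pl logic may differ):&lt;br /&gt;

```shell
# Fold ephemeral ports on return traffic into the 99999 sentinel so each
# high port doesn't get its own row. is_return: 1 if return traffic.
fold_port() {
    is_return=$1; port=$2
    if [ "$is_return" = "1" ] && [ "$port" -gt 1024 ]; then
        echo 99999
    else
        echo "$port"
    fi
}
```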
&lt;br /&gt;
== Regular maintenance ==&lt;br /&gt;
*[[Routine_Maintenance#Adaptec_Controllers|Check RAID array]]&lt;br /&gt;
* archive data from database&lt;br /&gt;
 archive_daily.pl 2012 09&lt;br /&gt;
This will archive data for the given year and month from the daily summary tables. Generally we want to have a year of history in the database.&lt;br /&gt;
&lt;br /&gt;
 archive_15min.pl 2012 09&lt;br /&gt;
This will archive data for the given year and month from the 15min-increment tables. Generally, we want to have 6 months of history in the database.&lt;br /&gt;
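&lt;br /&gt;
To avoid computing the cutoff month by hand, a small helper of ours (not part of the archive scripts) that prints the most recent year/month outside the retention window; pure shell arithmetic, so it avoids platform-specific &amp;lt;tt&amp;gt;date&amp;lt;/tt&amp;gt; flags:&lt;br /&gt;

```shell
# Print the "year month" arguments for archive_daily.pl / archive_15min.pl:
# the month that is $keep months before the given current year/month.
archive_target() {
    year=$1; month=$2; keep=$3
    total=$((year * 12 + month - 1 - keep))      # months since year 0
    printf '%d %02d\n' $((total / 12)) $((total % 12 + 1))
}

# e.g. in March 2013, keeping 6 months of 15-min data:
#   archive_15min.pl $(archive_target 2013 3 6)
```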
&lt;br /&gt;
* if space becomes tight, move flow files and exported data to a backup server; they are located in &amp;lt;tt&amp;gt;/usr/home/flowbin/archive&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;/usr/home/exported&amp;lt;/tt&amp;gt;, respectively&lt;br /&gt;
&lt;br /&gt;
== Slaving ==&lt;br /&gt;
&lt;br /&gt;
If we were going to set up traffic database slaving (we don&#039;t do this anymore), perhaps because the bwdb machine gets too busy to handle both traffic queries and netflow, here&#039;s how it&#039;s done:&lt;br /&gt;
&lt;br /&gt;
On the traffic master:&lt;br /&gt;
&lt;br /&gt;
 GRANT REPLICATION SLAVE ON *.* TO &#039;repl&#039;@&#039;10.1.4.8&#039; IDENTIFIED BY &#039;qERUG8wf&#039;;&lt;br /&gt;
&lt;br /&gt;
in my.cnf:&lt;br /&gt;
&amp;lt;pre&amp;gt;bin-log&lt;br /&gt;
server-id=1&lt;br /&gt;
max_binlog_size=500M&lt;br /&gt;
expire_logs_days = 3&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
on slave:&lt;br /&gt;
in my.cnf:&lt;br /&gt;
&amp;lt;pre&amp;gt;server-id       = 2&lt;br /&gt;
master-host     =   10.1.4.203&lt;br /&gt;
master-user     =   repl&lt;br /&gt;
master-password =   qERUG8wf&lt;br /&gt;
master-connect-retry=60&lt;br /&gt;
replicate-wild-do-table=traffic.daily%&lt;br /&gt;
max_relay_log_size=500M&lt;br /&gt;
expire_logs_days = 3&lt;br /&gt;
&lt;br /&gt;
replicate-wild-do-table=traffic.%&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
on master:&lt;br /&gt;
 touch /usr/home/working/.lock&lt;br /&gt;
(make sure processflows is not running)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;FLUSH TABLES WITH READ LOCK;&lt;br /&gt;
cd /usr/home/database/traffic&lt;br /&gt;
tar -czf mysql-traffic-snapshot.tgz ./daily*&lt;br /&gt;
(~1G)&lt;br /&gt;
SHOW MASTER STATUS;&lt;br /&gt;
+-----------------+-----------+--------------+------------------+&lt;br /&gt;
| File            | Position  | Binlog_Do_DB | Binlog_Ignore_DB |&lt;br /&gt;
+-----------------+-----------+--------------+------------------+&lt;br /&gt;
| bwdb-bin.000039 | 154432615 |              |                  |&lt;br /&gt;
+-----------------+-----------+--------------+------------------+&lt;br /&gt;
&lt;br /&gt;
(write down info)&lt;br /&gt;
UNLOCK TABLES;&lt;br /&gt;
scp mysql-traffic-snapshot.tgz 10.1.4.5:/mnt/data1/db/mysql/traffic/&amp;lt;/pre&amp;gt;&lt;br /&gt;
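&lt;br /&gt;
The File/Position pair has to be copied by hand into the CHANGE MASTER statement. A helper of ours that pulls both values out of table-format SHOW MASTER STATUS output (as shown above, e.g. from an interactive session or &amp;lt;tt&amp;gt;mysql -t&amp;lt;/tt&amp;gt;) to avoid transcription errors:&lt;br /&gt;

```shell
# Extract "logfile position" from table-format SHOW MASTER STATUS output
# on stdin; matches the data row by the "-bin." in the binlog file name.
master_coords() {
    awk -F'|' '/-bin\./ { gsub(/ /, "", $2); gsub(/ /, "", $3); print $2, $3 }'
}

# mysql -t -e 'SHOW MASTER STATUS' | master_coords
```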
&lt;br /&gt;
on slave:&lt;br /&gt;
&amp;lt;pre&amp;gt;mkdir /mnt/data1/db/mysql/traffic&lt;br /&gt;
cd /mnt/data1/db/mysql/traffic/&lt;br /&gt;
tar xzvf mysql-traffic-snapshot.tgz&lt;br /&gt;
(restart mysql)&lt;br /&gt;
CHANGE MASTER TO MASTER_HOST=&#039;10.1.4.203&#039;,MASTER_USER=&#039;repl&#039;,MASTER_PASSWORD=&#039;qERUG8wf&#039;,MASTER_LOG_FILE=&#039;bwdb-bin.000059&#039;,MASTER_LOG_POS=482502186;&lt;br /&gt;
START SLAVE;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/home/database/traffic&lt;br /&gt;
scp *&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;optimize table dailyPortTotals_69_55_224;&lt;br /&gt;
optimize table dailyPortTotals_69_55_225;&lt;br /&gt;
optimize table dailyPortTotals_69_55_226;&lt;br /&gt;
optimize table dailyPortTotals_69_55_227;&lt;br /&gt;
optimize table dailyPortTotals_69_55_228;&lt;br /&gt;
optimize table dailyPortTotals_69_55_229;&lt;br /&gt;
optimize table dailyPortTotals_69_55_230;&lt;br /&gt;
optimize table dailyPortTotals_69_55_231;&lt;br /&gt;
optimize table dailyPortTotals_69_55_232;&lt;br /&gt;
optimize table dailyPortTotals_69_55_233;&lt;br /&gt;
optimize table dailyPortTotals_69_55_234;&lt;br /&gt;
optimize table dailyPortTotals_69_55_235;&lt;br /&gt;
optimize table dailyPortTotals_69_55_236;&lt;br /&gt;
optimize table dailyPortTotals_69_55_237;&lt;br /&gt;
optimize table dailyPortTotals_69_55_238;&lt;br /&gt;
optimize table dailyPortTotals_69_55_239;&amp;lt;/pre&amp;gt;&lt;br /&gt;
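&lt;br /&gt;
The sixteen OPTIMIZE statements above (and the matching ones for the other three table families) can be generated instead of typed; the mysql invocation in the comment is how we&#039;d expect it to be piped in, sketched here:&lt;br /&gt;

```shell
# Generate OPTIMIZE TABLE statements for every /24 table in one family
# (the traffic tables cover 69.55.224 through 69.55.239).
gen_optimize() {
    prefix=$1
    i=224
    while [ "$i" -le 239 ]; do
        echo "optimize table ${prefix}_69_55_${i};"
        i=$((i + 1))
    done
}

# gen_optimize dailyPortTotals | mysql traffic
```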
&lt;br /&gt;
== Build ==&lt;br /&gt;
&lt;br /&gt;
=== BIOS Config ===&lt;br /&gt;
disable quiet boot&lt;br /&gt;
&lt;br /&gt;
set to last state after power loss&lt;br /&gt;
&lt;br /&gt;
set date/time to GMT&lt;br /&gt;
&lt;br /&gt;
enable serial console output (baud rate 115200)&lt;br /&gt;
&lt;br /&gt;
=== Install OS ===&lt;br /&gt;
&lt;br /&gt;
Install FreeBSD 8.3 amd64&lt;br /&gt;
&lt;br /&gt;
* partition map:&lt;br /&gt;
&amp;lt;pre&amp;gt;/ 500m&lt;br /&gt;
swap 4096m&lt;br /&gt;
/var 256m&lt;br /&gt;
/tmp 256m&lt;br /&gt;
/usr ~&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* edit /etc/make.conf &lt;br /&gt;
Castle:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;WITHOUT_X11=yes \&lt;br /&gt;
KERNCONF=bwdb \&lt;br /&gt;
BOOT_COMCONSOLE_SPEED=115200&amp;quot; &amp;gt;&amp;gt; /etc/make.conf&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
i2b:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;WITHOUT_X11=yes \&lt;br /&gt;
KERNCONF=bwdb2 \&lt;br /&gt;
BOOT_COMCONSOLE_SPEED=115200&amp;quot; &amp;gt;&amp;gt; /etc/make.conf&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* add settings to /boot/loader.conf and /boot.config&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;-Dh&amp;quot; &amp;gt;&amp;gt; /boot.config&lt;br /&gt;
&lt;br /&gt;
echo &#039;console=&amp;quot;comconsole,vidconsole&amp;quot; \&lt;br /&gt;
boot_multicons=&amp;quot;YES&amp;quot; \&lt;br /&gt;
boot_serial=&amp;quot;YES&amp;quot; \&lt;br /&gt;
comconsole_speed=&amp;quot;115200&amp;quot;&#039; &amp;gt;&amp;gt; /boot/loader.conf&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* turn off all ttyv&#039;s except 0 and 1 in /etc/ttys&lt;br /&gt;
also turn on ttyu0, change type to vt100:&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/ttys&lt;br /&gt;
&lt;br /&gt;
ttyv2   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv3   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv4   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv5   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv6   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv7   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
# Serial terminals&lt;br /&gt;
# The &#039;dialup&#039; keyword identifies dialin lines to login, fingerd etc.&lt;br /&gt;
ttyu0   &amp;quot;/usr/libexec/getty std.9600&amp;quot;   vt100   on secure&lt;br /&gt;
&lt;br /&gt;
kill -1 1&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
on console server:&lt;br /&gt;
 vi /etc/remote&lt;br /&gt;
(rename the port to jail8, depending on where and which Digi it&#039;s plugged into)&lt;br /&gt;
test serial console&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* populate hosts&lt;br /&gt;
i2b:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;69.55.230.10 backup2&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;69.55.230.11 backup1&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;10.1.2.3 backup3&amp;quot; &amp;gt;&amp;gt; /etc/hosts&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
castle:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;10.1.4.3 backup2 backup2.johncompanies.com&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;10.1.4.8 backup1 backup1.johncompanies.com&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;10.1.4.4 mail mail.johncompanies.com&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* put key in authorized_keys on backup1 and  backup2&lt;br /&gt;
 cd&lt;br /&gt;
 ssh-keygen -t dsa -b 1024 &lt;br /&gt;
(default location, leave password blank)&lt;br /&gt;
&lt;br /&gt;
castle:&lt;br /&gt;
 cat /root/.ssh/id_dsa.pub | ssh backup1 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
 cat /root/.ssh/id_dsa.pub | ssh backup2 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
&lt;br /&gt;
i2b:&lt;br /&gt;
 cat /root/.ssh/id_dsa.pub | ssh backup1 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
 cat /root/.ssh/id_dsa.pub | ssh backup2 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
 cat /root/.ssh/id_dsa.pub | ssh backup3 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
&lt;br /&gt;
confirm that you can ssh to backup2 and backup1 (and backup3 if at i2b) without getting a login prompt&lt;br /&gt;
&lt;br /&gt;
 ssh backup1 hostname&lt;br /&gt;
 ssh backup2 hostname&lt;br /&gt;
&lt;br /&gt;
* edit root&#039;s path and login script:&lt;br /&gt;
 vi /root/.cshrc&lt;br /&gt;
&lt;br /&gt;
Change alias entries (add G):&lt;br /&gt;
&amp;lt;pre&amp;gt;alias la        ls -aG&lt;br /&gt;
alias lf        ls -FAG&lt;br /&gt;
alias ll        ls -lAG&lt;br /&gt;
alias ls        ls -AG&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and alter the prompt, set the following:&lt;br /&gt;
 set prompt = &amp;quot;`/bin/hostname -s` %/# &amp;quot;&lt;br /&gt;
&lt;br /&gt;
* install cvsup&lt;br /&gt;
 cd /usr/ports/net/cvsup-without-gui &lt;br /&gt;
 make install clean; rehash; mail -s &#039;cvs installed&#039; support@johncompanies.com &amp;lt; /dev/null&lt;br /&gt;
&lt;br /&gt;
* get latest sources for this release:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src &lt;br /&gt;
echo &amp;quot;*default host=cvsup4.freebsd.org\&lt;br /&gt;
*default base=/usr\&lt;br /&gt;
*default prefix=/usr\&lt;br /&gt;
*default release=cvs tag=RELENG_8_3\&lt;br /&gt;
*default delete use-rel-suffix\&lt;br /&gt;
*default compress\&lt;br /&gt;
src-all&amp;quot; &amp;gt; sup&lt;br /&gt;
&lt;br /&gt;
cvsup sup ; mail -s &#039;cvs sup done&#039; support@johncompanies.com &amp;lt; /dev/null&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* configure new kernel&lt;br /&gt;
&lt;br /&gt;
 cd /usr/src/sys/amd64/conf &lt;br /&gt;
 scp backup2:/mnt/data4/build/freebsd/kern_config-bwdb-8.3-amd64 ./bwdb&lt;br /&gt;
&lt;br /&gt;
Edit config and change name:&lt;br /&gt;
 vi bwdb&lt;br /&gt;
 ident  bwdb&lt;br /&gt;
&lt;br /&gt;
* build, install kernel and world&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /boot&lt;br /&gt;
&lt;br /&gt;
mv kernel kernel.GENERIC&lt;br /&gt;
cd kernel.GENERIC&lt;br /&gt;
cd /usr/src&lt;br /&gt;
make buildkernel installkernel&lt;br /&gt;
&lt;br /&gt;
make buildworld ; mail -s &#039;buildworld done&#039; support@johncompanies.com &amp;lt; /dev/null&lt;br /&gt;
(2450: 1:56min, supermicro: 59mins, 2950: 38mins)&lt;br /&gt;
make installworld &lt;br /&gt;
(2450: 3min, supermicro: 1min, 2950: :34)&lt;br /&gt;
mergemaster -i&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* populate /etc/rc.conf with IPs and NFS settings&lt;br /&gt;
&lt;br /&gt;
castle:&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
hostname=&amp;quot;bwdb.johncompanies.com&amp;quot;&lt;br /&gt;
kern_securelevel_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
portmap_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
sendmail_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
usbd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
xntpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_client_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_reserved_port_only=&amp;quot;YES&amp;quot;&lt;br /&gt;
ifconfig_fxp0=&amp;quot;inet 10.1.4.203 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em0=&amp;quot;up promisc&amp;quot;&lt;br /&gt;
defaultrouter=&amp;quot;10.1.4.1&amp;quot;&lt;br /&gt;
snmpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
inetd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_flags=&amp;quot;-wW -a 10.1.4.203&amp;quot;&lt;br /&gt;
fsck_y_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
background_fsck=&amp;quot;NO&amp;quot;&lt;br /&gt;
sshd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
ipfw_load=&amp;quot;YES&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
i2b:&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
hostname=&amp;quot;bwdb2.johncompanies.com&amp;quot;&lt;br /&gt;
kern_securelevel_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
portmap_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
sendmail_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
usbd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
xntpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_client_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_reserved_port_only=&amp;quot;YES&amp;quot;&lt;br /&gt;
ifconfig_fxp0=&amp;quot;inet 10.1.2.4 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em0=&amp;quot;up promisc&amp;quot;&lt;br /&gt;
defaultrouter=&amp;quot;10.1.2.1&amp;quot;&lt;br /&gt;
snmpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
inetd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_flags=&amp;quot;-wW -a 10.1.2.4&amp;quot;&lt;br /&gt;
fsck_y_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
background_fsck=&amp;quot;NO&amp;quot;&lt;br /&gt;
sshd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
ipfw_load=&amp;quot;YES&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Reboot and confirm the new kernel is loaded:&lt;br /&gt;
&lt;br /&gt;
 uname -a&lt;br /&gt;
&lt;br /&gt;
* update ports:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/ports&lt;br /&gt;
echo &amp;quot;*default host=cvsup4.FreeBSD.org\&lt;br /&gt;
*default base=/usr\&lt;br /&gt;
*default prefix=/usr\&lt;br /&gt;
*default release=cvs tag=RELENG_8_3\&lt;br /&gt;
*default delete use-rel-suffix\&lt;br /&gt;
*default compress\&lt;br /&gt;
ports-all tag=.&amp;quot; &amp;gt; sup&lt;br /&gt;
&lt;br /&gt;
cvsup sup; mail -s &#039;cvs sup ports done&#039; support@johncompanies.com &amp;lt; /dev/null&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Install raid mgmt tool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/local/sbin&lt;br /&gt;
scp backup2:/d4/build/3ware/tw_cli-freebsd-x86_64-9.5.0.1.tgz .&lt;br /&gt;
tar xzf tw_cli-freebsd-x86_64-9.5.0.1.tgz&lt;br /&gt;
rm tw_cli-freebsd-x86_64-9.5.0.1.tgz&lt;br /&gt;
chmod 0700 tw_cli&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Test:&lt;br /&gt;
 ./tw_cli info c0&lt;br /&gt;
&lt;br /&gt;
Grab raid check script:&lt;br /&gt;
 scp backup1:/usr/local/sbin/3wraidchk /usr/local/etc&lt;br /&gt;
&lt;br /&gt;
Setup cronjob:&lt;br /&gt;
&amp;lt;pre&amp;gt;crontab -e&lt;br /&gt;
*/5 * * * * /usr/local/etc/3wraidchk&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* install rsync from ports&lt;br /&gt;
 cd /usr/ports/net/rsync&lt;br /&gt;
 make install clean&lt;br /&gt;
&lt;br /&gt;
choose default options&lt;br /&gt;
&lt;br /&gt;
* install perl from ports&lt;br /&gt;
 cd /usr/ports/lang/perl5.8&lt;br /&gt;
 make install clean&lt;br /&gt;
&lt;br /&gt;
choose default options&lt;br /&gt;
&lt;br /&gt;
* install bb client&lt;br /&gt;
&lt;br /&gt;
Compiling from source on AMD64 does not work, so we use a Linux-compiled binary and rely on Linux compat. The Linux compat port won&#039;t install on 8.x (libtool 2.4 is needed), so instead we copied the Linux compat tree over from an existing machine:&lt;br /&gt;
 rsync -aSHv --exclude=proc --exclude=sys 10.1.4.108:/usr/compat/linux/ /usr/compat/linux/&lt;br /&gt;
&lt;br /&gt;
 adduser&lt;br /&gt;
&lt;br /&gt;
Output/response:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;Username: bb&lt;br /&gt;
Full name: bb&lt;br /&gt;
Uid (Leave empty for default): 1984&lt;br /&gt;
Login group [bb]:&lt;br /&gt;
Login group is bb. Invite bb into other groups? []:&lt;br /&gt;
Login class [default]:&lt;br /&gt;
Shell (sh csh tcsh nologin) [sh]: &lt;br /&gt;
Home directory [/home/bb]:&lt;br /&gt;
Use password-based authentication? [yes]:&lt;br /&gt;
Use an empty password? (yes/no) [no]:&lt;br /&gt;
Use a random password? (yes/no) [no]: yes&lt;br /&gt;
Lock out the account after creation? [no]:&lt;br /&gt;
Username   : bb&lt;br /&gt;
Password   : &amp;lt;random&amp;gt;&lt;br /&gt;
Full Name  : bb&lt;br /&gt;
Uid        : 1984&lt;br /&gt;
Class      :&lt;br /&gt;
Groups     : bb&lt;br /&gt;
Home       : /home/bb&lt;br /&gt;
Shell      : /bin/sh&lt;br /&gt;
Locked     : no&lt;br /&gt;
OK? (yes/no): yes&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 cd /usr/home/bb&lt;br /&gt;
 scp backup2:/mnt/data4/build/bb/bb-freebsd_linuxcompat.tgz .&lt;br /&gt;
 tar xzf bb-freebsd_linuxcompat.tgz&lt;br /&gt;
&lt;br /&gt;
edit /home/bb/bbc1.9e-btf/etc/bb-hosts with something like:&lt;br /&gt;
&lt;br /&gt;
 echo &amp;quot;10.1.4.5 mail.johncompanies.com # BBPAGER BBNET BBDISPLAY smtp ssh \&lt;br /&gt;
 10.1.4.203 bwdb.johncompanies.com # ssh&amp;quot; &amp;gt; /home/bb/bbc1.9e-btf/etc/bb-hosts&lt;br /&gt;
&lt;br /&gt;
Edit for machine name and private IP.&lt;br /&gt;
&lt;br /&gt;
If this machine is at i2b:&lt;br /&gt;
 echo &amp;quot;69.55.230.2 mail.johncompanies.com # BBPAGER BBNET BBDISPLAY smtp ssh \&lt;br /&gt;
 10.1.2.4 bwdb2.johncompanies.com # ssh&amp;quot; &amp;gt; /home/bb/bbc1.9e-btf/etc/bb-hosts&lt;br /&gt;
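The bb-hosts format is one host per line: an IP address, a hostname, then a # followed by the tags (tests and roles) for that host. A minimal Python sketch of parsing one such line, for illustration only; bb itself does the real parsing:

```python
# Illustrative sketch: split a bb-hosts line ("IP hostname # tag tag ...")
# into its three parts. Not part of bb; just documents the line format.
def parse_bb_hosts_line(line):
    host_part, _, tag_part = line.partition("#")
    fields = host_part.split()
    if len(fields) != 2:
        return None
    ip, hostname = fields
    return ip, hostname, tag_part.split()

ip, host, tags = parse_bb_hosts_line("10.1.2.4 bwdb2.johncompanies.com # ssh")
```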
&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /home/bb/bbc1.9e-btf/ext/openfiles &lt;br /&gt;
&lt;br /&gt;
MACHINE=&amp;quot;bwdb,johncompanies,com&amp;quot;      # HAS TO BE IN A,B,C FORM&amp;lt;/pre&amp;gt;&lt;br /&gt;
Edit for machine name.&lt;br /&gt;
&lt;br /&gt;
Have bb watch for flow-capture and mysqld:&lt;br /&gt;
&amp;lt;pre&amp;gt;cat &amp;gt;&amp;gt; /home/bb/bbc1.9e-btf/etc/bb-proctab&lt;br /&gt;
localhost: flow-capture :&lt;br /&gt;
localhost: mysqld :&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/home/bb/bbc1.9e-btf/etc&lt;br /&gt;
./bbchkcfg.sh &lt;br /&gt;
(y to questions)&lt;br /&gt;
./bbchkhosts.sh&lt;br /&gt;
(ignore ssh errors)&lt;br /&gt;
cd ../..&lt;br /&gt;
chown -R bb .&lt;br /&gt;
su bb&lt;br /&gt;
cd&lt;br /&gt;
cd bbc1.9e-btf&lt;br /&gt;
./runbb.sh start&lt;br /&gt;
more BBOUT &lt;br /&gt;
(look for errors)&lt;br /&gt;
exit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add a script to start bb at boot:&lt;br /&gt;
 echo &#039;su - bb -c &amp;quot;cd /home/bb/bbc1.9e-btf; ./runbb.sh start&amp;quot;&#039; &amp;gt; /usr/local/etc/rc.d/bb.sh&lt;br /&gt;
 chmod +x /usr/local/etc/rc.d/bb.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If this is at i2b, punch a hole in the firewall to allow it to communicate with the bb monitor:&lt;br /&gt;
&lt;br /&gt;
 ipfw add 00096 allow tcp from 66.181.18.0/27 to 69.55.230.2&lt;br /&gt;
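The /27 in that rule admits 32 source addresses, 66.181.18.0 through 66.181.18.31. A quick check with Python's stdlib ipaddress module (illustration only, unrelated to ipfw itself):

```python
import ipaddress

# The ipfw rule's source range: 66.181.18.0/27 spans the 32 addresses
# 66.181.18.0 through 66.181.18.31.
net = ipaddress.ip_network("66.181.18.0/27")
assert net.num_addresses == 32
assert ipaddress.ip_address("66.181.18.31") in net
assert ipaddress.ip_address("66.181.18.32") not in net
```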
&lt;br /&gt;
&lt;br /&gt;
* configure bb on mail&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /usr/home/bb/bbsrc/bb1.9e-btf/etc/bb-hosts&lt;br /&gt;
10.1.4.203 bwdb.johncompanies.com # ssh&lt;br /&gt;
&lt;br /&gt;
su bb&lt;br /&gt;
cd&lt;br /&gt;
bbsrc/bb/runbb.sh restart ; exit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* configure ntp server&lt;br /&gt;
Castle:&lt;br /&gt;
 echo &amp;quot;server 10.1.4.1&amp;quot; &amp;gt; /etc/ntp.conf&lt;br /&gt;
&lt;br /&gt;
I2b:&lt;br /&gt;
 echo &amp;quot;server 10.1.2.1&amp;quot; &amp;gt; /etc/ntp.conf&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/usr/sbin/ntpd -p /var/run/ntpd.pid&lt;br /&gt;
sleep 2; ntpq -p&amp;lt;/pre&amp;gt;&lt;br /&gt;
(confirm it’s able to reach our time server)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;/usr/sbin/ntpd -c /etc/ntp.conf -p /var/run/ntpd.pid -f /var/db/ntpd.drift&#039; &amp;gt; /usr/local/etc/rc.d/ntp.sh&lt;br /&gt;
chmod 0700 /usr/local/etc/rc.d/ntp.sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* add forward and reverse DNS lookups on ns1c&lt;br /&gt;
 vr johncompanies.com&lt;br /&gt;
 (edit the PTR too)&lt;br /&gt;
&lt;br /&gt;
* setup backups&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;#\!/bin/sh\&lt;br /&gt;
backupdir=/data/bwdb/current\&lt;br /&gt;
server=backup1\&lt;br /&gt;
\&lt;br /&gt;
## ENTRY /etc\&lt;br /&gt;
## ENTRY /usr/home/flowbin\&lt;br /&gt;
## ENTRY /usr/home/database&#039; &amp;gt; /usr/local/etc/backup.config&amp;lt;/pre&amp;gt;&lt;br /&gt;
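The backup.config above lists one backed-up path per &amp;lt;tt&amp;gt;## ENTRY&amp;lt;/tt&amp;gt; line. Assuming rsync.backup extracts them along these lines (a hedged sketch; the real parsing lives in rsync.backup):

```python
# Sketch only: extract the backed-up paths from a backup.config like the
# one above, using the "## ENTRY <path>" convention. The actual parser is
# in rsync.backup; this just illustrates the file format.
def backup_entries(text):
    entries = []
    for line in text.splitlines():
        if line.startswith("## ENTRY "):
            entries.append(line[len("## ENTRY "):].strip())
    return entries

config = """#!/bin/sh
backupdir=/data/bwdb/current
server=backup1

## ENTRY /etc
## ENTRY /usr/home/flowbin
## ENTRY /usr/home/database"""
```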
&lt;br /&gt;
Castle:&lt;br /&gt;
setup backup dirs:&lt;br /&gt;
 ssh backup1 mkdir -p /data/bwdb/current&lt;br /&gt;
on backup1, add the system to &lt;br /&gt;
 vi /usr/local/sbin/snapshot_rotate&lt;br /&gt;
&lt;br /&gt;
I2b:&lt;br /&gt;
setup backup dirs:&lt;br /&gt;
 ssh backup3 mkdir -p /data/bwdb/current&lt;br /&gt;
on backup3, add the system to &lt;br /&gt;
 vi /usr/local/sbin/snapshot_archive&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Copy over the backup script:&lt;br /&gt;
 scp backup2:/d4/bin/freebsd8.x/rsync.backup /usr/local/etc/&lt;br /&gt;
&lt;br /&gt;
Edit rsync.backup and change &amp;lt;tt&amp;gt;config&amp;lt;/tt&amp;gt; var to point to correct config file location: &amp;lt;tt&amp;gt;/usr/local/etc/backup.config&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;crontab -e&lt;br /&gt;
5 0 * * * /usr/local/etc/rsync.backup&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* make /root/logs&lt;br /&gt;
 mkdir /root/logs&lt;br /&gt;
&lt;br /&gt;
* edit sshd_config for security&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/ssh/sshd_config&lt;br /&gt;
ListenAddress 10.1.4.203&lt;br /&gt;
PermitRootLogin yes&lt;br /&gt;
&lt;br /&gt;
kill -1 `cat /var/run/sshd.pid`&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Edit for private IP.&lt;br /&gt;
&lt;br /&gt;
* snmp&lt;br /&gt;
&lt;br /&gt;
(Before doing this you may need to take down the firewall and add nameserver 69.43.143.41 to /etc/resolv.conf.)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /usr/ports/net-mgmt/net-snmp&lt;br /&gt;
make install clean&lt;br /&gt;
(defaults)&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt;&amp;gt; /etc/rc.conf&lt;br /&gt;
snmpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
snmpd_flags=&amp;quot;-a&amp;quot;&lt;br /&gt;
snmpd_conffile=&amp;quot;/usr/local/share/snmp/snmpd.conf&amp;quot;&lt;br /&gt;
snmptrapd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
snmptrapd_flags=&amp;quot;-a -p /var/run/snmptrapd.pid&amp;quot;&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; /usr/local/share/snmp/snmpd.conf&lt;br /&gt;
rocommunity  jcread 10.1.4.5&lt;br /&gt;
rocommunity  jcread 10.1.4.202&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== netflow ===&lt;br /&gt;
&lt;br /&gt;
Install flow tools:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/ports/net-mgmt/flow-tools&lt;br /&gt;
make install clean&amp;lt;/pre&amp;gt;&lt;br /&gt;
Defaults.&lt;br /&gt;
&lt;br /&gt;
 mkdir /usr/home/flows&lt;br /&gt;
&lt;br /&gt;
Flow start script:&lt;br /&gt;
 echo &amp;quot;/usr/local/bin/flow-capture -w /usr/home/flows -S5 -N -2 0/10.1.4.203/4444&amp;quot; &amp;gt; /usr/local/etc/rc.d/flow-capture.sh&lt;br /&gt;
&lt;br /&gt;
 chmod 0700 /usr/local/etc/rc.d/flow-capture.sh&lt;br /&gt;
&lt;br /&gt;
Edit for private IP.&lt;br /&gt;
&lt;br /&gt;
Netgraph start script:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;gt; /usr/local/etc/rc.d/netgraph.sh&lt;br /&gt;
&lt;br /&gt;
/usr/sbin/ngctl -f- &amp;lt;&amp;lt;-SEQ&lt;br /&gt;
mkpeer em0: netflow lower iface0&lt;br /&gt;
name em0:lower netflow&lt;br /&gt;
connect em0: netflow: upper out0&lt;br /&gt;
mkpeer netflow: ksocket export inet/dgram/udp&lt;br /&gt;
msg netflow:export connect inet/10.1.4.203:4444&lt;br /&gt;
SEQ&lt;br /&gt;
&lt;br /&gt;
#/usr/sbin/ngctl -f- &amp;lt;&amp;lt;-SEQ&lt;br /&gt;
#shutdown netflow:&lt;br /&gt;
#SEQ&lt;br /&gt;
&lt;br /&gt;
chmod 0700 /usr/local/etc/rc.d/netgraph.sh&amp;lt;/pre&amp;gt;&lt;br /&gt;
Edit for private IP.&lt;br /&gt;
&lt;br /&gt;
After running the scripts, confirm netflow is running:&lt;br /&gt;
&amp;lt;pre&amp;gt;newbwdb /usr/ports/net-mgmt/flow-tools# /usr/sbin/ngctl&lt;br /&gt;
Available commands:&lt;br /&gt;
  config     get or set configuration of node at &amp;lt;path&amp;gt;&lt;br /&gt;
  connect    Connects hook &amp;lt;peerhook&amp;gt; of the node at &amp;lt;relpath&amp;gt; to &amp;lt;hook&amp;gt;&lt;br /&gt;
  debug      Get/set debugging verbosity level&lt;br /&gt;
  dot        Produce a GraphViz (.dot) of the entire netgraph.&lt;br /&gt;
  help       Show command summary or get more help on a specific command&lt;br /&gt;
  list       Show information about all nodes&lt;br /&gt;
  mkpeer     Create and connect a new node to the node at &amp;quot;path&amp;quot;&lt;br /&gt;
  msg        Send a netgraph control message to the node at &amp;quot;path&amp;quot;&lt;br /&gt;
  name       Assign name &amp;lt;name&amp;gt; to the node at &amp;lt;path&amp;gt;&lt;br /&gt;
  read       Read and execute commands from a file&lt;br /&gt;
  rmhook     Disconnect hook &amp;quot;hook&amp;quot; of the node at &amp;quot;path&amp;quot;&lt;br /&gt;
  show       Show information about the node at &amp;lt;path&amp;gt;&lt;br /&gt;
  shutdown   Shutdown the node at &amp;lt;path&amp;gt;&lt;br /&gt;
  status     Get human readable status information from the node at &amp;lt;path&amp;gt;&lt;br /&gt;
  types      Show information about all installed node types&lt;br /&gt;
  write      Send a data packet down the hook named by &amp;quot;hook&amp;quot;.&lt;br /&gt;
  quit       Exit program&lt;br /&gt;
+ show netflow:&lt;br /&gt;
  Name: netflow         Type: netflow         ID: 00000004   Num hooks: 3&lt;br /&gt;
  Local hook      Peer name       Peer type    Peer ID         Peer hook&lt;br /&gt;
  ----------      ---------       ---------    -------         ---------&lt;br /&gt;
  export          &amp;lt;unnamed&amp;gt;       ksocket      00000005        inet/dgram/udp&lt;br /&gt;
  out0            em0             ether        00000001        upper&lt;br /&gt;
  iface0          em0             ether        00000001        lower&lt;br /&gt;
+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We have noticed that flow-capture sometimes fails due to swap exhaustion (even after adding more swap), so we use cron to restart it periodically (it&#039;s safe to start it while it&#039;s already running; the new instance just quits):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
crontab -e&lt;br /&gt;
#restart flow-capture&lt;br /&gt;
*/15 * * * * /usr/local/etc/rc.d/flow-capture.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== process flow tools ==== &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mkdir /usr/home/flowbin&lt;br /&gt;
mkdir /usr/home/working&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Install modules:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/ports/devel/p5-Date-Calc&lt;br /&gt;
make install clean&lt;br /&gt;
cd /usr/ports/mail/p5-Mail-Sendmail&lt;br /&gt;
make install clean&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Queue script:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;gt; /usr/home/flowbin/queue.pl&lt;br /&gt;
#!/usr/bin/perl&lt;br /&gt;
&lt;br /&gt;
use strict;&lt;br /&gt;
&lt;br /&gt;
BEGIN {&lt;br /&gt;
    push @INC, &amp;quot;/usr/home/flowbin&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
use date;&lt;br /&gt;
&lt;br /&gt;
my $flowbase = &amp;quot;/usr/home/flows&amp;quot;;&lt;br /&gt;
#my $flowqueue = &amp;quot;/usr/home/queue&amp;quot;;&lt;br /&gt;
my $flowqueue = &amp;quot;/usr/home/working&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
my ($date, $time) = date::CurrentDateTime();&lt;br /&gt;
&lt;br /&gt;
my $flowdir = mkFlowDir($date);&lt;br /&gt;
`mv $flowdir/ft-* $flowqueue`;&lt;br /&gt;
&lt;br /&gt;
if (date::DateWindow($date, $time, $date, &amp;quot;00:00:00&amp;quot;, 600)) {&lt;br /&gt;
    my $newdate = date::AddDays($date, -1);&lt;br /&gt;
    my $flowdir = mkFlowDir($newdate);&lt;br /&gt;
    `mv $flowdir/ft-* $flowqueue`;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub mkFlowDir {&lt;br /&gt;
    my $date = shift;&lt;br /&gt;
    $date =~ /([0-9]{4}-[0-9]{2})/;&lt;br /&gt;
    my $yearmonth = $1;&lt;br /&gt;
    return &amp;quot;$flowbase/$yearmonth/$date&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Date.pm module:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;gt; /usr/home/flowbin/date.pm&lt;br /&gt;
#!/usr/local/bin/perl&lt;br /&gt;
#&lt;br /&gt;
# $Header: /usr/cvs/newgw/lib/date.pm,v 1.2 2003/11/24 17:06:02 glenn Exp $&lt;br /&gt;
#&lt;br /&gt;
# Copyright (c) 2001, 2002, 2003&lt;br /&gt;
#      e-Monitoring Networks, Inc.  All rights reserved.&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
# date.pl - Higher level functions written on top of Date::Calc&lt;br /&gt;
&lt;br /&gt;
package date;&lt;br /&gt;
&lt;br /&gt;
use strict;&lt;br /&gt;
use Date::Calc qw(:all);&lt;br /&gt;
&lt;br /&gt;
sub DayDiff { #calculate the difference in days from two dates&lt;br /&gt;
    my $date1 = shift;&lt;br /&gt;
    my $date2 = shift;&lt;br /&gt;
    my ($year1, $month1, $day1) = &amp;amp;DateToymd($date1);&lt;br /&gt;
    my ($year2, $month2, $day2) = &amp;amp;DateToymd($date2);&lt;br /&gt;
    my $diff = &amp;amp;Delta_Days($year1, $month1, $day1, $year2, $month2, $day2);&lt;br /&gt;
    return $diff;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub AddDays { #adds specified number of days to the supplied date&lt;br /&gt;
    my $date = shift;&lt;br /&gt;
    my $days = shift;&lt;br /&gt;
    my ($year, $month, $day) = &amp;amp;DateToymd($date);&lt;br /&gt;
    my ($nyear, $nmonth, $nday) = &amp;amp;Add_Delta_Days($year, $month, $day, $days);&lt;br /&gt;
    my $ndate = &amp;amp;ymdToDate($nyear, $nmonth, $nday);&lt;br /&gt;
    return $ndate;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub AddHours { #adds specified number of hours to the supplied date and time&lt;br /&gt;
    my $date = shift;&lt;br /&gt;
    my $time = shift;&lt;br /&gt;
    my $addhours = shift;&lt;br /&gt;
    my $adddays = 0;&lt;br /&gt;
    if (abs($addhours / 24) &amp;gt;= 1) {&lt;br /&gt;
        $adddays = int($addhours / 24);&lt;br /&gt;
        $addhours -= $adddays * 24;&lt;br /&gt;
    }&lt;br /&gt;
    my ($year, $month, $day) = &amp;amp;DateToymd($date);&lt;br /&gt;
    my ($hour, $minute, $second) = &amp;amp;TimeTohms($time);&lt;br /&gt;
    my ($ny, $nm, $nd, $nh, $nmin, $ns) = &amp;amp;Add_Delta_DHMS($year, $month, $day,&lt;br /&gt;
                                                          $hour, $minute, $second,&lt;br /&gt;
                                                          $adddays, $addhours, 0, 0);&lt;br /&gt;
    my $ndate = &amp;amp;ymdToDate($ny, $nm, $nd);&lt;br /&gt;
    my $ntime = &amp;amp;hmsToTime($nh, $nmin, $ns);&lt;br /&gt;
    return $ndate, $ntime;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub AddMinutes {&lt;br /&gt;
    my $date = shift;&lt;br /&gt;
    my $time = shift;&lt;br /&gt;
    my $minutes = shift;&lt;br /&gt;
    my ($year, $month, $day) = &amp;amp;DateToymd($date);&lt;br /&gt;
    my ($hour, $minute, $second) = &amp;amp;TimeTohms($time);&lt;br /&gt;
    my ($ny, $nm, $nd, $nh, $nmin, $ns) = &amp;amp;Add_Delta_DHMS($year, $month, $day,&lt;br /&gt;
                                                          $hour, $minute, $second,&lt;br /&gt;
                                                          0, 0, $minutes, 0);&lt;br /&gt;
    my $ndate = &amp;amp;ymdToDate($ny, $nm, $nd);&lt;br /&gt;
    my $ntime = &amp;amp;hmsToTime($nh, $nmin, $ns);&lt;br /&gt;
    return $ndate, $ntime;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub CurrentDateTime { #return the current date and time&lt;br /&gt;
    my ($y, $m, $d, $h, $min, $s, $z, $z, $z) = &amp;amp;System_Clock;&lt;br /&gt;
    my $date = &amp;amp;ymdToDate($y, $m, $d);&lt;br /&gt;
    my $time = &amp;amp;hmsToTime($h, $min, $s);&lt;br /&gt;
    return $date, $time;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub Currentymd { #return the current year, month and day as separate variables&lt;br /&gt;
    my ($y, $m, $d, $h, $min, $s, $z, $z, $z) = &amp;amp;System_Clock;&lt;br /&gt;
    return $y, $m, $d;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub DateToymd { #takes a date and returns year, month, day as individual values&lt;br /&gt;
    my $date = shift;&lt;br /&gt;
    if ($date =~ /([0-9]{4})-([0-9]{2})-([0-9]{2})/) {&lt;br /&gt;
        my $day = $3;&lt;br /&gt;
        my $month = $2;&lt;br /&gt;
        my $year = $1;&lt;br /&gt;
        return $year, $month, $day;&lt;br /&gt;
    }&lt;br /&gt;
    return undef;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub TimeTohms { #takes a time and return hours minutes and seconds as individual values&lt;br /&gt;
    my $time = shift;&lt;br /&gt;
    if ($time =~ /([0-9]{1,2}):([0-9]{1,2}):([0-9]{1,2})/) {&lt;br /&gt;
        my $hour = $1;&lt;br /&gt;
        my $minute = $2;&lt;br /&gt;
        my $second = $3;&lt;br /&gt;
        if ($hour !~ /[0-9]{2}/) { $hour = &amp;quot;0$hour&amp;quot;; }&lt;br /&gt;
        if ($minute !~ /[0-9]{2}/) { $minute = &amp;quot;0$minute&amp;quot;; }&lt;br /&gt;
        if ($second !~ /[0-9]{2}/) { $second = &amp;quot;0$second&amp;quot;; }&lt;br /&gt;
        return $hour, $minute, $second;&lt;br /&gt;
    }&lt;br /&gt;
    return undef;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub ymdToDate { #takes year, month, day and assembles them into our date format&lt;br /&gt;
    my $year = shift;&lt;br /&gt;
    my $month = shift;&lt;br /&gt;
    my $day = shift;&lt;br /&gt;
    if (defined($year) &amp;amp;&amp;amp; defined($month) &amp;amp;&amp;amp; defined ($day)) {&lt;br /&gt;
        $month = sprintf(&amp;quot;%02d&amp;quot;, $month);&lt;br /&gt;
        $day = sprintf(&amp;quot;%02d&amp;quot;, $day);&lt;br /&gt;
        return &amp;quot;$year-$month-$day&amp;quot;;&lt;br /&gt;
    }&lt;br /&gt;
    return undef;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub hmsToTime { #takes hour minute and second and assembles them into our time format&lt;br /&gt;
    my $hour = shift;&lt;br /&gt;
    my $minute = shift;&lt;br /&gt;
    my $second = shift;&lt;br /&gt;
    if (defined($hour) &amp;amp;&amp;amp; defined($minute) &amp;amp;&amp;amp; defined ($second)) {&lt;br /&gt;
        if ($hour !~ /[0-9]{2}/) { $hour = &amp;quot;0$hour&amp;quot;; }&lt;br /&gt;
        if ($minute !~ /[0-9]{2}/) { $minute = &amp;quot;0$minute&amp;quot;; }&lt;br /&gt;
        if ($second !~ /[0-9]{2}/) { $second = &amp;quot;0$second&amp;quot;; }&lt;br /&gt;
        return sprintf (&amp;quot;%02d:%02d:%02d&amp;quot;, $hour, $minute, $second);&lt;br /&gt;
    }&lt;br /&gt;
    return undef;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub CompareDates { #compares two date and time pairs&lt;br /&gt;
    my $date1 = shift;&lt;br /&gt;
    my $time1 = shift;&lt;br /&gt;
    my $date2 = shift;&lt;br /&gt;
    my $time2 = shift;&lt;br /&gt;
&lt;br /&gt;
    my ($year1, $month1, $day1) = &amp;amp;DateToymd($date1);&lt;br /&gt;
    my ($hour1, $minute1, $second1) = &amp;amp;TimeTohms($time1);&lt;br /&gt;
    my ($year2, $month2, $day2) = &amp;amp;DateToymd($date2);&lt;br /&gt;
    my ($hour2, $minute2, $second2) = &amp;amp;TimeTohms($time2);&lt;br /&gt;
&lt;br /&gt;
#    &amp;amp;debug(&amp;quot;$year1, $month1, $day1, $year2, $month2, $day2&amp;quot;);&lt;br /&gt;
    my $days = &amp;amp;Delta_Days($year1, $month1, $day1, $year2, $month2, $day2);&lt;br /&gt;
    if ($days &amp;gt; 0) { return 1;}&lt;br /&gt;
    if ($days &amp;lt; 0) { return -1;}&lt;br /&gt;
    if ($days == 0) { #same day, compare times&lt;br /&gt;
        my $seconds1 = $second1 + (60 * $minute1) + (3600 * $hour1);&lt;br /&gt;
        my $seconds2 = $second2 + (60 * $minute2) + (3600 * $hour2);&lt;br /&gt;
        if ($seconds1 &amp;lt; $seconds2) { return 1;}&lt;br /&gt;
        if ($seconds1 &amp;gt; $seconds2) { return -1;}&lt;br /&gt;
        if ($seconds1 == $seconds2) { return 0;}&lt;br /&gt;
    }&lt;br /&gt;
    return undef;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub DateWindow { #compares two date time pairs to see if they are &amp;lt; X seconds apart&lt;br /&gt;
    my $date1 = shift;&lt;br /&gt;
    my $time1 = shift;&lt;br /&gt;
    my $date2 = shift;&lt;br /&gt;
    my $time2 = shift;&lt;br /&gt;
    my $window = shift;&lt;br /&gt;
&lt;br /&gt;
    my ($year1, $month1, $day1) = &amp;amp;DateToymd($date1);&lt;br /&gt;
    my ($hour1, $minute1, $second1) = &amp;amp;TimeTohms($time1);&lt;br /&gt;
    my ($year2, $month2, $day2) = &amp;amp;DateToymd($date2);&lt;br /&gt;
    my ($hour2, $minute2, $second2) = &amp;amp;TimeTohms($time2);&lt;br /&gt;
&lt;br /&gt;
    my ($day, $hour, $minute, $second) =&lt;br /&gt;
        &amp;amp;Delta_DHMS($year1, $month1, $day1, $hour1, $minute1, $second1,&lt;br /&gt;
                    $year2, $month2, $day2, $hour2, $minute2, $second2);&lt;br /&gt;
    $minute *= 60;&lt;br /&gt;
    $hour *= 3600;&lt;br /&gt;
    $day *= 86400;&lt;br /&gt;
    my $total = $second + $minute + $hour + $day;&lt;br /&gt;
    if (abs($total) &amp;lt; $window) {&lt;br /&gt;
        return 1;&lt;br /&gt;
    }&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub CheckDateOrder { #takes three dates/times, returns true if they are in chronological order&lt;br /&gt;
    my $date1 = shift;&lt;br /&gt;
    my $time1 = shift;&lt;br /&gt;
    my $date2 = shift;&lt;br /&gt;
    my $time2 = shift;&lt;br /&gt;
    my $date3 = shift;&lt;br /&gt;
    my $time3 = shift;&lt;br /&gt;
    if (&amp;amp;CompareDates($date1, $time1, $date2, $time2) == -1) {&lt;br /&gt;
        return 0;&lt;br /&gt;
    }&lt;br /&gt;
    if (&amp;amp;CompareDates($date2, $time2, $date3, $time3) == -1) {&lt;br /&gt;
        return 0;&lt;br /&gt;
    }&lt;br /&gt;
    return 1;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub EpochSeconds { #calculates number of seconds since the epoch for the given date/time&lt;br /&gt;
    my $date = shift;&lt;br /&gt;
    my $time = shift;&lt;br /&gt;
    my ($year, $month, $day) = &amp;amp;DateToymd($date);&lt;br /&gt;
    my ($hour, $minute, $second) = &amp;amp;TimeTohms($time);&lt;br /&gt;
    my ($d, $h, $m, $s) = &amp;amp;Delta_DHMS(1970, 1, 1, 0, 0, 0,&lt;br /&gt;
                                      $year, $month, $day, $hour, $minute, $second);&lt;br /&gt;
    my $seconds = $s + (60 * $m) + (3600 * $h) + (86400 * $d);&lt;br /&gt;
    return $seconds;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub SecondsToDateTime { #converts seconds since epoch to date/time&lt;br /&gt;
    my $seconds = shift;&lt;br /&gt;
    my $days = int($seconds / 86400);&lt;br /&gt;
    $seconds -= $days * 86400;&lt;br /&gt;
    my $hours = int($seconds / 3600);&lt;br /&gt;
    $seconds -= $hours * 3600;&lt;br /&gt;
    my $minutes = int($seconds / 60);&lt;br /&gt;
    $seconds -= $minutes * 60;&lt;br /&gt;
    my ($year, $month, $day, $hour, $minute, $second) =&lt;br /&gt;
        &amp;amp;Add_Delta_DHMS(1970, 1, 1, 0, 0, 0, $days, $hours, $minutes, $seconds);&lt;br /&gt;
    $month = sprintf(&amp;quot;%02d&amp;quot;, $month);&lt;br /&gt;
    $day = sprintf(&amp;quot;%02d&amp;quot;, $day);&lt;br /&gt;
    $hour = sprintf(&amp;quot;%02d&amp;quot;, $hour);&lt;br /&gt;
    $minute = sprintf(&amp;quot;%02d&amp;quot;, $minute);&lt;br /&gt;
    $second = sprintf(&amp;quot;%02d&amp;quot;, $second);&lt;br /&gt;
    return &amp;quot;$year-$month-$day&amp;quot;, &amp;quot;$hour:$minute:$second&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub DateToDayName {&lt;br /&gt;
    my $date = shift;&lt;br /&gt;
    my ($year, $month, $day) = &amp;amp;DateToymd($date);&lt;br /&gt;
    my $name = &amp;amp;Day_of_Week_to_Text(&amp;amp;Day_of_Week($year, $month, $day));&lt;br /&gt;
    $name =~ /^[A-Za-z]{3}/;&lt;br /&gt;
    $name = $&amp;amp;;&lt;br /&gt;
    return $name;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub ValiDate {&lt;br /&gt;
    return @_;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub CheckBusinessDay { # checks to see if date is business day. 1=yes, 0=no&lt;br /&gt;
    my $date = shift;&lt;br /&gt;
    my ($year, $month, $day) = &amp;amp;DateToymd($date);&lt;br /&gt;
    if (Day_of_Week($year,$month,$day) &amp;lt; 6) { return 1; }&lt;br /&gt;
    else { return 0; }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
1; #don&#039;t remove this line&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
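For reference, the two date.pm helpers that queue.pl depends on, DateWindow and AddDays, behave like this Python sketch built on the stdlib datetime module (an equivalence illustration, not a replacement for the module):

```python
from datetime import datetime, timedelta

# Illustrative Python equivalents of the date.pm helpers used by queue.pl.
def date_window(date1, time1, date2, time2, window):
    """True if the two date/time pairs are less than `window` seconds apart."""
    t1 = datetime.strptime(f"{date1} {time1}", "%Y-%m-%d %H:%M:%S")
    t2 = datetime.strptime(f"{date2} {time2}", "%Y-%m-%d %H:%M:%S")
    return abs((t2 - t1).total_seconds()) < window

def add_days(date, days):
    """Add a (possibly negative) number of days to a YYYY-MM-DD date string."""
    d = datetime.strptime(date, "%Y-%m-%d") + timedelta(days=days)
    return d.strftime("%Y-%m-%d")
```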
&lt;br /&gt;
 chmod 0700 /usr/home/flowbin/queue.pl&lt;br /&gt;
&lt;br /&gt;
Setup cronjob:&lt;br /&gt;
&amp;lt;pre&amp;gt;crontab -e&lt;br /&gt;
#move flow data into the queue&lt;br /&gt;
1,16,31,46 * * * * /usr/home/flowbin/queue.pl&amp;lt;/pre&amp;gt;&lt;br /&gt;
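The directory logic in queue.pl is worth spelling out: flows land under /usr/home/flows/YYYY-MM/YYYY-MM-DD/, and for the first 600 seconds after midnight the previous day&#039;s directory is swept as well, so files written just before the rollover are not stranded. A Python sketch of that decision (illustration only; queue.pl is the real implementation):

```python
from datetime import datetime, timedelta

FLOWBASE = "/usr/home/flows"

# Sketch of queue.pl's directory selection: always sweep today's flow
# directory, and within 600 seconds after midnight also sweep yesterday's.
def flow_dirs_to_sweep(now):
    def flow_dir(d):
        return f"{FLOWBASE}/{d:%Y-%m}/{d:%Y-%m-%d}"
    dirs = [flow_dir(now)]
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    if (now - midnight).total_seconds() < 600:
        dirs.append(flow_dir(now - timedelta(days=1)))
    return dirs
```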
&lt;br /&gt;
==== flow processing: i2b ==== &lt;br /&gt;
&amp;lt;pre&amp;gt;cat &amp;gt; /usr/home/flowbin/processflows-sql.pl&lt;br /&gt;
#!/usr/bin/perl&lt;br /&gt;
&lt;br /&gt;
#use strict;&lt;br /&gt;
#$debug=1;&lt;br /&gt;
#$dry=1;&lt;br /&gt;
&lt;br /&gt;
my $log = &#039;/usr/home/flowbin/discards.log&#039;;&lt;br /&gt;
&lt;br /&gt;
use Data::Dumper;&lt;br /&gt;
&lt;br /&gt;
BEGIN {&lt;br /&gt;
    push @INC, &amp;quot;/usr/home/flowbin&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
#my $queuedir = &amp;quot;/usr/home/queue&amp;quot;;&lt;br /&gt;
my $queuedir = &amp;quot;/usr/home/working&amp;quot;;&lt;br /&gt;
my $archivedir = &amp;quot;/usr/home/archive&amp;quot;;&lt;br /&gt;
my $sqldir = &amp;quot;/usr/home/sql&amp;quot;;&lt;br /&gt;
my $sqldirworking = &amp;quot;/usr/home/sql/tmp&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
unless ($dry) {&lt;br /&gt;
    if (-e &amp;quot;$queuedir/.lock&amp;quot;) {&lt;br /&gt;
        open(FILE, &amp;quot;$queuedir/.lock&amp;quot;);&lt;br /&gt;
        my $pid = &amp;lt;FILE&amp;gt;;&lt;br /&gt;
        chomp($pid);&lt;br /&gt;
        close(FILE);&lt;br /&gt;
        if (kill(0, $pid)) {&lt;br /&gt;
            #another process is using the queue, bail out&lt;br /&gt;
            exit(0);&lt;br /&gt;
        }&lt;br /&gt;
        else {&lt;br /&gt;
            #dead lock file, remove it&lt;br /&gt;
            `rm $queuedir/.lock`;&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    open(FILE, &amp;quot;&amp;gt; $queuedir/.lock&amp;quot;);&lt;br /&gt;
    print FILE &amp;quot;$$\n&amp;quot;;&lt;br /&gt;
    close(FILE);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
opendir(DIR, $queuedir);&lt;br /&gt;
my @files = readdir(DIR);&lt;br /&gt;
closedir(DIR);&lt;br /&gt;
&lt;br /&gt;
foreach my $file (sort @files) {&lt;br /&gt;
    unless($file =~ /^\./) {&lt;br /&gt;
        $file =~ /([0-9]{4}-[0-9]{2}-[0-9]{2})\.([0-9]{2})([0-9]{2})([0-9]{2})/;&lt;br /&gt;
        my $date = &amp;quot;$1 $2:$3:$4&amp;quot;;&lt;br /&gt;
        my $outfile = &amp;quot;$1-$2:$3.sql&amp;quot;;&lt;br /&gt;
        unless (open (SQL, &amp;quot;+&amp;gt; $sqldirworking/$outfile&amp;quot;)) { die &amp;quot;cant open $sqldirworking/$outfile&amp;quot;; }&lt;br /&gt;
        my $condensedDate = $1;&lt;br /&gt;
        $condensedDate =~ s/-//g;&lt;br /&gt;
        my $iptotal = {};&lt;br /&gt;
        my $protototal = {};&lt;br /&gt;
        my $porttotal = {};&lt;br /&gt;
&lt;br /&gt;
        &amp;amp;debug(&amp;quot;started file $file at &amp;quot;);&lt;br /&gt;
        &amp;amp;debug(`date`);&lt;br /&gt;
        &amp;amp;debug(&amp;quot;getting raw flow data (flow-print)&amp;quot;);&lt;br /&gt;
        `cat $queuedir/$file | /usr/local/bin/flow-print -f 5 &amp;gt; /usr/home/working/tmp-$file`;&lt;br /&gt;
        &amp;amp;debug(&amp;quot;aggregating data at &amp;quot;);&lt;br /&gt;
        &amp;amp;debug(`date`);&lt;br /&gt;
        unless (open(DATA, &amp;quot;/usr/home/working/tmp-$file&amp;quot;)) { die &amp;quot;can&#039;t open: $!&amp;quot;; }&lt;br /&gt;
        LOOP: while (my $line = readline DATA) {&lt;br /&gt;
            my @d = split /[\s]+/, $line;&lt;br /&gt;
            if ($d[0] ne &#039;&#039; &amp;amp;&amp;amp; $d[0] ne &#039;Start&#039;) {&lt;br /&gt;
                my $addr = 0;&lt;br /&gt;
                my $port = 0;&lt;br /&gt;
&lt;br /&gt;
                #Start End Sif SrcIPaddress SrcP DIf DstIPaddress DstP P Fl Pkts Octets&lt;br /&gt;
                #0     1   2   3            4    5   6            7    8 9  10   11&lt;br /&gt;
                          #|&lt;br /&gt;
                          # outbound = 2, inbound = 1&lt;br /&gt;
&lt;br /&gt;
                my (@src_ip) = split &#039;\.&#039;, $d[3];&lt;br /&gt;
                my (@dst_ip) = split &#039;\.&#039;, $d[6];&lt;br /&gt;
&lt;br /&gt;
                if ($src_ip[0] == 69 &amp;amp;&amp;amp; $src_ip[1] == 55 &amp;amp;&amp;amp; ($src_ip[2] == 229 || $src_ip[2] == 231)) { # for i2b&lt;br /&gt;
                   $d[2] = 2;&lt;br /&gt;
                   # hack for outbound bulk traffic counted 2x&lt;br /&gt;
                   #if ($src_ip[2] == 231) { $d[11] /= 2; $d[10] /= 2; }&lt;br /&gt;
                }&lt;br /&gt;
                # note- this is where we filter out IPs only found at i2b&lt;br /&gt;
                elsif ($dst_ip[0] == 69 &amp;amp;&amp;amp; $dst_ip[1] == 55 &amp;amp;&amp;amp; ($dst_ip[2] == 229 || $dst_ip[2] == 231)) { # for i2b&lt;br /&gt;
                   $d[2] = 1;&lt;br /&gt;
                }&lt;br /&gt;
                else { next LOOP; }&lt;br /&gt;
&lt;br /&gt;
                if ($d[2] == 2) {&lt;br /&gt;
                    $addr = $d[3];&lt;br /&gt;
                    # if the dst-port is low, store that&lt;br /&gt;
                    if ($d[7] &amp;lt;= 1024) { $port = $d[7]; }&lt;br /&gt;
                    # if the src-port is low, store that&lt;br /&gt;
                    elsif ($d[4] &amp;lt;= 1024) { $port = $d[4]; }&lt;br /&gt;
                    else { $port = 99999; }&lt;br /&gt;
                }&lt;br /&gt;
                elsif ($d[2] == 1) {&lt;br /&gt;
                    $addr = $d[6];&lt;br /&gt;
                    # if the dst-port is high, assume its return traffic, try to store src-port if low&lt;br /&gt;
                    if ($d[7] &amp;gt; 1024) {&lt;br /&gt;
                        if ($d[4] &amp;lt;= 1024) { $port = $d[4]; }&lt;br /&gt;
                        else { $port = 99999; }&lt;br /&gt;
                    } else {&lt;br /&gt;
                        $port = $d[7];&lt;br /&gt;
                    }&lt;br /&gt;
                } else {&lt;br /&gt;
                    next LOOP;&lt;br /&gt;
                }&lt;br /&gt;
&lt;br /&gt;
                my (@ip) = split &#039;\.&#039;, $addr;&lt;br /&gt;
                unless ($ip[0] == 69) { next LOOP; }&lt;br /&gt;
                unless ($ip[1] == 55) { next LOOP; }&lt;br /&gt;
                unless ($ip[2] == 229 || $ip[2] == 231) { next LOOP; }&lt;br /&gt;
&lt;br /&gt;
                my $classC = &amp;quot;$ip[0]_$ip[1]_$ip[2]&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
#                          IP        dir&lt;br /&gt;
#                if ($d[10] &amp;lt; 2147483647) { $iptotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{&#039;pktTotal&#039;} += $d[10]; }&lt;br /&gt;
#                if ($d[11] &amp;lt; 2147483647) { $iptotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{&#039;octetTotal&#039;} += $d[11]; }&lt;br /&gt;
#&lt;br /&gt;
#                if ($d[10] &amp;lt; 2147483647) { $protototal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{&#039;pktTotal&#039;} += $d[10]; }&lt;br /&gt;
#                if ($d[11] &amp;lt; 2147483647) { $protototal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{&#039;octetTotal&#039;} += $d[11]; }&lt;br /&gt;
#&lt;br /&gt;
#                if ($d[10] &amp;lt; 2147483647) { $porttotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$port}-&amp;gt;{&#039;pktTotal&#039;} += $d[10]; }&lt;br /&gt;
#                if ($d[11] &amp;lt; 2147483647) { $porttotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$port}-&amp;gt;{&#039;octetTotal&#039;} += $d[11]; }&lt;br /&gt;
                $iptotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{&#039;pktTotal&#039;} += $d[10];&lt;br /&gt;
                $iptotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{&#039;octetTotal&#039;} += $d[11];&lt;br /&gt;
&lt;br /&gt;
                $protototal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{&#039;pktTotal&#039;} += $d[10];&lt;br /&gt;
                $protototal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{&#039;octetTotal&#039;} += $d[11];&lt;br /&gt;
&lt;br /&gt;
                $porttotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{$port}-&amp;gt;{&#039;pktTotal&#039;} += $d[10];&lt;br /&gt;
                $porttotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{$port}-&amp;gt;{&#039;octetTotal&#039;} += $d[11];&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
        close(DATA);&lt;br /&gt;
        `rm /usr/home/working/tmp-$file`;&lt;br /&gt;
        &amp;amp;debug(&amp;quot;processing ip totals at &amp;quot;);&lt;br /&gt;
        &amp;amp;debug(`date`);&lt;br /&gt;
        foreach my $classC (keys(%{$iptotal})) {&lt;br /&gt;
            my @values;&lt;br /&gt;
            foreach my $ip (keys(%{$iptotal-&amp;gt;{$classC}})) {&lt;br /&gt;
                foreach my $dir (keys(%{$iptotal-&amp;gt;{$classC}-&amp;gt;{$ip}})) {&lt;br /&gt;
                    my $octets = $iptotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{&#039;octetTotal&#039;};&lt;br /&gt;
                    my $packets = $iptotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{&#039;pktTotal&#039;};&lt;br /&gt;
#                    $packets = $packets &amp;gt; 2147483647 ? 0 : $packets;&lt;br /&gt;
                    if ($octets &amp;gt; 2147483647) {&lt;br /&gt;
                        my $ddir = $dir==1 ? &#039;in&#039; : &#039;out&#039;;&lt;br /&gt;
                        #print SQL &amp;quot;$date $ip $ddir $octets\n&amp;quot;;&lt;br /&gt;
#                        $octets = 0;&lt;br /&gt;
                    }&lt;br /&gt;
                    # dailyIpTotals.id = ip(no .&#039;s)-yyyymmdd-direction&lt;br /&gt;
                    my $id = &amp;quot;$ip-$condensedDate-$dir&amp;quot;;&lt;br /&gt;
                    $id =~ s/\.//g;&lt;br /&gt;
                    push @values, &amp;quot;(&#039;$date&#039;, &#039;$ip&#039;, $dir, $octets, $packets)&amp;quot;;&lt;br /&gt;
                    my $sql = &amp;quot;insert into dailyIpTotals_$classC values (&#039;$id&#039;, &#039;$date&#039;, &#039;$ip&#039;, $dir, $octets, $packets) ON DUPLICATE KEY UPDATE octets=octets+$octets, packets=packets+$packets&amp;quot;;&lt;br /&gt;
                    print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
                    print SQL &amp;quot;$sql;\n&amp;quot;;&lt;br /&gt;
#                    $db-&amp;gt;query(&amp;quot;insert into ipTotals values (&#039;$date&#039;, &#039;$ip&#039;, $dir, $octets, $packets)&amp;quot;);&lt;br /&gt;
                }&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
            # break inserts into batches of ~100 records&lt;br /&gt;
            &amp;amp;debug(&amp;quot;inserting &amp;quot; . scalar(@values) . &amp;quot; values&amp;quot;);&lt;br /&gt;
            while (@values) { # was ($#values &amp;gt; 0), which dropped a final lone record&lt;br /&gt;
                my $sql = &amp;quot;insert into ipTotals_$classC values &amp;quot;;&lt;br /&gt;
                my $max_index = $#values &amp;gt; 100 ? 100 : $#values;&lt;br /&gt;
                for (my $i=0; $i&amp;lt;=$max_index; $i++) {&lt;br /&gt;
                    $sql .= shift @values;&lt;br /&gt;
                    $sql .= &#039;,&#039;;&lt;br /&gt;
                }&lt;br /&gt;
                chop $sql;&lt;br /&gt;
                print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
                print SQL &amp;quot;$sql;\n&amp;quot;;&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
#        &amp;amp;debug(&amp;quot;processing protocol totals at &amp;quot;);&lt;br /&gt;
#        &amp;amp;debug(`date`);&lt;br /&gt;
#        foreach my $classC (keys(%{$protototal})) {&lt;br /&gt;
#            $db-&amp;gt;query(&amp;quot;lock tables dailyProtoTotals_$classC write&amp;quot;) unless $dry;&lt;br /&gt;
#            my @values;&lt;br /&gt;
#            foreach my $ip (keys(%{$protototal-&amp;gt;{$classC}})) {&lt;br /&gt;
#                foreach my $dir (keys(%{$protototal-&amp;gt;{$classC}-&amp;gt;{$ip}})) {&lt;br /&gt;
#                    foreach my $proto (keys(%{$protototal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}})) {&lt;br /&gt;
#                        my $octets = $protototal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{$proto}-&amp;gt;{&#039;octetTotal&#039;};&lt;br /&gt;
#                        my $packets = $protototal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{$proto}-&amp;gt;{&#039;pktTotal&#039;};&lt;br /&gt;
# #                        $octets = $octets &amp;gt; 2147483647 ? 0 : $octets;&lt;br /&gt;
# #                        $packets = $packets &amp;gt; 2147483647 ? 0 : $packets;&lt;br /&gt;
#                        # dailyProtoTotals.id = ip(no .&#039;s)-yyyymmdd-direction-proto&lt;br /&gt;
#                        my $id = &amp;quot;$ip-$condensedDate-$dir-$proto&amp;quot;;&lt;br /&gt;
#                        $id =~ s/\.//g;&lt;br /&gt;
#                        push @values, &amp;quot;(&#039;$date&#039;, &#039;$ip&#039;, $dir, $proto, $octets, $packets)&amp;quot;;&lt;br /&gt;
#                        my $sql = &amp;quot;insert into dailyProtoTotals_$classC values (&#039;$id&#039;, &#039;$date&#039;, &#039;$ip&#039;, $dir, $proto, $octets, $packets) ON DUPLICATE KEY UPDATE octets=octets+$octets, packets=packets+$packets&amp;quot;;&lt;br /&gt;
#                        print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
#                        $db-&amp;gt;query($sql) unless $dry;&lt;br /&gt;
# #                        $db-&amp;gt;query(&amp;quot;insert into protoTotals values (&#039;$date&#039;, &#039;$ip&#039;, $dir, $proto, $octets, $packets)&amp;quot;);&lt;br /&gt;
#                    }&lt;br /&gt;
#                }&lt;br /&gt;
#            }&lt;br /&gt;
#            $db-&amp;gt;query(&amp;quot;unlock tables&amp;quot;) unless $dry;&lt;br /&gt;
#            my $sql = &amp;quot;insert into protoTotals_$classC values &amp;quot;;&lt;br /&gt;
#            $sql .= join &#039;,&#039;, @values;&lt;br /&gt;
#            $db-&amp;gt;query(&amp;quot;lock tables protoTotals_$classC write&amp;quot;) unless $dry;&lt;br /&gt;
#            print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
#            $db-&amp;gt;query($sql) unless $dry;&lt;br /&gt;
#            $db-&amp;gt;query(&amp;quot;unlock tables&amp;quot;) unless $dry;&lt;br /&gt;
#        }&lt;br /&gt;
&lt;br /&gt;
        &amp;amp;debug(&amp;quot;processing port totals at &amp;quot;);&lt;br /&gt;
        &amp;amp;debug(`date`);&lt;br /&gt;
        foreach my $classC (keys(%{$porttotal})) {&lt;br /&gt;
            my @values;&lt;br /&gt;
            foreach my $ip (keys(%{$porttotal-&amp;gt;{$classC}})) {&lt;br /&gt;
                foreach my $dir (keys(%{$porttotal-&amp;gt;{$classC}-&amp;gt;{$ip}})) {&lt;br /&gt;
                    foreach my $proto (keys(%{$porttotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}})) {&lt;br /&gt;
                        foreach my $port (keys(%{$porttotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{$proto}})) {&lt;br /&gt;
                            my $octets = $porttotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{$proto}-&amp;gt;{$port}-&amp;gt;{&#039;octetTotal&#039;};&lt;br /&gt;
                            my $packets = $porttotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{$proto}-&amp;gt;{$port}-&amp;gt;{&#039;pktTotal&#039;};&lt;br /&gt;
    #                        $octets = $octets &amp;gt; 2147483647 ? 0 : $octets;&lt;br /&gt;
    #                        $packets = $packets &amp;gt; 2147483647 ? 0 : $packets;&lt;br /&gt;
&lt;br /&gt;
                            # dailyPortTotals.id = ip(no .&#039;s)-yyyymmdd-direction-protocol-port&lt;br /&gt;
                            my $id = &amp;quot;$ip-$condensedDate-$dir-$proto-$port&amp;quot;;&lt;br /&gt;
                            $id =~ s/\.//g;&lt;br /&gt;
                            push @values, &amp;quot;(&#039;$date&#039;, &#039;$ip&#039;, $dir, $proto, $port, $octets, $packets)&amp;quot;;&lt;br /&gt;
                            my $sql = &amp;quot;insert into dailyPortTotals_$classC values (&#039;$id&#039;, &#039;$date&#039;, &#039;$ip&#039;, $dir, $proto, $port, $octets, $packets) ON DUPLICATE KEY UPDATE octets=octets+$octets, packets=packets+$packets&amp;quot;;&lt;br /&gt;
                            print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
                            print SQL &amp;quot;$sql;\n&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
    #                        $db-&amp;gt;query(&amp;quot;insert into portTotals values (&#039;$date&#039;, &#039;$ip&#039;, $dir, $port, $octets, $packets)&amp;quot;);&lt;br /&gt;
                        }&lt;br /&gt;
                    }&lt;br /&gt;
                }&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
            # break inserts into batches of ~100 records&lt;br /&gt;
            &amp;amp;debug(&amp;quot;inserting &amp;quot; . scalar(@values) . &amp;quot; values&amp;quot;);&lt;br /&gt;
            while (@values) { # was ($#values &amp;gt; 0), which dropped a final lone record&lt;br /&gt;
                my $sql = &amp;quot;insert into portTotals_$classC values &amp;quot;;&lt;br /&gt;
                my $max_index = $#values &amp;gt; 100 ? 100 : $#values;&lt;br /&gt;
                for (my $i=0; $i&amp;lt;=$max_index; $i++) {&lt;br /&gt;
                    $sql .= shift @values;&lt;br /&gt;
                    $sql .= &#039;,&#039;;&lt;br /&gt;
                }&lt;br /&gt;
                chop $sql;&lt;br /&gt;
                print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
                print SQL &amp;quot;$sql;\n&amp;quot;;&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
# max id lengths:&lt;br /&gt;
# dailyIpTotals.id    = ip(no .&#039;s)-yyyymmdd-direction       : 12+1+8+1+1 = 23&lt;br /&gt;
# dailyProtoTotals.id = ip(no .&#039;s)-yyyymmdd-direction-proto : 12+1+8+1+1+3 = 26&lt;br /&gt;
# dailyPortTotals.id  = ip(no .&#039;s)-yyyymmdd-direction-port  : 12+1+8+1+1+5 = 28&lt;br /&gt;
        #print &amp;quot;finished at &amp;quot;;&lt;br /&gt;
        #print `date`;&lt;br /&gt;
        `mv $queuedir/$file $archivedir` unless $dry;&lt;br /&gt;
        close(SQL);&lt;br /&gt;
        `bzip2 $sqldirworking/$outfile`;&lt;br /&gt;
        `mv $sqldirworking/$outfile.bz2 $sqldir/`;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
`rm $queuedir/.lock` unless $dry;&lt;br /&gt;
&lt;br /&gt;
sub debug {&lt;br /&gt;
    my $message = shift;&lt;br /&gt;
    if ($debug) {&lt;br /&gt;
        print &amp;quot;$message\n&amp;quot;;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# var full during ft-v05.2005-03-28.084500-0800 and&lt;br /&gt;
# 2005-02-24 69.55.226&lt;br /&gt;
&lt;br /&gt;
# all port/daily totals before 2005-04-07&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This script sends the sql files to the traffic server for processing:&lt;br /&gt;
&amp;lt;pre&amp;gt;cat &amp;gt; /usr/home/flowbin/sendsql.pl&lt;br /&gt;
#!/usr/bin/perl&lt;br /&gt;
&lt;br /&gt;
#use strict;&lt;br /&gt;
#$debug=1;&lt;br /&gt;
#$dry=1;&lt;br /&gt;
&lt;br /&gt;
my $remote = &amp;quot;69.55.233.199&amp;quot;;&lt;br /&gt;
my $sqldir = &amp;quot;/usr/home/sql&amp;quot;;&lt;br /&gt;
my $archive = &amp;quot;/usr/home/archive&amp;quot;;&lt;br /&gt;
my $sqldirremote = &amp;quot;/data/bwdb2/pending/&amp;quot;;&lt;br /&gt;
my @err;&lt;br /&gt;
unless ($dry) {&lt;br /&gt;
    if (-e &amp;quot;$sqldir/.lock&amp;quot;) {&lt;br /&gt;
        open(FILE, &amp;quot;$sqldir/.lock&amp;quot;);&lt;br /&gt;
        my $pid = &amp;lt;FILE&amp;gt;;&lt;br /&gt;
        chomp($pid);&lt;br /&gt;
        close(FILE);&lt;br /&gt;
        if (kill(0, $pid)) {&lt;br /&gt;
            #another process is using the queue, bail out&lt;br /&gt;
            exit(0);&lt;br /&gt;
        }&lt;br /&gt;
        else {&lt;br /&gt;
            #dead lock file, remove it&lt;br /&gt;
            `rm $sqldir/.lock`;&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    open(FILE, &amp;quot;&amp;gt; $sqldir/.lock&amp;quot;);&lt;br /&gt;
    print FILE &amp;quot;$$\n&amp;quot;;&lt;br /&gt;
    close(FILE);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
opendir(DIR, $sqldir);&lt;br /&gt;
my @files = readdir(DIR);&lt;br /&gt;
closedir(DIR);&lt;br /&gt;
&lt;br /&gt;
foreach my $file (sort @files) {&lt;br /&gt;
   next unless $file =~ /bz2$/;&lt;br /&gt;
&lt;br /&gt;
   my $r = `scp -Cq $sqldir/$file $remote:$sqldirremote 2&amp;gt;&amp;amp;1`;&lt;br /&gt;
#   print &amp;quot;scp $sqldir/$file $remote:$sqldirremote&amp;quot;;&lt;br /&gt;
   unless ($?==0) {&lt;br /&gt;
      push @err, &amp;quot;scp -Cq $sqldir/$file $remote:$sqldirremote ($r)&amp;quot;;&lt;br /&gt;
   }&lt;br /&gt;
   else {&lt;br /&gt;
      `mv $sqldir/$file $archive`;&lt;br /&gt;
      `ssh $remote mv $sqldirremote/$file $sqldirremote/${file}.done`;&lt;br /&gt;
   }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
`rm $sqldir/.lock` unless $dry;&lt;br /&gt;
&lt;br /&gt;
if (@err) {&lt;br /&gt;
   email_support(&#039;bwdb2: sendsql.pl error&#039;,join &amp;quot;\n&amp;quot;, @err);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub email_support {&lt;br /&gt;
    my $subj=shift;&lt;br /&gt;
    my $body=shift;&lt;br /&gt;
    use Mail::Sendmail;&lt;br /&gt;
&lt;br /&gt;
    # prepare message&lt;br /&gt;
    my %mail = (&lt;br /&gt;
        To      =&amp;gt; &#039;support@johncompanies.com,dave@johncompanies.com&#039;,&lt;br /&gt;
        From    =&amp;gt; &#039;support@johncompanies.com&#039;,&lt;br /&gt;
        Subject =&amp;gt; $subj,&lt;br /&gt;
        Message =&amp;gt; $body,&lt;br /&gt;
        smtp    =&amp;gt; &#039;mail.johncompanies.com&#039;,&lt;br /&gt;
    );&lt;br /&gt;
    sendmail(%mail) || warn &amp;quot;Error: $Mail::Sendmail::error&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub debug {&lt;br /&gt;
    my $message = shift;&lt;br /&gt;
    if ($debug) {&lt;br /&gt;
        print &amp;quot;$message\n&amp;quot;;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# var full during ft-v05.2005-03-28.084500-0800 and&lt;br /&gt;
# 2005-02-24 69.55.226&lt;br /&gt;
&lt;br /&gt;
# all port/daily totals before 2005-04-07&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;crontab -e&lt;br /&gt;
#process flows&lt;br /&gt;
2,17,32,47 * * * * /usr/home/flowbin/processflows-sql.pl&lt;br /&gt;
#move sql commands to traffic db&lt;br /&gt;
8,23,38,53 * * * * /usr/home/flowbin/sendsql.pl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== flow processing: castle ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;gt; /usr/home/flowbin/processflows.pl&lt;br /&gt;
&lt;br /&gt;
#!/usr/bin/perl&lt;br /&gt;
&lt;br /&gt;
#use strict;&lt;br /&gt;
#$debug=1;&lt;br /&gt;
#$dry=1;&lt;br /&gt;
&lt;br /&gt;
my $log = &#039;/usr/home/flowbin/discards.log&#039;;&lt;br /&gt;
&lt;br /&gt;
use Data::Dumper;&lt;br /&gt;
&lt;br /&gt;
BEGIN {&lt;br /&gt;
    push @INC, &amp;quot;/usr/home/flowbin&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
use db;&lt;br /&gt;
&lt;br /&gt;
#my $queuedir = &amp;quot;/usr/home/queue&amp;quot;;&lt;br /&gt;
my $queuedir = &amp;quot;/usr/home/working&amp;quot;;&lt;br /&gt;
my $archivedir = &amp;quot;/usr/home/archive&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
unless ($dry) {&lt;br /&gt;
    if (-e &amp;quot;$queuedir/.lock&amp;quot;) {&lt;br /&gt;
        open(FILE, &amp;quot;$queuedir/.lock&amp;quot;);&lt;br /&gt;
        my $pid = &amp;lt;FILE&amp;gt;;&lt;br /&gt;
        chomp($pid);&lt;br /&gt;
        close(FILE);&lt;br /&gt;
        if (kill(0, $pid)) {&lt;br /&gt;
            #another process is using the queue, bail out&lt;br /&gt;
            exit(0);&lt;br /&gt;
        }&lt;br /&gt;
        else {&lt;br /&gt;
            #dead lock file, remove it&lt;br /&gt;
            `rm $queuedir/.lock`;&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    open(FILE, &amp;quot;&amp;gt; $queuedir/.lock&amp;quot;);&lt;br /&gt;
    print FILE &amp;quot;$$\n&amp;quot;;&lt;br /&gt;
    close(FILE);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
my $db = db-&amp;gt;new();&lt;br /&gt;
$db-&amp;gt;connect(&#039;traffic&#039;, &#039;&#039;, &#039;root&#039;, &#039;5over3&#039;) || die $db-&amp;gt;{&#039;error&#039;};&lt;br /&gt;
&lt;br /&gt;
opendir(DIR, $queuedir);&lt;br /&gt;
my @files = readdir(DIR);&lt;br /&gt;
closedir(DIR);&lt;br /&gt;
&lt;br /&gt;
foreach my $file (sort @files) {&lt;br /&gt;
    unless($file =~ /^\./) {&lt;br /&gt;
        $file =~ /([0-9]{4}-[0-9]{2}-[0-9]{2})\.([0-9]{2})([0-9]{2})([0-9]{2})/;&lt;br /&gt;
        my $date = &amp;quot;$1 $2:$3:$4&amp;quot;;&lt;br /&gt;
        my $condensedDate = $1;&lt;br /&gt;
        $condensedDate =~ s/-//g;&lt;br /&gt;
        my $iptotal = {};&lt;br /&gt;
        my $protototal = {};&lt;br /&gt;
        my $porttotal = {};&lt;br /&gt;
&lt;br /&gt;
        &amp;amp;debug(&amp;quot;started file $file at &amp;quot;);&lt;br /&gt;
        &amp;amp;debug(`date`);&lt;br /&gt;
        &amp;amp;debug(&amp;quot;getting raw flow data (flow-print)&amp;quot;);&lt;br /&gt;
        `cat $queuedir/$file | /usr/local/bin/flow-print -f 5 &amp;gt; /usr/home/working/tmp-$file`;&lt;br /&gt;
        &amp;amp;debug(&amp;quot;aggregating data at &amp;quot;);&lt;br /&gt;
        &amp;amp;debug(`date`);&lt;br /&gt;
        unless (open(DATA, &amp;quot;/usr/home/working/tmp-$file&amp;quot;)) { die &amp;quot;can&#039;t open: $!&amp;quot;; }&lt;br /&gt;
        LOOP: while (my $line = readline DATA) {&lt;br /&gt;
            my @d = split /[\s]+/, $line;&lt;br /&gt;
            if ($d[0] ne &#039;&#039; &amp;amp;&amp;amp; $d[0] ne &#039;Start&#039;) {&lt;br /&gt;
                my $addr = 0;&lt;br /&gt;
                my $port = 0;&lt;br /&gt;
&lt;br /&gt;
                #Start End Sif SrcIPaddress SrcP DIf DstIPaddress DstP P Fl Pkts Octets&lt;br /&gt;
                #0     1   2   3            4    5   6            7    8 9  10   11&lt;br /&gt;
                          #|&lt;br /&gt;
                          # outbound = 2, inbound = 1&lt;br /&gt;
&lt;br /&gt;
                my (@src_ip) = split &#039;\.&#039;, $d[3];&lt;br /&gt;
                my (@dst_ip) = split &#039;\.&#039;, $d[6];&lt;br /&gt;
&lt;br /&gt;
                if ($src_ip[0] == 69 &amp;amp;&amp;amp; $src_ip[1] == 55 &amp;amp;&amp;amp;&lt;br /&gt;
                    $src_ip[2] &amp;gt;= 224 &amp;amp;&amp;amp; $src_ip[2] &amp;lt;= 239 &amp;amp;&amp;amp;&lt;br /&gt;
                    $src_ip[2] != 229 &amp;amp;&amp;amp; $src_ip[2] != 231) { # for castle&lt;br /&gt;
#                if ($src_ip[0] == 69 &amp;amp;&amp;amp; $src_ip[1] == 55 &amp;amp;&amp;amp; $src_ip[2] == 229) { # for i2b&lt;br /&gt;
                   $d[2] = 2;&lt;br /&gt;
                   # hack for outbound bulk traffic counted 2x&lt;br /&gt;
                   if ($dst_ip[2] == 234) { $d[11] /= 2; $d[10] /= 2; }&lt;br /&gt;
                }&lt;br /&gt;
                elsif ($dst_ip[0] == 69 &amp;amp;&amp;amp; $dst_ip[1] == 55 &amp;amp;&amp;amp;&lt;br /&gt;
                       $dst_ip[2] &amp;gt;= 224 &amp;amp;&amp;amp; $dst_ip[2] &amp;lt;= 239 &amp;amp;&amp;amp;&lt;br /&gt;
                       $dst_ip[2] != 229 &amp;amp;&amp;amp; $dst_ip[2] != 231) { # for castle&lt;br /&gt;
#                elsif ($dst_ip[0] == 69 &amp;amp;&amp;amp; $dst_ip[1] == 55 &amp;amp;&amp;amp; $dst_ip[2] == 229) { # for i2b&lt;br /&gt;
                   $d[2] = 1;&lt;br /&gt;
                }&lt;br /&gt;
                else { next LOOP; }&lt;br /&gt;
&lt;br /&gt;
                if ($d[2] == 2) {&lt;br /&gt;
                    $addr = $d[3];&lt;br /&gt;
                    # if the dst-port is low, store that&lt;br /&gt;
                    if ($d[7] &amp;lt;= 1024) { $port = $d[7]; }&lt;br /&gt;
                    # if the src-port is low, store that&lt;br /&gt;
                    elsif ($d[4] &amp;lt;= 1024) { $port = $d[4]; }&lt;br /&gt;
                    else { $port = 99999; }&lt;br /&gt;
                }&lt;br /&gt;
                elsif ($d[2] == 1) {&lt;br /&gt;
                    $addr = $d[6];&lt;br /&gt;
                    # if the dst-port is high, assume it&#039;s return traffic; try to store src-port if low&lt;br /&gt;
                    if ($d[7] &amp;gt; 1024) {&lt;br /&gt;
                        if ($d[4] &amp;lt;= 1024) { $port = $d[4]; }&lt;br /&gt;
                        else { $port = 99999; }&lt;br /&gt;
                    } else {&lt;br /&gt;
                        $port = $d[7];&lt;br /&gt;
                    }&lt;br /&gt;
                } else {&lt;br /&gt;
                    next LOOP;&lt;br /&gt;
                }&lt;br /&gt;
&lt;br /&gt;
                my (@ip) = split &#039;\.&#039;, $addr;&lt;br /&gt;
                unless ($ip[0] == 69) { next LOOP; }&lt;br /&gt;
                unless ($ip[1] == 55) { next LOOP; }&lt;br /&gt;
                unless ($ip[2] &amp;gt;= 224 &amp;amp;&amp;amp; $ip[2] &amp;lt;= 239 &amp;amp;&amp;amp; $ip[2] != 229 &amp;amp;&amp;amp; $ip[2] != 231) { next LOOP; }&lt;br /&gt;
#                unless ($ip[2] == 229) { next LOOP; }&lt;br /&gt;
&lt;br /&gt;
                my $classC = &amp;quot;$ip[0]_$ip[1]_$ip[2]&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
#                          IP        dir&lt;br /&gt;
#                if ($d[10] &amp;lt; 2147483647) { $iptotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{&#039;pktTotal&#039;} += $d[10]; }&lt;br /&gt;
#                if ($d[11] &amp;lt; 2147483647) { $iptotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{&#039;octetTotal&#039;} += $d[11]; }&lt;br /&gt;
#&lt;br /&gt;
#                if ($d[10] &amp;lt; 2147483647) { $protototal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{&#039;pktTotal&#039;} += $d[10]; }&lt;br /&gt;
#                if ($d[11] &amp;lt; 2147483647) { $protototal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{&#039;octetTotal&#039;} += $d[11]; }&lt;br /&gt;
#&lt;br /&gt;
#                if ($d[10] &amp;lt; 2147483647) { $porttotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$port}-&amp;gt;{&#039;pktTotal&#039;} += $d[10]; }&lt;br /&gt;
#                if ($d[11] &amp;lt; 2147483647) { $porttotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$port}-&amp;gt;{&#039;octetTotal&#039;} += $d[11]; }&lt;br /&gt;
                $iptotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{&#039;pktTotal&#039;} += $d[10];&lt;br /&gt;
                $iptotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{&#039;octetTotal&#039;} += $d[11];&lt;br /&gt;
&lt;br /&gt;
                $protototal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{&#039;pktTotal&#039;} += $d[10];&lt;br /&gt;
                $protototal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{&#039;octetTotal&#039;} += $d[11];&lt;br /&gt;
&lt;br /&gt;
                $porttotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{$port}-&amp;gt;{&#039;pktTotal&#039;} += $d[10];&lt;br /&gt;
                $porttotal-&amp;gt;{$classC}-&amp;gt;{$addr}-&amp;gt;{$d[2]}-&amp;gt;{$d[8]}-&amp;gt;{$port}-&amp;gt;{&#039;octetTotal&#039;} += $d[11];&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
        close(DATA);&lt;br /&gt;
        `rm /usr/home/working/tmp-$file`;&lt;br /&gt;
        &amp;amp;debug(&amp;quot;processing ip totals at &amp;quot;);&lt;br /&gt;
        &amp;amp;debug(`date`);&lt;br /&gt;
        foreach my $classC (keys(%{$iptotal})) {&lt;br /&gt;
            $db-&amp;gt;query(&amp;quot;lock tables dailyIpTotals_$classC write&amp;quot;) unless $dry;&lt;br /&gt;
            my @values;&lt;br /&gt;
            foreach my $ip (keys(%{$iptotal-&amp;gt;{$classC}})) {&lt;br /&gt;
                foreach my $dir (keys(%{$iptotal-&amp;gt;{$classC}-&amp;gt;{$ip}})) {&lt;br /&gt;
                    my $octets = $iptotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{&#039;octetTotal&#039;};&lt;br /&gt;
                    my $packets = $iptotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{&#039;pktTotal&#039;};&lt;br /&gt;
#                    $packets = $packets &amp;gt; 2147483647 ? 0 : $packets;&lt;br /&gt;
                    if ($octets &amp;gt; 2147483647) {&lt;br /&gt;
                        my $ddir = $dir==1 ? &#039;in&#039; : &#039;out&#039;;&lt;br /&gt;
                        `echo &amp;quot;$date $ip $ddir $octets\n&amp;quot; &amp;gt;&amp;gt; $log`;&lt;br /&gt;
#                        $octets = 0;&lt;br /&gt;
                    }&lt;br /&gt;
                    # dailyIpTotals.id = ip(no .&#039;s)-yyyymmdd-direction&lt;br /&gt;
                    my $id = &amp;quot;$ip-$condensedDate-$dir&amp;quot;;&lt;br /&gt;
                    $id =~ s/\.//g;&lt;br /&gt;
                    push @values, &amp;quot;(&#039;$date&#039;, &#039;$ip&#039;, $dir, $octets, $packets)&amp;quot;;&lt;br /&gt;
                    my $sql = &amp;quot;insert into dailyIpTotals_$classC values (&#039;$id&#039;, &#039;$date&#039;, &#039;$ip&#039;, $dir, $octets, $packets) ON DUPLICATE KEY UPDATE octets=octets+$octets, packets=packets+$packets&amp;quot;;&lt;br /&gt;
                    print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
                    $db-&amp;gt;query($sql) unless $dry;&lt;br /&gt;
#                    $db-&amp;gt;query(&amp;quot;insert into ipTotals values (&#039;$date&#039;, &#039;$ip&#039;, $dir, $octets, $packets)&amp;quot;);&lt;br /&gt;
                }&lt;br /&gt;
            }&lt;br /&gt;
            $db-&amp;gt;query(&amp;quot;unlock tables&amp;quot;) unless $dry;&lt;br /&gt;
&lt;br /&gt;
            $db-&amp;gt;query(&amp;quot;lock tables ipTotals_$classC write&amp;quot;) unless $dry;&lt;br /&gt;
            # break inserts into batches of ~100 records&lt;br /&gt;
            &amp;amp;debug(&amp;quot;inserting &amp;quot; . scalar(@values) . &amp;quot; values&amp;quot;);&lt;br /&gt;
            while (@values) { # was ($#values &amp;gt; 0), which dropped a final lone record&lt;br /&gt;
                my $sql = &amp;quot;insert into ipTotals_$classC values &amp;quot;;&lt;br /&gt;
                my $max_index = $#values &amp;gt; 100 ? 100 : $#values;&lt;br /&gt;
                for (my $i=0; $i&amp;lt;=$max_index; $i++) {&lt;br /&gt;
                    $sql .= shift @values;&lt;br /&gt;
                    $sql .= &#039;,&#039;;&lt;br /&gt;
                }&lt;br /&gt;
                chop $sql;&lt;br /&gt;
                print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
                $db-&amp;gt;query($sql) unless $dry;&lt;br /&gt;
            }&lt;br /&gt;
            $db-&amp;gt;query(&amp;quot;unlock tables&amp;quot;) unless $dry;&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        sleep 20;&lt;br /&gt;
#        &amp;amp;debug(&amp;quot;processing protocol totals at &amp;quot;);&lt;br /&gt;
#        &amp;amp;debug(`date`);&lt;br /&gt;
#        foreach my $classC (keys(%{$protototal})) {&lt;br /&gt;
#            $db-&amp;gt;query(&amp;quot;lock tables dailyProtoTotals_$classC write&amp;quot;) unless $dry;&lt;br /&gt;
#            my @values;&lt;br /&gt;
#            foreach my $ip (keys(%{$protototal-&amp;gt;{$classC}})) {&lt;br /&gt;
#                foreach my $dir (keys(%{$protototal-&amp;gt;{$classC}-&amp;gt;{$ip}})) {&lt;br /&gt;
#                    foreach my $proto (keys(%{$protototal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}})) {&lt;br /&gt;
#                        my $octets = $protototal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{$proto}-&amp;gt;{&#039;octetTotal&#039;};&lt;br /&gt;
#                        my $packets = $protototal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{$proto}-&amp;gt;{&#039;pktTotal&#039;};&lt;br /&gt;
# #                        $octets = $octets &amp;gt; 2147483647 ? 0 : $octets;&lt;br /&gt;
# #                        $packets = $packets &amp;gt; 2147483647 ? 0 : $packets;&lt;br /&gt;
#                        # dailyProtoTotals.id = ip(no .&#039;s)-yyyymmdd-direction-proto&lt;br /&gt;
#                        my $id = &amp;quot;$ip-$condensedDate-$dir-$proto&amp;quot;;&lt;br /&gt;
#                        $id =~ s/\.//g;&lt;br /&gt;
#                        push @values, &amp;quot;(&#039;$date&#039;, &#039;$ip&#039;, $dir, $proto, $octets, $packets)&amp;quot;;&lt;br /&gt;
#                        my $sql = &amp;quot;insert into dailyProtoTotals_$classC values (&#039;$id&#039;, &#039;$date&#039;, &#039;$ip&#039;, $dir, $proto, $octets, $packets) ON DUPLICATE KEY UPDATE octets=octets+$octets, packets=packets+$packets&amp;quot;;&lt;br /&gt;
#                        print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
#                        $db-&amp;gt;query($sql) unless $dry;&lt;br /&gt;
# #                        $db-&amp;gt;query(&amp;quot;insert into protoTotals values (&#039;$date&#039;, &#039;$ip&#039;, $dir, $proto, $octets, $packets)&amp;quot;);&lt;br /&gt;
#                    }&lt;br /&gt;
#                }&lt;br /&gt;
#            }&lt;br /&gt;
#            $db-&amp;gt;query(&amp;quot;unlock tables&amp;quot;) unless $dry;&lt;br /&gt;
#            my $sql = &amp;quot;insert into protoTotals_$classC values &amp;quot;;&lt;br /&gt;
#            $sql .= join &#039;,&#039;, @values;&lt;br /&gt;
#            $db-&amp;gt;query(&amp;quot;lock tables protoTotals_$classC write&amp;quot;) unless $dry;&lt;br /&gt;
#            print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
#            $db-&amp;gt;query($sql) unless $dry;&lt;br /&gt;
#            $db-&amp;gt;query(&amp;quot;unlock tables&amp;quot;) unless $dry;&lt;br /&gt;
#        }&lt;br /&gt;
&lt;br /&gt;
        &amp;amp;debug(&amp;quot;processing port totals at &amp;quot;);&lt;br /&gt;
        &amp;amp;debug(`date`);&lt;br /&gt;
        foreach my $classC (keys(%{$porttotal})) {&lt;br /&gt;
            $db-&amp;gt;query(&amp;quot;lock tables dailyPortTotals_$classC write&amp;quot;) unless $dry;&lt;br /&gt;
            my @values;&lt;br /&gt;
            foreach my $ip (keys(%{$porttotal-&amp;gt;{$classC}})) {&lt;br /&gt;
                foreach my $dir (keys(%{$porttotal-&amp;gt;{$classC}-&amp;gt;{$ip}})) {&lt;br /&gt;
                    foreach my $proto (keys(%{$porttotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}})) {&lt;br /&gt;
                        foreach my $port (keys(%{$porttotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{$proto}})) {&lt;br /&gt;
                            my $octets = $porttotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{$proto}-&amp;gt;{$port}-&amp;gt;{&#039;octetTotal&#039;};&lt;br /&gt;
                            my $packets = $porttotal-&amp;gt;{$classC}-&amp;gt;{$ip}-&amp;gt;{$dir}-&amp;gt;{$proto}-&amp;gt;{$port}-&amp;gt;{&#039;pktTotal&#039;};&lt;br /&gt;
    #                        $octets = $octets &amp;gt; 2147483647 ? 0 : $octets;&lt;br /&gt;
    #                        $packets = $packets &amp;gt; 2147483647 ? 0 : $packets;&lt;br /&gt;
&lt;br /&gt;
                            # dailyPortTotals.id = ip(no .&#039;s)-yyyymmdd-direction-protocol-port&lt;br /&gt;
                            my $id = &amp;quot;$ip-$condensedDate-$dir-$proto-$port&amp;quot;;&lt;br /&gt;
                            $id =~ s/\.//g;&lt;br /&gt;
                            push @values, &amp;quot;(&#039;$date&#039;, &#039;$ip&#039;, $dir, $proto, $port, $octets, $packets)&amp;quot;;&lt;br /&gt;
                            my $sql = &amp;quot;insert into dailyPortTotals_$classC values (&#039;$id&#039;, &#039;$date&#039;, &#039;$ip&#039;, $dir, $proto, $port, $octets, $packets) ON DUPLICATE KEY UPDATE octets=octets+$octets, packets=packets+$packets&amp;quot;;&lt;br /&gt;
                            print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
                            $db-&amp;gt;query($sql) unless $dry;&lt;br /&gt;
    #                        $db-&amp;gt;query(&amp;quot;insert into portTotals values (&#039;$date&#039;, &#039;$ip&#039;, $dir, $port, $octets, $packets)&amp;quot;);&lt;br /&gt;
                        }&lt;br /&gt;
                    }&lt;br /&gt;
                }&lt;br /&gt;
            }&lt;br /&gt;
            $db-&amp;gt;query(&amp;quot;unlock tables&amp;quot;) unless $dry;&lt;br /&gt;
&lt;br /&gt;
            $db-&amp;gt;query(&amp;quot;lock tables portTotals_$classC write&amp;quot;) unless $dry;&lt;br /&gt;
            # break inserts into 100 records at a time&lt;br /&gt;
            &amp;amp;debug(&amp;quot;inserting &amp;quot; . scalar(@values) . &amp;quot; values&amp;quot;);&lt;br /&gt;
            while (@values) {&lt;br /&gt;
                my $sql = &amp;quot;insert into portTotals_$classC values &amp;quot;;&lt;br /&gt;
                my $max_index = $#values &amp;gt; 100 ? 100 : $#values;&lt;br /&gt;
                for (my $i=0; $i&amp;lt;=$max_index; $i++) {&lt;br /&gt;
                    $sql .= shift @values;&lt;br /&gt;
                    $sql .= &#039;,&#039;;&lt;br /&gt;
                }&lt;br /&gt;
                chop $sql;&lt;br /&gt;
                print &amp;quot;$sql\n&amp;quot; if $dry;&lt;br /&gt;
                $db-&amp;gt;query($sql) unless $dry;&lt;br /&gt;
            }&lt;br /&gt;
            $db-&amp;gt;query(&amp;quot;unlock tables&amp;quot;) unless $dry;&lt;br /&gt;
            sleep 10;&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
#                       12     1 8      1    1= 23&lt;br /&gt;
# dailyIpTotals.id = ip(no .&#039;s)-yyyymmdd-direction&lt;br /&gt;
#                       12        1  8     1   1       3=26&lt;br /&gt;
# dailyProtoTotals.id = ip(no .&#039;s)-yyyymmdd-direction-proto&lt;br /&gt;
#                       12       1   8    1     1     5=28&lt;br /&gt;
# dailyPortTotals.id = ip(no .&#039;s)-yyyymmdd-direction-port&lt;br /&gt;
        #print &amp;quot;finished at &amp;quot;;&lt;br /&gt;
        #print `date`;&lt;br /&gt;
        `mv $queuedir/$file $archivedir` unless $dry;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
`rm $queuedir/.lock` unless $dry;&lt;br /&gt;
&lt;br /&gt;
sub debug {&lt;br /&gt;
    my $message = shift;&lt;br /&gt;
    if ($debug) {&lt;br /&gt;
        print &amp;quot;$message\n&amp;quot;;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# var full during ft-v05.2005-03-28.084500-0800 and&lt;br /&gt;
# 2005-02-24 69.55.226&lt;br /&gt;
&lt;br /&gt;
# all port/daily totals before 2005-04-07&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;gt; /usr/home/flowbin/db.pm&lt;br /&gt;
&lt;br /&gt;
#!/usr/bin/perl&lt;br /&gt;
#&lt;br /&gt;
# $Header: /usr/cvs/newgw/lib/db.pm,v 1.4 2003/06/05 18:20:01 glenn Exp $&lt;br /&gt;
#&lt;br /&gt;
# Copyright (c) 2003&lt;br /&gt;
#      e-Monitoring Networks, Inc.  All rights reserved.&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
package db;&lt;br /&gt;
&lt;br /&gt;
use strict;&lt;br /&gt;
use DBI;&lt;br /&gt;
&lt;br /&gt;
sub new {&lt;br /&gt;
    my $class = shift;&lt;br /&gt;
    my $self = {};&lt;br /&gt;
&lt;br /&gt;
    $self-&amp;gt;{&#039;debug&#039;} = 0;&lt;br /&gt;
    bless $self, $class;&lt;br /&gt;
&lt;br /&gt;
    return $self;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub connect {&lt;br /&gt;
    my $self = shift;&lt;br /&gt;
    my $dbname = shift;&lt;br /&gt;
    my $dbhost = shift;&lt;br /&gt;
    my $dbuser = shift;&lt;br /&gt;
    my $dbpass = shift;&lt;br /&gt;
&lt;br /&gt;
    my $host = &#039;&#039;;&lt;br /&gt;
    if (defined($dbhost)) {&lt;br /&gt;
        $host = &amp;quot;;host=$dbhost&amp;quot;;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    eval {&lt;br /&gt;
        $self-&amp;gt;debug(&amp;quot;connecting to: DBI:mysql:database=$dbname;$host&amp;quot;, 1);&lt;br /&gt;
        $self-&amp;gt;{&#039;dbh&#039;} = DBI-&amp;gt;connect(&amp;quot;DBI:mysql:database=$dbname;$host&amp;quot;, $dbuser, $dbpass);&lt;br /&gt;
    };&lt;br /&gt;
    if ($self-&amp;gt;{&#039;dbh&#039;}) {&lt;br /&gt;
        return 1;&lt;br /&gt;
    }&lt;br /&gt;
    $self-&amp;gt;{&#039;error&#039;} = &amp;quot;Error connecting to database $@&amp;quot;;&lt;br /&gt;
    $self-&amp;gt;debug(&amp;quot;Error connecting to database $@&amp;quot;);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub query {&lt;br /&gt;
    my $self = shift;&lt;br /&gt;
    my $query = shift;&lt;br /&gt;
&lt;br /&gt;
    $self-&amp;gt;debug($query, 1);&lt;br /&gt;
    my $sth;&lt;br /&gt;
    eval {&lt;br /&gt;
        $sth = $self-&amp;gt;{&#039;dbh&#039;}-&amp;gt;prepare($query);&lt;br /&gt;
    };&lt;br /&gt;
    unless ($sth) {&lt;br /&gt;
        $self-&amp;gt;{&#039;error&#039;} = &amp;quot;error preparing query $@&amp;quot;;&lt;br /&gt;
        $self-&amp;gt;debug(&amp;quot;error preparing query $@&amp;quot;);&lt;br /&gt;
        return undef;&lt;br /&gt;
    }&lt;br /&gt;
    my $qty;&lt;br /&gt;
    eval {&lt;br /&gt;
        $qty = $sth-&amp;gt;execute;&lt;br /&gt;
    };&lt;br /&gt;
    unless ($qty) {&lt;br /&gt;
        $self-&amp;gt;{&#039;error&#039;} = &amp;quot;error executing query $@&amp;quot;;&lt;br /&gt;
        warn &amp;quot;error executing query $@ $query&amp;quot;;&lt;br /&gt;
        return undef;&lt;br /&gt;
    }&lt;br /&gt;
    $self-&amp;gt;debug(&amp;quot;returning $qty, $sth from query&amp;quot;, 6);&lt;br /&gt;
    return ($qty, $sth);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub disconnect {&lt;br /&gt;
    my $self = shift;&lt;br /&gt;
&lt;br /&gt;
    $self-&amp;gt;{&#039;dbh&#039;}-&amp;gt;disconnect;&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub debug {&lt;br /&gt;
    my $self = shift;&lt;br /&gt;
    my $msg = shift;&lt;br /&gt;
    my $level = shift || 0;&lt;br /&gt;
&lt;br /&gt;
    if ($level &amp;lt; $self-&amp;gt;{&#039;debug&#039;}) {&lt;br /&gt;
        print &amp;quot;$msg\n&amp;quot;;&lt;br /&gt;
    }&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
1;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 mkdir /usr/home/archive&lt;br /&gt;
 mkdir -p /usr/home/sql/tmp&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;crontab -e&lt;br /&gt;
#process flows&lt;br /&gt;
2,17,32,47 * * * * /usr/home/flowbin/processflows.pl&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== setup traffic db ====&lt;br /&gt;
* Install mysql:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/ports/databases/mysql50-server&lt;br /&gt;
make install clean&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 cat &amp;gt;&amp;gt; /etc/rc.conf&lt;br /&gt;
 mysql_enable=&amp;quot;YES&amp;quot; &lt;br /&gt;
&lt;br /&gt;
Move db data dir:&lt;br /&gt;
 /usr/local/etc/rc.d/mysql-server stop&lt;br /&gt;
 mkdir /usr/home/database/&lt;br /&gt;
 mv /var/db/mysql/* /usr/home/database/&lt;br /&gt;
 chown -R mysql:mysql /usr/home/database&lt;br /&gt;
&lt;br /&gt;
Edit database location in startup script:&lt;br /&gt;
 vi /usr/local/etc/rc.d/mysql-server&lt;br /&gt;
 # : ${mysql_dbdir=&amp;quot;/var/db/mysql&amp;quot;}&lt;br /&gt;
 : ${mysql_dbdir=&amp;quot;/usr/home/database&amp;quot;}&lt;br /&gt;
&lt;br /&gt;
 /usr/local/etc/rc.d/mysql-server start&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Install mysql perl database modules:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /usr/ports/databases/p5-DBI&lt;br /&gt;
make install clean&lt;br /&gt;
cd /usr/ports/databases/p5-DBD-mysql50&lt;br /&gt;
make install clean&lt;br /&gt;
(no to SSL support)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Setting up database&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rehash&lt;br /&gt;
/usr/local/etc/rc.d/mysql-server start&lt;br /&gt;
mysql -u root&lt;br /&gt;
create database traffic;&lt;br /&gt;
grant all on *.* to root@localhost identified by &#039;5over3&#039;;&lt;br /&gt;
grant all on traffic.* to jc@10.1.4.5 identified by &#039;2gMKY3Wt&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If this were a new server we&#039;d set up new tables. See [[#mysql_2|mysql]] for how those tables would be set up.&lt;br /&gt;
&lt;br /&gt;
Here we assume we are moving data from an existing db; here&#039;s how that&#039;s done (run from the current traffic db):&lt;br /&gt;
 rsync -av --progress /usr/home/database/traffic/ 10.1.4.203:/usr/home/database/traffic/&lt;br /&gt;
&lt;br /&gt;
When you&#039;re ready to do the cutover, shut down mysql on both hosts and do one last sync.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== process flows from bwdb2 ====&lt;br /&gt;
On traffic database server (bwdb):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;crontab -e&lt;br /&gt;
#import sql from bwdb2&lt;br /&gt;
10,25,40,55 * * * * /usr/home/flowbin/processsql.pl&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add access to mysql: &lt;br /&gt;
&amp;lt;pre&amp;gt;mysql -u root -p &lt;br /&gt;
grant all on traffic.* to bwdb2@localhost identified by &#039;s1lver4d&#039;;&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cat &amp;gt; /usr/home/flowbin/processsql.pl&lt;br /&gt;
&lt;br /&gt;
#!/usr/bin/perl&lt;br /&gt;
&lt;br /&gt;
#use strict;&lt;br /&gt;
#$debug=1;&lt;br /&gt;
#$dry=1;&lt;br /&gt;
&lt;br /&gt;
my $sqldir = &amp;quot;/usr/home/bwdb2/pending&amp;quot;;&lt;br /&gt;
my $mysql = &#039;/usr/local/bin/mysql&#039;;&lt;br /&gt;
my @err;&lt;br /&gt;
unless ($dry) {&lt;br /&gt;
    if (-e &amp;quot;$sqldir/.lock&amp;quot;) {&lt;br /&gt;
        open(FILE, &amp;quot;$sqldir/.lock&amp;quot;);&lt;br /&gt;
        my $pid = &amp;lt;FILE&amp;gt;;&lt;br /&gt;
        chomp($pid);&lt;br /&gt;
        close(FILE);&lt;br /&gt;
        if (kill(0, $pid)) {&lt;br /&gt;
            #another process is using the queue, bail out&lt;br /&gt;
            exit(0);&lt;br /&gt;
        }&lt;br /&gt;
        else {&lt;br /&gt;
            #dead lock file, remove it&lt;br /&gt;
            `rm $sqldir/.lock`;&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    open(FILE, &amp;quot;&amp;gt; $sqldir/.lock&amp;quot;);&lt;br /&gt;
    print FILE &amp;quot;$$\n&amp;quot;;&lt;br /&gt;
    close(FILE);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
opendir(DIR, $sqldir);&lt;br /&gt;
my @files = readdir(DIR);&lt;br /&gt;
closedir(DIR);&lt;br /&gt;
&lt;br /&gt;
foreach my $file (sort @files) {&lt;br /&gt;
   next unless $file =~ /done$/;&lt;br /&gt;
   my $r = `bzcat $sqldir/$file | $mysql -u bwdb2 -ps1lver4d traffic`;&lt;br /&gt;
   unless ($?==0) {&lt;br /&gt;
      push @err, &amp;quot;bzcat $sqldir/$file | $mysql -u bwdb2 -pxxxxx traffic ($r)&amp;quot;;&lt;br /&gt;
   }&lt;br /&gt;
   else {&lt;br /&gt;
      `rm $sqldir/$file`;&lt;br /&gt;
   }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
`rm $sqldir/.lock` unless $dry;&lt;br /&gt;
&lt;br /&gt;
if (@err) {&lt;br /&gt;
   email_support(&#039;bwdb: processsql.pl error&#039;,join &amp;quot;\n&amp;quot;, @err);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub email_support {&lt;br /&gt;
    my $subj=shift;&lt;br /&gt;
    my $body=shift;&lt;br /&gt;
    use Mail::Sendmail;&lt;br /&gt;
&lt;br /&gt;
    # prepare message&lt;br /&gt;
    my %mail = (&lt;br /&gt;
        To      =&amp;gt; &#039;dave@johncompanies.com&#039;,&lt;br /&gt;
        From    =&amp;gt; &#039;support@johncompanies.com&#039;,&lt;br /&gt;
        Subject =&amp;gt; $subj,&lt;br /&gt;
        Message =&amp;gt; $body,&lt;br /&gt;
        smtp    =&amp;gt; &#039;mail.johncompanies.com&#039;,&lt;br /&gt;
    );&lt;br /&gt;
    sendmail(%mail) || warn &amp;quot;Error: $Mail::Sendmail::error&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sub debug {&lt;br /&gt;
    my $message = shift;&lt;br /&gt;
    if ($debug) {&lt;br /&gt;
        print &amp;quot;$message\n&amp;quot;;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 chmod 0700 /usr/home/flowbin/processsql.pl&lt;br /&gt;
&lt;br /&gt;
Make sure bwdb is reachable from the outside only by bwdb2:&lt;br /&gt;
&lt;br /&gt;
On nat, add to &amp;lt;tt&amp;gt;/etc/ipnat.rules&amp;lt;/tt&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# bwdb&lt;br /&gt;
bimap fxp0 10.1.4.203/32 -&amp;gt; 69.55.233.199/32&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reload:&lt;br /&gt;
 ipnat -C -F -f /etc/ipnat.rules&lt;br /&gt;
&lt;br /&gt;
Setup firewall rule on firewall:&lt;br /&gt;
 ipfw add 00094 allow ip from 66.181.18.5 to 69.55.233.199 22&lt;br /&gt;
 ipfw add 00094 deny ip from any to 69.55.233.199&lt;br /&gt;
&lt;br /&gt;
Setup firewall on bwdb to restrict access now that it&#039;s nat&#039;d:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;gt;&amp;gt; /usr/local/etc/rc.d/boot.sh&lt;br /&gt;
ipfw add 1 allow tcp from any to any established&lt;br /&gt;
ipfw add 2 allow ip from 10.1.4.0/24,66.181.18.5,69.55.233.195 to me 22&lt;br /&gt;
ipfw add 3 allow ip from 10.1.4.5 to me 3306&lt;br /&gt;
ipfw add 4 allow ip from 69.55.225.225 53 to me &lt;br /&gt;
ipfw add 5 allow ip from 69.55.230.2 25 to me &lt;br /&gt;
ipfw add 6 allow ip from me to me 4444&lt;br /&gt;
ipfw add 7 allow icmp from any to me&lt;br /&gt;
ipfw add 8 allow udp from 10.1.4.203 to 10.1.4.203 dst-port 4444&lt;br /&gt;
ipfw add 9 allow udp from 10.1.4.5 to me 161&lt;br /&gt;
ipfw add 100 deny ip from any to me&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
 chmod 0700 /usr/local/etc/rc.d/boot.sh&lt;br /&gt;
&lt;br /&gt;
From bwdb2, add ssh key:&lt;br /&gt;
 cat /root/.ssh/id_dsa.pub | ssh 69.55.233.199 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
&lt;br /&gt;
Confirm no password access:&lt;br /&gt;
 ssh 69.55.233.199 hostname&lt;br /&gt;
&lt;br /&gt;
= bwdb2 =&lt;br /&gt;
== Summary ==&lt;br /&gt;
This machine tracks and stores network traffic (netflow) at i2b. It is our means to monitor customer bandwidth usage.&lt;br /&gt;
&lt;br /&gt;
* Location: i2b, cab6&lt;br /&gt;
* OS: FreeBSD 6.4 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.2.4. There are 2 onboard NICs, one of which is the &amp;quot;listener&amp;quot;&lt;br /&gt;
* Hardware: Custom 2U. Single power supply. &lt;br /&gt;
* Drives: two 150 GB (2 x 150GB) RAID1 arrays running on a 3ware 7006 RAID card.&lt;br /&gt;
&lt;br /&gt;
== Services Provided ==&lt;br /&gt;
* netflow&lt;br /&gt;
* bigbrother &lt;br /&gt;
&lt;br /&gt;
== netflow ==&lt;br /&gt;
&lt;br /&gt;
The main function of this server is to run netflow on an eth device in promiscuous mode so it hears everything happening on the port (all network traffic is mirrored to that port by the Cisco switch). Every 15min it creates a flow file under &amp;lt;tt&amp;gt;/usr/home/flows/&amp;lt;/tt&amp;gt; (organized by date). The flow file contains all traffic data for a 15min increment of time.&lt;br /&gt;
&lt;br /&gt;
A cronjob moves that flow file (or files if there are multiple due to some delay) &lt;br /&gt;
 1,16,31,46 * * * * /usr/home/flowbin/queue.pl&lt;br /&gt;
&lt;br /&gt;
into a processing queue:&lt;br /&gt;
&amp;lt;tt&amp;gt;/usr/home/working&amp;lt;/tt&amp;gt;&lt;br /&gt;
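The queueing step can be sketched as a small shell function (a hedged sketch only; the real logic lives in /usr/home/flowbin/queue.pl, and the `ft-`/`tmp-` prefixes are flow-tools' convention for finished vs. in-progress capture files):

```shell
# Sketch of the queue step: move completed flow files into the working
# queue, leaving behind the file flow-capture is still writing ("tmp-").
queue_flows() {
    flowdir="$1"    # capture tree, e.g. /usr/home/flows
    workdir="$2"    # processing queue, e.g. /usr/home/working
    find "$flowdir" -type f -name 'ft-v05.*' -exec mv {} "$workdir/" \;
}
```

Because only `ft-` files are matched, a run that overlaps an in-progress capture never moves a half-written file.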
&lt;br /&gt;
Then a separate script processes whatever flow files it finds there, and builds sql files ready for insertion into the traffic database:&lt;br /&gt;
 2,17,32,47 * * * * /usr/home/flowbin/processflows-sql.pl&lt;br /&gt;
&lt;br /&gt;
Then yet another process copies the sql files to the traffic database server for processing and insertion into the mysql database:&lt;br /&gt;
 8,23,38,53 * * * * /usr/home/flowbin/sendsql.pl&lt;br /&gt;
&lt;br /&gt;
== Regular maintenance ==&lt;br /&gt;
*[[Routine_Maintenance#Adaptec_Controllers|Check RAID array]]&lt;br /&gt;
&lt;br /&gt;
* if space becomes tight, move sql files and flow files (both located in &amp;lt;tt&amp;gt;/usr/home/flowbin/archive&amp;lt;/tt&amp;gt;) to the backup server&lt;br /&gt;
&lt;br /&gt;
= firewall (newgateway) =&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
This machine is the primary (only) firewall for the entire network at castle. &lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-8&lt;br /&gt;
* OS: FreeBSD 4.11 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.223, Pub IPs: 69.55.233.164 (external), 69.55.233.156 (internal). It has 3 network connections (2 onboard, 1 PCI) connecting to the external, internal and private networks. If you&#039;re looking at the back of the server, the internal-network-facing nic is on the right (em1), and the external-facing-network (3750) is on the left (em0).&lt;br /&gt;
* Hardware: 6 SCSI SCA drive bays (2 columns of 3, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: 36 GB (2 x 36GB) RAID1 array running on an Adaptec 2120S PCI RAID card.&lt;br /&gt;
&lt;br /&gt;
== Services Provided ==&lt;br /&gt;
* firewall (ipfw)&lt;br /&gt;
* snmp&lt;br /&gt;
* bigbrother&lt;br /&gt;
&lt;br /&gt;
== Firewall Rule Configuration ==&lt;br /&gt;
&lt;br /&gt;
See [[FreeBSD_Reference#Firewall_Rule_Configuration|Firewall Rule Configuration]] for more discussion on how to actually manipulate firewall rules.&lt;br /&gt;
&lt;br /&gt;
== Disaster Recovery ==&lt;br /&gt;
&lt;br /&gt;
If there is ever an outage with the firewall, the old firewall &amp;quot;gate&amp;quot; is located just below and is running with the proper network configuration, but with no firewall rules in place (to facilitate good throughput). Have castle move the cable on the left on the current firewall to the left port in the old firewall and the right cable to the right port.&lt;br /&gt;
&lt;br /&gt;
Here&#039;s what you need to put in /etc/rc.conf to get a firewall going (as far as routes and IPs):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;hostname=&amp;quot;newgateway.johncompanies.com&amp;quot;&lt;br /&gt;
firewall_script=&amp;quot;/etc/firewall.sh&amp;quot;&lt;br /&gt;
firewall_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
sendmail_enable=&amp;quot;NONE&amp;quot;&lt;br /&gt;
sshd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
xntpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
snmpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
#snmpd_flags=&amp;quot;-as -p /var/run/snmpd.pid&amp;quot;&lt;br /&gt;
#ipnat_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
#ipnat_rules=&amp;quot;/etc/ipnat.rules&amp;quot;&lt;br /&gt;
gateway_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
defaultrouter=&amp;quot;69.55.233.161&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ifconfig_xl0=&amp;quot;inet 10.1.4.223 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em0=&amp;quot;inet 69.55.233.164 netmask 255.255.255.248&amp;quot;&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
# Original JohnCompanies 69.55.224.0/20&lt;br /&gt;
#&lt;br /&gt;
ifconfig_em1=&amp;quot;inet 69.55.233.156 netmask 255.255.255.248&amp;quot;&lt;br /&gt;
&lt;br /&gt;
static_routes=&amp;quot;route1 route2 route3 route4 route5 route6 route7 route8 route9 route10 route11 route12 route13 route14 route15 route16 route17 route18&amp;quot;&lt;br /&gt;
&lt;br /&gt;
route_route1=&amp;quot;-net 69.55.224.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route2=&amp;quot;-net 69.55.225.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route3=&amp;quot;-net 69.55.226.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route4=&amp;quot;-net 69.55.227.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route5=&amp;quot;-net 69.55.228.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route6=&amp;quot;-net 69.55.229.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route7=&amp;quot;-net 69.55.230.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route8=&amp;quot;-net 69.55.231.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route9=&amp;quot;-net 69.55.232.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route10=&amp;quot;-net 69.55.233.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route11=&amp;quot;-net 69.55.234.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route12=&amp;quot;-net 69.55.235.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route13=&amp;quot;-net 69.55.236.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route14=&amp;quot;-net 69.55.237.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route15=&amp;quot;-net 69.55.238.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route16=&amp;quot;-net 69.55.239.0 69.55.233.153&amp;quot;&lt;br /&gt;
route_route17=&amp;quot;-net 10.1.5.0 10.1.4.2&amp;quot;&lt;br /&gt;
route_route18=&amp;quot;-net 10.1.6.0 10.1.4.2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
#In case of 3750 failure:&lt;br /&gt;
#defaultrouter=&amp;quot;69.43.128.81&amp;quot;&lt;br /&gt;
#ifconfig_em0=&amp;quot;inet 69.43.129.84 netmask 255.255.255.248&amp;quot;&lt;br /&gt;
&lt;br /&gt;
#bind .1&#039;s here:&lt;br /&gt;
#ifconfig_em1=&amp;quot;inet 69.55.224.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias0=&amp;quot;inet 69.55.225.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias1=&amp;quot;inet 69.55.226.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias2=&amp;quot;inet 69.55.227.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias3=&amp;quot;inet 69.55.228.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias4=&amp;quot;inet 69.55.229.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias5=&amp;quot;inet 69.55.230.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias6=&amp;quot;inet 69.55.231.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias7=&amp;quot;inet 69.55.232.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias8=&amp;quot;inet 69.55.233.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias9=&amp;quot;inet 69.55.234.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias10=&amp;quot;inet 69.55.235.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias11=&amp;quot;inet 69.55.236.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias12=&amp;quot;inet 69.55.237.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias13=&amp;quot;inet 69.55.238.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
#ifconfig_em1_alias14=&amp;quot;inet 69.55.239.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
#bulk:&lt;br /&gt;
# reassign 69.55.231.1 to the int iface on the firewall&lt;br /&gt;
# set the DG on the firewall to 69.43.138.9&lt;br /&gt;
# set the ext firewall IP to 69.43.138.12, NM: 255.255.255.248&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Cronjobs ==&lt;br /&gt;
 1 0 * * * /usr/local/etc/rsync.backup&lt;br /&gt;
Backup to backup1&lt;br /&gt;
&lt;br /&gt;
 0 0 1 * * /sbin/ipfw zero&lt;br /&gt;
 0 0 1 * * /sbin/ipfw del 3  4 5 17331&lt;br /&gt;
Reset counters and remove pipe rules on the 1st of the month. Be careful when numbering new rules: rules 3, 4, 5 and 17331 are deleted on the 1st, so don&#039;t use those numbers for anything that isn&#039;t a temporary traffic cap. &lt;br /&gt;
&lt;br /&gt;
Inside &amp;lt;tt&amp;gt;/etc/daily.local&amp;lt;/tt&amp;gt; you will see a call to &amp;lt;tt&amp;gt;/etc/makepiperules.pl&amp;lt;/tt&amp;gt;&lt;br /&gt;
This script will create &amp;lt;tt&amp;gt;/etc/firewall.sh&amp;lt;/tt&amp;gt; which contains all the firewall and pipe rules in place at the time the script was run.&lt;br /&gt;
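The core of that regeneration can be sketched as a one-line transform (assumed shape only; the actual makepiperules.pl is not reproduced here): each line of `ipfw list` output becomes an `ipfw add` command that can be replayed at boot.

```shell
# Sketch: turn "ipfw list" output back into replayable commands,
# which is essentially what /etc/firewall.sh needs to contain.
rules_to_script() {
    sed 's/^/ipfw add /'
}
# usage: ipfw list | rules_to_script > /etc/firewall.sh
```

Pipe configurations would need similar handling via `ipfw pipe show`, since `ipfw list` only covers the rules themselves.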
&lt;br /&gt;
== DOS attacks ==&lt;br /&gt;
&lt;br /&gt;
See [[FreeBSD_Reference#Handling_a_DoS_attack|Handling a DoS attack]] regarding how to handle a DOS attack.&lt;br /&gt;
&lt;br /&gt;
There&#039;s a background process (running from the user shell) that monitors the firewall for incoming UDP DoS attacks. When it notices packets above a certain level it will:&lt;br /&gt;
# enter a rule that allows all UDP to go through&lt;br /&gt;
# send an emergency email to support indicating an attack is in progress&lt;br /&gt;
# send an email to castle (nocstaff@castleaccess.com and jcsupport@castleaccess.com) telling them to investigate and put up a null if warranted&lt;br /&gt;
# wait a couple of minutes to see if the attack subsides; if so it will remove the pass-all UDP rule, if not it will repeat the process from #1&lt;br /&gt;
The script lives at /usr/home/user/doswatch.pl&lt;br /&gt;
To run:&lt;br /&gt;
 cd /usr/home/user&lt;br /&gt;
 ./doswatch.pl &amp;amp;&lt;br /&gt;
&lt;br /&gt;
To kill:&lt;br /&gt;
 fg&lt;br /&gt;
 ^C&lt;br /&gt;
&lt;br /&gt;
It writes its findings to /usr/home/user/doswatch.log&lt;br /&gt;
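The detection step of that loop can be sketched as follows (a hedged sketch; the actual doswatch.pl is not reproduced here, and the threshold value is an assumed example, not the script's real limit):

```shell
# Sketch of the doswatch decision: given two successive UDP packet-counter
# readings (e.g. from ipfw counters), decide whether the delta looks like
# a flood. Returns success (0) when the rate exceeds the limit.
udp_flood_detected() {
    prev="$1"; cur="$2"; limit="$3"
    [ $((cur - prev)) -gt "$limit" ]
}
```

On each pass the real script would read the counters, and on detection add the pass-all UDP rule and send the emails described above.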
&lt;br /&gt;
= backup1 =&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
This machine acts as the primary backup location for all VPS-based customers. No customer directly accesses this server to perform their backups. We also store cancelled customers on this server. &lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-8&lt;br /&gt;
* OS: Ubuntu 8.04.1 server x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.8, Pub IP: 69.55.230.11 (firewalled from all but JC infrastructure @ i2b)&lt;br /&gt;
* Hardware: 6 SATA drive bays (2 columns of 3, drive 0 top left, drive 1 just below) all hot-swap. Single power supply. &lt;br /&gt;
* Drives: 4.5 TB (6 x 1TB) RAID5 array running on a 3ware 9650SE-8LPML (8-port) card&lt;br /&gt;
&lt;br /&gt;
== Services provided ==&lt;br /&gt;
* backup via rsync&lt;br /&gt;
* mysql - traffic data&lt;br /&gt;
* nfs server - for backups&lt;br /&gt;
* snmp client - for big brother&lt;br /&gt;
* bigbrother client&lt;br /&gt;
&lt;br /&gt;
== Usage and Notes ==&lt;br /&gt;
* all data is stored under /data&lt;br /&gt;
* virtually all jc infrastructure and all VPS machines are set up to mount backup1 via nfs (mountpoint: &amp;lt;tt&amp;gt;/backup1&amp;lt;/tt&amp;gt;), and they all have ssh keys set up to allow passwordless rsyncs&lt;br /&gt;
* each virt or jail backs up each evening to backup1. Each server has its own directory (named for the server). Under those directories are 7 daily snapshots (0-6)&lt;br /&gt;
* at the time of writing, the mysql server running here replicates from (is a slave to) the mysql instance on bwdb. Requests for customer bandwidth usage data (coming from management, account manager, and accounting scripts running on mail) are all directed at the &amp;quot;traffic&amp;quot; database running on this server.&lt;br /&gt;
* cancelled customer systems are compressed and stored under &amp;lt;tt&amp;gt;/data/deprecated&amp;lt;/tt&amp;gt;&lt;br /&gt;
* archived bwdb2 flow files are stored under &amp;lt;tt&amp;gt;/data/bwdb2&amp;lt;/tt&amp;gt;&lt;br /&gt;
* critical files from backup2 are stored under &amp;lt;tt&amp;gt;/data/backup2&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Cronjobs ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
00 5 * * * /usr/local/sbin/backupwatch.pl 2&amp;gt;&amp;amp;1 &amp;gt; /dev/null&lt;br /&gt;
35 5 * * * /usr/local/sbin/usage_check; /usr/local/sbin/snapshot_archive; /usr/local/sbin/snapshot_rotate  /data/backuplog.log&amp;lt;/pre&amp;gt;&lt;br /&gt;
This runs the daily scripts that report how much disk space each customer system occupies and how long their backups took. It then rotates the backups for each system, removing the oldest snapshot, and emails support@johncompanies.com at its conclusion. That email can be deleted, but note when the run begins to take significantly longer to complete (i.e. past 22:00); this usually indicates a problem on the backup server.&lt;br /&gt;
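The rotation those scripts perform can be sketched like this (a hedged sketch only; the real logic lives in /usr/local/sbin/snapshot_rotate, using the 0-6 snapshot scheme described above):

```shell
# Sketch: drop the oldest snapshot (6), shift the rest up one slot,
# and leave an empty slot 0 for tonight's rsync to land in.
rotate_snapshots() {
    base="$1"                      # per-server dir, e.g. /data/someserver
    rm -rf "$base/6"
    for n in 5 4 3 2 1 0; do
        if [ -d "$base/$n" ]; then mv "$base/$n" "$base/$((n+1))"; fi
    done
    mkdir "$base/0"
}
```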
&lt;br /&gt;
&amp;lt;pre&amp;gt;10,25,40,55 * * * * /usr/local/sbin/processsql.pl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This processes prepared sql command files sent by bwdb2 (at i2b) and imports them into the traffic database.&lt;br /&gt;
&amp;lt;pre&amp;gt;0 0 * * * /usr/local/sbin/3wraidchk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Checks the health of the RAID array.&lt;br /&gt;
&lt;br /&gt;
== Regular maintenance ==&lt;br /&gt;
*[[Routine_Maintenance#Free_up_space_on_backup1|Remove old backups]]&lt;br /&gt;
*[[Routine_Maintenance#3ware|Check on auto-verify]]&lt;br /&gt;
&lt;br /&gt;
== build ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;Setup raid5 with a boot vol of 12G 5.45tb &lt;br /&gt;
12G boot&lt;br /&gt;
4664 GB &lt;br /&gt;
&lt;br /&gt;
Install ubuntu 8.04&lt;br /&gt;
&lt;br /&gt;
Swap 4G&lt;br /&gt;
&lt;br /&gt;
Don’t format data drive&lt;br /&gt;
&lt;br /&gt;
http://www.unixgods.org/~tilo/linux_larger_2TB.html&lt;br /&gt;
&lt;br /&gt;
parted /dev/sdb&lt;br /&gt;
print&lt;br /&gt;
mklabel gpt&lt;br /&gt;
print&lt;br /&gt;
&lt;br /&gt;
#Disk /dev/sdb: 4987GB&lt;br /&gt;
#Sector size (logical/physical): 512B/512B&lt;br /&gt;
#Partition Table: gpt&lt;br /&gt;
&lt;br /&gt;
#Number  Start  End  Size  File system  Name  Flags&lt;br /&gt;
&lt;br /&gt;
mkpart primary ext3 0 4987GB&lt;br /&gt;
print&lt;br /&gt;
&lt;br /&gt;
#Disk /dev/sdb: 4987GB&lt;br /&gt;
#Sector size (logical/physical): 512B/512B&lt;br /&gt;
#Partition Table: gpt&lt;br /&gt;
&lt;br /&gt;
#Number  Start   End     Size    File system  Name     Flags&lt;br /&gt;
# 1      17.4kB  4987GB  4987GB               primary&lt;br /&gt;
&lt;br /&gt;
quit&lt;br /&gt;
&lt;br /&gt;
mkfs.ext3 /dev/sdb1&lt;br /&gt;
#mke2fs 1.40.8 (13-Mar-2008)&lt;br /&gt;
#Filesystem label=&lt;br /&gt;
#OS type: Linux&lt;br /&gt;
#Block size=4096 (log=2)&lt;br /&gt;
#Fragment size=4096 (log=2)&lt;br /&gt;
#304390144 inodes, 1217544183 blocks&lt;br /&gt;
#60877209 blocks (5.00%) reserved for the super user&lt;br /&gt;
#First data block=0&lt;br /&gt;
#Maximum filesystem blocks=0&lt;br /&gt;
#37157 block groups&lt;br /&gt;
#32768 blocks per group, 32768 fragments per group&lt;br /&gt;
#8192 inodes per group&lt;br /&gt;
#Superblock backups stored on blocks:&lt;br /&gt;
#        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,&lt;br /&gt;
#        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,&lt;br /&gt;
#        102400000, 214990848, 512000000, 550731776, 644972544&lt;br /&gt;
#&lt;br /&gt;
#Writing inode tables:   967/37157&lt;br /&gt;
&lt;br /&gt;
mkdir /data&lt;br /&gt;
&lt;br /&gt;
#root@backup1:~# df -h&lt;br /&gt;
#Filesystem            Size  Used Avail Use% Mounted on&lt;br /&gt;
#/dev/sda2             8.3G  540M  7.3G   7% /&lt;br /&gt;
#varrun               1013M   40K 1013M   1% /var/run&lt;br /&gt;
#varlock              1013M     0 1013M   0% /var/lock&lt;br /&gt;
#udev                 1013M   56K 1013M   1% /dev&lt;br /&gt;
#devshm               1013M     0 1013M   0% /dev/shm&lt;br /&gt;
#/dev/sdb1             4.5T  192M  4.3T   1% /data&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
apt-get update&lt;br /&gt;
apt-get upgrade&lt;br /&gt;
apt-get install snmp snmpd ntp nfs-kernel-server&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;\&amp;quot;\e[5~\&amp;quot;: history-search-backward&amp;quot; &amp;gt;&amp;gt; ~/.inputrc&lt;br /&gt;
echo &amp;quot;\&amp;quot;\e[6~\&amp;quot;: history-search-forward&amp;quot; &amp;gt;&amp;gt; ~/.inputrc&lt;br /&gt;
&lt;br /&gt;
vi /etc/ntp.conf&lt;br /&gt;
server 10.1.4.5&lt;br /&gt;
&lt;br /&gt;
scp root@10.1.4.3:/root/.ssh/authorized_keys /root/.ssh/&lt;br /&gt;
cd /root/&lt;br /&gt;
ssh-keygen -t dsa&lt;br /&gt;
echo &amp;quot;10.1.4.3        backup2&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
&lt;br /&gt;
cat .ssh/id_dsa.pub | ssh backup2 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
&lt;br /&gt;
ssh backup2&lt;br /&gt;
&lt;br /&gt;
vi /root/.bashrc&lt;br /&gt;
export PS1=&amp;quot;[\u@\h \w]# &amp;quot;&lt;br /&gt;
alias h=&#039;history&#039;&lt;br /&gt;
alias vi=&#039;vim&#039;&lt;br /&gt;
alias j=&#039;jobs&#039;&lt;br /&gt;
export PS1=&amp;quot;[\u@\h \w]# &amp;quot;&lt;br /&gt;
alias dr=&#039;screen -dr&#039;&lt;br /&gt;
export EDITOR=vim&lt;br /&gt;
export GREP_OPTIONS=&#039;--color=auto&#039;&lt;br /&gt;
export HISTFILESIZE=1000&lt;br /&gt;
&lt;br /&gt;
source /root/.bashrc&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;# ttyS0 - getty &lt;br /&gt;
# &lt;br /&gt;
# This service maintains a getty on ttyS0 from the point the system is&lt;br /&gt;
# started until it is shut down again.&lt;br /&gt;
&lt;br /&gt;
start on runlevel 2&lt;br /&gt;
start on runlevel 3&lt;br /&gt;
start on runlevel 4&lt;br /&gt;
start on runlevel 5&lt;br /&gt;
&lt;br /&gt;
stop on runlevel 0&lt;br /&gt;
stop on runlevel 1&lt;br /&gt;
stop on runlevel 6&lt;br /&gt;
&lt;br /&gt;
respawn&lt;br /&gt;
exec /sbin/getty 38400 ttyS0&amp;quot; &amp;gt; /etc/event.d/ttyS0 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vi /boot/grub/menu.lst&lt;br /&gt;
&lt;br /&gt;
serial --unit=0 --speed=38400 --word=8 --parity=no --stop=1&lt;br /&gt;
terminal --timeout=15 serial console&lt;br /&gt;
&lt;br /&gt;
append to kernel lines: &lt;br /&gt;
console=tty0 console=ttyS0,38400n8&lt;br /&gt;
&lt;br /&gt;
to show the menu, comment out hiddenmenu:&lt;br /&gt;
#hiddenmenu&lt;br /&gt;
&lt;br /&gt;
echo &#039;rocommunity  jcread 10.1.4.5&lt;br /&gt;
rocommunity  jcread 10.1.4.3&lt;br /&gt;
agentaddress 10.1.4.8:161&#039; &amp;gt; /etc/snmp/snmpd.conf&lt;br /&gt;
&lt;br /&gt;
# to see which iface it is, on backup2:&lt;br /&gt;
&lt;br /&gt;
snmpwalk -v 1 -c jcread 10.1.4.8 interface&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;bb:x:1984:1984:Big Brother:/home/bb:/bin/bash&amp;quot; &amp;gt;&amp;gt; /etc/passwd&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;bb:x:1984:&amp;quot; &amp;gt;&amp;gt; /etc/group&lt;br /&gt;
&lt;br /&gt;
pwconv&lt;br /&gt;
&lt;br /&gt;
mkdir /home/bb&lt;br /&gt;
chown bb.bb /home/bb&lt;br /&gt;
&lt;br /&gt;
cd ~bb&lt;br /&gt;
scp backup2:/mnt/data4/build/bb/bb-linux.tar .&lt;br /&gt;
&lt;br /&gt;
tar xf bb-linux.tar&lt;br /&gt;
&lt;br /&gt;
cd /home/bb/bbc1.9e-btf/etc&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;10.1.4.5 mail.johncompanies.com # BBPAGER BBNET BBDISPLAY smtp ssh&amp;quot; &amp;gt; /home/bb/bbc1.9e-btf/etc/bb-hosts&lt;br /&gt;
echo &amp;quot;10.1.4.8 backup1.johncompanies.com # ssh&amp;quot; &amp;gt;&amp;gt; /home/bb/bbc1.9e-btf/etc/bb-hosts&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;/:90:95&lt;br /&gt;
/var:90:95&lt;br /&gt;
/data:85:99&amp;quot; &amp;gt; /home/bb/bbc1.9e-btf/etc/bb-dftab&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
vi /home/bb/bbc1.9e-btf/bin/bb-disk.sh&lt;br /&gt;
(remove all | SORT xxxx)&lt;br /&gt;
&lt;br /&gt;
chmod +r /var/log/messages&lt;br /&gt;
&lt;br /&gt;
./bbchkcfg.sh &lt;br /&gt;
#(y to questions)&lt;br /&gt;
./bbchkhosts.sh&lt;br /&gt;
#(ignore ssh errors)&lt;br /&gt;
cd ../..&lt;br /&gt;
chown -R bb .&lt;br /&gt;
su bb&lt;br /&gt;
cd&lt;br /&gt;
cd bbc1.9e-btf/src&lt;br /&gt;
&lt;br /&gt;
#make; make install&lt;br /&gt;
cd ..&lt;br /&gt;
./runbb.sh start&lt;br /&gt;
more BBOUT &lt;br /&gt;
(look for errors)&lt;br /&gt;
exit&lt;br /&gt;
&lt;br /&gt;
vi /etc/rc.local&lt;br /&gt;
su - bb -c &amp;quot;cd /home/bb/bbc1.9e-btf; ./runbb.sh start&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
echo &#039;/data 10.1.4.0/24(rw,no_root_squash,async,no_subtree_check)&#039; &amp;gt;&amp;gt; /etc/exports&lt;br /&gt;
&lt;br /&gt;
/etc/init.d/nfs-kernel-server restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
echo &#039;chmod o+r /var/log/messages&#039; &amp;gt;&amp;gt; /etc/cron.weekly/sysklogd&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
echo &#039;10.1.4.8                backup1&#039; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &#039;/dev/sdb1	/data  ext3  rw,noatime  0  0&#039; &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
to install digi drivers:&lt;br /&gt;
&lt;br /&gt;
wget http://ftp1.digi.com/support/driver/40002086_n.tgz&lt;br /&gt;
apt-get install linux-image-2.6.24-19-server&lt;br /&gt;
apt-get install linux-source-2.6.24  # (not needed?)&lt;br /&gt;
apt-get install linux-headers-2.6.24-19-server &lt;br /&gt;
apt-get install make&lt;br /&gt;
apt-get install gcc&lt;br /&gt;
apt-get install g++	&lt;br /&gt;
apt-get install libncurses5-dev&lt;br /&gt;
apt-get install expect&lt;br /&gt;
apt-get install libdbi-perl libdate-calc-perl libdbd-mysql-perl&lt;br /&gt;
&lt;br /&gt;
cd /usr/src; ln -s linux-headers-2.6.24-19-server linux&lt;br /&gt;
tar xzf 40002086_n.tgz; cd dgrp-*/   # untar the dgrp source first, then build from it&lt;br /&gt;
./configure&lt;br /&gt;
make all&lt;br /&gt;
make install&lt;br /&gt;
make postinstall&lt;br /&gt;
&lt;br /&gt;
/usr/bin/dgrp_cfg_node -v -v init el 65.116.11.2 8&lt;br /&gt;
&lt;br /&gt;
apt-get install mysql-server&lt;br /&gt;
&lt;br /&gt;
mkdir /data/mysql&lt;br /&gt;
chown mysql:mysql /data/mysql&lt;br /&gt;
/etc/init.d/mysql stop&lt;br /&gt;
mv /var/lib/mysql/* /data/mysql/&lt;br /&gt;
mv /data/mysql/ib_* /var/lib/mysql/&lt;br /&gt;
vi /etc/mysql/my.cnf&lt;br /&gt;
(change datadir to /data/mysql)&lt;br /&gt;
vi /etc/apparmor.d/usr.sbin.mysqld&lt;br /&gt;
add:&lt;br /&gt;
  /data/mysql/ r,&lt;br /&gt;
  /data/mysql/** rwk,&lt;br /&gt;
Comment out:&lt;br /&gt;
#  /var/lib/mysql/ r,&lt;br /&gt;
#  /var/lib/mysql/** rwk,&lt;br /&gt;
&lt;br /&gt;
/etc/init.d/apparmor restart&lt;br /&gt;
/etc/init.d/mysql start&lt;br /&gt;
&lt;br /&gt;
tw_cli /c0/u0 set ignoreECC=on&lt;br /&gt;
tw_cli /c0/u0 set storsave=balance&lt;br /&gt;
tw_cli /c0/u0 set cache=on&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
0 0 * * * /usr/local/sbin/3wraidchk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= backup2 =&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
This machine is used for archiving data and is a backup server for colo customers. It was the primary backup location for all VPS-based customers before backup1 was installed. Only dedicated customers directly access this server to perform their backups. NOTE: the power button is broken, so the reset button (paper clip) was rewired to act as the power button.&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-7&lt;br /&gt;
* OS: FreeBSD 6.1 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.4.3, Pub IP: 69.55.230.10 (firewalled from all but JC infrastructure @ i2b)&lt;br /&gt;
* Hardware: 16 IDE drive bays (4 columns of 4, drive 0-0 top left, drive 0-1 just to the right TODO) all hot-swap. Triple power supply. &lt;br /&gt;
* Drives: &lt;br /&gt;
**3ware 7500-8:&lt;br /&gt;
***200 GB JBOD (1 x 200G) labeled 0-0&lt;br /&gt;
***500 GB RAID5 (3 x 250G) 0-1 thru 0-3&lt;br /&gt;
***700 GB RAID5 (4 x 250G) 0-4 thru 0-7&lt;br /&gt;
**3ware 7500-8:&lt;br /&gt;
***700 GB RAID5 (4 x 250G) 1-0 thru 1-3&lt;br /&gt;
***700 GB RAID5 (4 x 250G) 1-4 thru 1-7&lt;br /&gt;
&lt;br /&gt;
All drives MUST be Western Digital IDE drives. Other brands will not fit.&lt;br /&gt;
&lt;br /&gt;
In case of an outage, nfs will hang on all connected servers until the nfs service returns. If you can&#039;t get backup2 back online, you can get nfs running elsewhere and fake backup2&#039;s MACs: priv: 00:0e:0c:59:c1:a6, pub: 00:07:e9:5b:c6:45&lt;br /&gt;
&lt;br /&gt;
To configure:&lt;br /&gt;
 ifconfig fxp0 link 00:90:27:f9:0a:d9&lt;br /&gt;
&lt;br /&gt;
== Services provided ==&lt;br /&gt;
* backup via rsync and nfs&lt;br /&gt;
* samba&lt;br /&gt;
* nfs&lt;br /&gt;
* snmp&lt;br /&gt;
* bigbrother&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
* all data is stored under 4 mount points, corresponding to the 4 large RAID5 arrays: &amp;lt;tt&amp;gt;/mnt/data1 /mnt/data2 /mnt/data3 /mnt/data4&amp;lt;/tt&amp;gt;&lt;br /&gt;
* ISO images for customers wanting to mount an ISO as a CDROM via the IPKVM are served via samba on this server. Images live under &amp;lt;tt&amp;gt;/mnt/data2/iso&amp;lt;/tt&amp;gt;&lt;br /&gt;
* this used to be our primary backup server, so you will see old backups from virts and jails around; they contain no customer data, just each machine&#039;s own data&lt;br /&gt;
* this server serves as an archive for exported db data from bwdb and old flow files.&lt;br /&gt;
* isys backs up here&lt;br /&gt;
* customers are nfs-mounted under /mnt/data3/customers as file-backed md devices&lt;br /&gt;
* in &amp;lt;tt&amp;gt;/mnt/data4&amp;lt;/tt&amp;gt; there are lots of useful things used for building our vps servers, customer servers, and management scripts:&lt;br /&gt;
** &amp;lt;tt&amp;gt;/bin&amp;lt;/tt&amp;gt;: the master repository of scripts and custom binaries we use on jails and virts. Each night every virt and jail rsyncs what&#039;s in here to update its local copies, so any global updates to scripts need to be made here (local changes will be overwritten with what&#039;s in here)&lt;br /&gt;
** &amp;lt;tt&amp;gt;/build&amp;lt;/tt&amp;gt;: files we use for setting up big brother, 3ware cli and scripts for colo&#039;s, vzcp customized setup files and so on&lt;br /&gt;
** &amp;lt;tt&amp;gt;/vzrpms&amp;lt;/tt&amp;gt;: contains the OS templates for most of the OSes we offer on vz systems&lt;br /&gt;
&lt;br /&gt;
== Cronjobs ==&lt;br /&gt;
* backs itself up nightly to nfs-mounted backup1 (mountpoint: &amp;lt;tt&amp;gt;/backup2&amp;lt;/tt&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
== Regular maintenance ==&lt;br /&gt;
*[[Routine_Maintenance#3ware|Check on health]]&lt;br /&gt;
&lt;br /&gt;
= backup3 =&lt;br /&gt;
== Summary ==&lt;br /&gt;
This machine is used for archiving data, is a backup server for colo customers, runs a samba server that makes ISOs available to the IPKVMs, and lets us connect to the digi serial multiplexer at i2b. Only dedicated customers directly access this server to perform their backups.&lt;br /&gt;
&lt;br /&gt;
* Location: i2b, cab 6&lt;br /&gt;
* OS: Ubuntu 10.04.1 server amd64&lt;br /&gt;
* Networking: Priv IP: 10.1.2.3, Pub IPs: 69.55.229.4 AND 69.55.231.2&lt;br /&gt;
* Hardware: 16 drive SATA bays (4 columns of 4, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: 5 TB (6 x 1TB) RAID5 array running on an Areca Technology Corp. ARC-1160 16-Port&lt;br /&gt;
&lt;br /&gt;
== Services provided ==&lt;br /&gt;
* backup via rsync and nfs&lt;br /&gt;
* samba&lt;br /&gt;
* nfs&lt;br /&gt;
* digi realport&lt;br /&gt;
* snmp&lt;br /&gt;
* bigbrother&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
* all data is stored under /data&lt;br /&gt;
* ISO images for customers wanting to mount an ISO as a CDROM via the IPKVM are served via samba on this server. Images live under &amp;lt;tt&amp;gt;/data/iso&amp;lt;/tt&amp;gt;&lt;br /&gt;
* this server serves as an archive for exported db data from bwdb and old flow files.&lt;br /&gt;
* infrastructure machines at i2b back up here&lt;br /&gt;
* customers are nfs-mounted under /data/customers as file-backed loopback devices&lt;br /&gt;
&lt;br /&gt;
== management scripts ==&lt;br /&gt;
* mkbackups&lt;br /&gt;
&lt;br /&gt;
mkbackup &amp;lt;cid&amp;gt; &amp;lt;GB&amp;gt; &amp;lt;ip&amp;gt;&lt;br /&gt;
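&lt;br /&gt;
The GB argument just becomes a dd block count: mkbackup (its source is under backup3&#039;s Build section) multiplies it by 1000 and writes that many 1 MiB (bs=1024K) blocks. A worked example of that math (the 40 is illustrative):&lt;br /&gt;

```shell
# Reproduce mkbackup's size math: GB arg -> count of 1 MiB blocks (dd bs=1024K)
gb=40                                    # e.g. a 40 "GB" backup
count=$(echo "$gb" | awk '{print $1*1000}')
bytes=$((count * 1024 * 1024))           # resulting backing-file size in bytes
echo "count=$count bytes=$bytes"
```

So a 40 GB request yields a 40,000 MiB backing file, i.e. roughly 39 GiB on disk.&lt;br /&gt;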
&lt;br /&gt;
== Cronjobs ==&lt;br /&gt;
 0 0 * * * /usr/local/sbin/arecaraidchk&lt;br /&gt;
RAID checks&lt;br /&gt;
&lt;br /&gt;
 35 4 * * * /usr/local/sbin/snapshot_archive&lt;br /&gt;
Rotate daily snapshots for infrastructure machine backups&lt;br /&gt;
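&lt;br /&gt;
snapshot_archive itself isn&#039;t reproduced on this page; a minimal sketch of the general shape of such a rotation (dated directories, keep the N newest). The path and keep-count here are made up, not the production values:&lt;br /&gt;

```shell
# Sketch only: keep the $KEEP newest YYYYMMDD-named snapshot dirs under $ARCHIVE.
rotate() {
  ARCHIVE=$1; KEEP=$2
  mkdir -p "$ARCHIVE/$(date +%Y%m%d)"      # today's snapshot dir
  # lexical sort == chronological for YYYYMMDD; drop everything past the newest $KEEP
  ls -1d "$ARCHIVE"/[0-9]* | sort -r | sed -n "$((KEEP+1)),\$p" | while read d; do
    rm -rf "$d"
  done
}
demo=$(mktemp -d)
rotate "$demo" 7
```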
&lt;br /&gt;
== Regular maintenance ==&lt;br /&gt;
*[[Routine_Maintenance#Areca|Check on RAID health]]&lt;br /&gt;
&lt;br /&gt;
== Build ==&lt;br /&gt;
&lt;br /&gt;
=== BIOS Config ===&lt;br /&gt;
disable quiet boot&lt;br /&gt;
&lt;br /&gt;
set to last state after power loss&lt;br /&gt;
&lt;br /&gt;
set date/time to GMT&lt;br /&gt;
&lt;br /&gt;
enable serial console output (baud rate 115200)&lt;br /&gt;
&lt;br /&gt;
=== Install OS ===&lt;br /&gt;
&amp;lt;pre&amp;gt;Ubuntu 10.04.1 amd64 (couldn&#039;t get 12.04 to load because the H/W was incompatible)&lt;br /&gt;
10G / ext3&lt;br /&gt;
2G swap&lt;br /&gt;
~ (remainder of disk) /data ext4&lt;br /&gt;
&lt;br /&gt;
Install packages:&lt;br /&gt;
openssh&lt;br /&gt;
samba&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== DNS and private IP ===&lt;br /&gt;
&lt;br /&gt;
 echo &amp;quot;nameserver 69.55.225.225&amp;quot; &amp;gt;&amp;gt; /etc/resolv.conf&lt;br /&gt;
&lt;br /&gt;
Add a 2nd IP to eth0 and setup priv net&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/network/interfaces&lt;br /&gt;
&lt;br /&gt;
auto eth0&lt;br /&gt;
iface eth0 inet static&lt;br /&gt;
        address 69.55.229.4&lt;br /&gt;
        netmask 255.255.255.0&lt;br /&gt;
        network 69.55.229.0&lt;br /&gt;
        broadcast 69.55.229.255&lt;br /&gt;
        gateway 69.55.229.1&lt;br /&gt;
        # dns-* options are implemented by the resolvconf package, if installed&lt;br /&gt;
        dns-nameservers 69.55.229.3 66.181.0.2&lt;br /&gt;
        dns-search johncompanies.com&lt;br /&gt;
&lt;br /&gt;
auto eth0:1&lt;br /&gt;
iface eth0:1 inet static&lt;br /&gt;
        address 69.55.231.2&lt;br /&gt;
        netmask 255.255.255.0&lt;br /&gt;
        network 69.55.231.0&lt;br /&gt;
        broadcast 69.55.231.255&lt;br /&gt;
&lt;br /&gt;
auto eth1&lt;br /&gt;
iface eth1 inet static&lt;br /&gt;
        address 10.1.2.3&lt;br /&gt;
        netmask 255.255.255.0&lt;br /&gt;
        network 10.1.2.0&lt;br /&gt;
        broadcast 10.1.2.255&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
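&lt;br /&gt;
The netmask/network/broadcast values above are easy to fat-finger; a quick sanity check (not part of the original build) that derives network and broadcast from the address and mask:&lt;br /&gt;

```shell
# Derive network/broadcast for eth0 above and confirm they match the config.
addr=69.55.229.4
mask=255.255.255.0
IFS=. read a1 a2 a3 a4 <<EOF
$addr
EOF
IFS=. read m1 m2 m3 m4 <<EOF
$mask
EOF
network="$((a1 & m1)).$((a2 & m2)).$((a3 & m3)).$((a4 & m4))"
broadcast="$((a1 | 255 - m1)).$((a2 | 255 - m2)).$((a3 | 255 - m3)).$((a4 | 255 - m4))"
echo "network=$network broadcast=$broadcast"
```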
&lt;br /&gt;
=== Install packages ===&lt;br /&gt;
&amp;lt;pre&amp;gt;apt-get update&lt;br /&gt;
apt-get upgrade&lt;br /&gt;
apt-get install gcc&lt;br /&gt;
apt-get install libssl-dev&lt;br /&gt;
apt-get install libncurses5-dev&lt;br /&gt;
apt-get install cu&lt;br /&gt;
apt-get install unzip&lt;br /&gt;
apt-get install snmp snmpd ntp nfs-kernel-server&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== tweak grub, enable serial ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/default/grub&lt;br /&gt;
#GRUB_HIDDEN_TIMEOUT=0&lt;br /&gt;
GRUB_CMDLINE_LINUX_DEFAULT=&amp;quot;max_loop=64&amp;quot;&lt;br /&gt;
GRUB_CMDLINE_LINUX=&amp;quot;console=tty0 console=ttyS0&amp;quot;&lt;br /&gt;
update-grub&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;start on stopped rc RUNLEVEL=[2345]&lt;br /&gt;
stop on runlevel [!2345]&lt;br /&gt;
respawn&lt;br /&gt;
exec /sbin/getty -L ttyS0 38400 vt102&amp;quot; &amp;gt; /etc/init/ttyS0.conf&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== install realport (digi) driver ===&lt;br /&gt;
&lt;br /&gt;
Give the digi an IP with DgIpServ.exe&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /usr/src/&lt;br /&gt;
wget ftp://ftp1.digi.com/support/beta/linux/dgrp/dgrp-1.9.tgz&lt;br /&gt;
tar xzf dgrp-1.9.tgz &lt;br /&gt;
cd dgrp-1.9/&lt;br /&gt;
./configure&lt;br /&gt;
make&lt;br /&gt;
make install&lt;br /&gt;
make postinstall&lt;br /&gt;
update-rc.d dgrp_daemon defaults&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
configure ports:&lt;br /&gt;
 dgrp_cfg_node init el 10.1.2.10 16&lt;br /&gt;
&lt;br /&gt;
try connecting with:&lt;br /&gt;
 cu -l /dev/ttyel00 -s 38400&lt;br /&gt;
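&lt;br /&gt;
The dgrp_cfg_node line above (ID &#039;el&#039;, 16 ports) yields devices /dev/ttyel00 through /dev/ttyel15; a loop to enumerate or sanity-check all 16 after the driver loads:&lt;br /&gt;

```shell
# List the 16 expected realport device names; uncomment the check on the real box.
for n in $(seq -w 0 15); do
  dev=/dev/ttyel$n
  echo "$dev"
  # [ -c "$dev" ] || echo "MISSING: $dev"
done
```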
&lt;br /&gt;
=== shell, ntp, ssh key, hosts ===&lt;br /&gt;
&lt;br /&gt;
Shell autocompletion search:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;\&amp;quot;\e[5~\&amp;quot;: history-search-backward&amp;quot; &amp;gt;&amp;gt; ~/.inputrc&lt;br /&gt;
echo &amp;quot;\&amp;quot;\e[6~\&amp;quot;: history-search-forward&amp;quot; &amp;gt;&amp;gt; ~/.inputrc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Setup ntp:&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/ntp.conf&lt;br /&gt;
server 10.1.2.1&lt;br /&gt;
server ntp.ubuntu.com&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Generate ssh keys:&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /root/&lt;br /&gt;
ssh-keygen -t dsa&amp;lt;/pre&amp;gt;&lt;br /&gt;
Defaults, no password&lt;br /&gt;
&lt;br /&gt;
Setup hosts:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;69.55.230.10 backup2&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;69.55.230.11 backup1&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;10.1.2.4 bwdb2&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;10.1.2.3 backup3&amp;quot; &amp;gt;&amp;gt; /etc/hosts&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy keys to servers where we need passwordless login:&lt;br /&gt;
&amp;lt;pre&amp;gt;cat .ssh/id_dsa.pub | ssh backup2 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
cat .ssh/id_dsa.pub | ssh backup1 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039;&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Setup shell:&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /root/.bashrc&lt;br /&gt;
(add to bottom)&lt;br /&gt;
alias h=&#039;history&#039;&lt;br /&gt;
alias vi=&#039;vim&#039;&lt;br /&gt;
alias j=&#039;jobs&#039;&lt;br /&gt;
export PS1=&amp;quot;[\u@\h \w]# &amp;quot;&lt;br /&gt;
alias dr=&#039;screen -dr&#039;&lt;br /&gt;
export EDITOR=vim&lt;br /&gt;
export GREP_OPTIONS=&#039;--color=auto&#039;&lt;br /&gt;
export HISTFILESIZE=1000&lt;br /&gt;
&lt;br /&gt;
alias tip-switch-p20=&#039;cu -l ttyel00 -s 9600&#039;&lt;br /&gt;
alias tip-switch-p21=&#039;cu -l ttyel15 -s 9600&#039;&lt;br /&gt;
alias tip-switch-p22=&#039;cu -l ttyel14 -s 9600&#039;&lt;br /&gt;
alias tip-switch-p23=&#039;cu -l ttyel05 -s 9600&#039;&lt;br /&gt;
alias tip-switch-p24=&#039;cu -l ttyel06 -s 9600&#039;&lt;br /&gt;
alias tip-switch-p25=&#039;cu -l ttyel09 -s 9600&#039;&lt;br /&gt;
alias tip-switch-p26=&#039;cu -l ttyel07 -s 9600&#039;&lt;br /&gt;
alias tip-switch-p27=&#039;cu -l ttyel08 -s 9600&#039;&lt;br /&gt;
alias tip-firewall2=&#039;cu -l ttyel01 -s 115200&#039;&lt;br /&gt;
alias tip-nat2=&#039;cu -l /dev/ttyel02 -s 115200&#039;&lt;br /&gt;
alias tip-backup3=&#039;cu -l ttyel04 -s 38400&#039;&lt;br /&gt;
alias tip-bwdb2=&#039;cu -l ttyel03 -s 115200&#039;&lt;br /&gt;
alias tip-backup4=&#039;cu -l ttyel13 -s 115200&#039;&lt;br /&gt;
alias tip-jail3=&#039;cu -l ttyel11 -s 115200&#039;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Load new shell:&lt;br /&gt;
 source /root/.bashrc&lt;br /&gt;
&lt;br /&gt;
Setup snmpd (this is only valid for a server at castle):&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &#039;rocommunity  jcread 10.1.4.5&lt;br /&gt;
rocommunity  jcread 10.1.4.3&lt;br /&gt;
agentaddress 10.1.4.8:161&#039; &amp;gt; /etc/snmp/snmpd.conf&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To see which iface it is, on backup2:&lt;br /&gt;
 snmpwalk -v 1 -c jcread 10.1.4.8 interface&lt;br /&gt;
&lt;br /&gt;
=== nfs ===&lt;br /&gt;
&lt;br /&gt;
Allow mounts from private net:&lt;br /&gt;
 echo &#039;/data 10.1.2.0/24(rw,no_root_squash,async,no_subtree_check)&#039; &amp;gt;&amp;gt; /etc/exports&lt;br /&gt;
&lt;br /&gt;
Restart nfsd:&lt;br /&gt;
 /etc/init.d/nfs-kernel-server restart&lt;br /&gt;
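&lt;br /&gt;
Note the option list must contain no spaces: &#039;(rw, no_root_squash,...)&#039; would be split into two export entries. A quick lint for that mistake (a hypothetical helper, not an existing script):&lt;br /&gt;

```shell
# Flag any /etc/exports line whose parenthesized option list contains a space.
# Returns 1 (and prints the offending lines) if the pitfall is present.
check_exports() {
  grep -nE '\([^)]* ' "$1" && return 1 || return 0
}
```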
&lt;br /&gt;
=== bb ===&lt;br /&gt;
&lt;br /&gt;
Add user, group:&lt;br /&gt;
 echo &amp;quot;bb:x:1984:1984:Big Brother:/home/bb:/bin/bash&amp;quot; &amp;gt;&amp;gt; /etc/passwd&lt;br /&gt;
 echo &amp;quot;bb:x:1984:&amp;quot; &amp;gt;&amp;gt; /etc/group&lt;br /&gt;
 pwconv&lt;br /&gt;
&lt;br /&gt;
Create home:&lt;br /&gt;
 mkdir /home/bb&lt;br /&gt;
 chown bb.bb /home/bb&lt;br /&gt;
 cd ~bb&lt;br /&gt;
&lt;br /&gt;
Copy over and install files:&lt;br /&gt;
&amp;lt;pre&amp;gt;scp backup2:/mnt/data4/build/bb/bb-linux.tar .&lt;br /&gt;
tar xf bb-linux.tar&lt;br /&gt;
cd /home/bb/bbc1.9e-btf/etc&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Configure main bb server:&lt;br /&gt;
 echo &amp;quot;69.55.230.2 mail.johncompanies.com # BBPAGER BBNET BBDISPLAY smtp ssh&amp;quot; &amp;gt; /home/bb/bbc1.9e-btf/etc/bb-hosts&lt;br /&gt;
 echo &amp;quot;10.1.2.3 backup3.johncompanies.com # ssh&amp;quot; &amp;gt;&amp;gt; /home/bb/bbc1.9e-btf/etc/bb-hosts&lt;br /&gt;
&lt;br /&gt;
Configure low disk alerts:&lt;br /&gt;
&amp;lt;pre&amp;gt;echo &amp;quot;/:90:95&lt;br /&gt;
/var:90:95&lt;br /&gt;
/data:85:99&amp;quot; &amp;gt; /home/bb/bbc1.9e-btf/etc/bb-dftab&amp;lt;/pre&amp;gt;&lt;br /&gt;
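&lt;br /&gt;
What bb does with these warn:panic percentages, roughly (a sketch, not bb&#039;s actual code): a filesystem&#039;s use% is compared against the two thresholds from bb-dftab:&lt;br /&gt;

```shell
# df_status <use%> <warn> <panic>: green below warn, yellow at/above warn, red at/above panic
df_status() {
  pct=$1 warn=$2 panic=$3
  if [ "$pct" -ge "$panic" ]; then echo red
  elif [ "$pct" -ge "$warn" ]; then echo yellow
  else echo green; fi
}
df_status 92 90 95   # e.g. "/" at 92% with the "90:95" entry above
```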
&lt;br /&gt;
 vi /home/bb/bbc1.9e-btf/bin/bb-disk.sh&lt;br /&gt;
(remove all | SORT xxxx since SORT is broken)&lt;br /&gt;
&lt;br /&gt;
 chmod +r /var/log/messages&lt;br /&gt;
&lt;br /&gt;
 ./bbchkcfg.sh &lt;br /&gt;
(y to questions)&lt;br /&gt;
 ./bbchkhosts.sh&lt;br /&gt;
(ignore ssh errors)&lt;br /&gt;
&amp;lt;pre&amp;gt;cd ../..&lt;br /&gt;
chown -R bb .&lt;br /&gt;
su bb&lt;br /&gt;
cd&lt;br /&gt;
cd bbc1.9e-btf/src&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;make; make install&lt;br /&gt;
cd ..&lt;br /&gt;
./runbb.sh start&lt;br /&gt;
more BBOUT&amp;lt;/pre&amp;gt; &lt;br /&gt;
(look for errors)&lt;br /&gt;
 exit&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vi /etc/rc.local&lt;br /&gt;
su - bb -c &amp;quot;cd /home/bb/bbc1.9e-btf; ./runbb.sh start&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(before the exit 0)&lt;br /&gt;
&lt;br /&gt;
 echo &#039;chmod o+r /var/log/messages&#039; &amp;gt;&amp;gt; /etc/cron.weekly/sysklogd&lt;br /&gt;
&lt;br /&gt;
Add f/w rule:&lt;br /&gt;
 ipfw add 00096 allow ip from { 69.55.229.4 or 69.55.229.3 } to 69.55.230.2 1984&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vi ~bb/bbc1.9e-btf/etc/bbdef-client.sh&lt;br /&gt;
DFWARN=199&lt;br /&gt;
DFPANIC=199&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== raid check ===&lt;br /&gt;
&lt;br /&gt;
==== 3ware ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
scp backup1:/usr/local/sbin/tw_cli /usr/local/sbin/tw_cli&lt;br /&gt;
scp backup1:/usr/local/sbin/checkraid.sh /usr/local/sbin/checkraid.sh&lt;br /&gt;
scp backup1:/usr/local/sbin/3wraidchk /usr/local/sbin/3wraidchk&lt;br /&gt;
vi /usr/local/sbin/checkraid.sh&lt;br /&gt;
:%s/c0/c2/g&lt;br /&gt;
&lt;br /&gt;
crontab -e&lt;br /&gt;
0 0 * * * /usr/local/sbin/3wraidchk&amp;lt;/pre&amp;gt;&lt;br /&gt;
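&lt;br /&gt;
checkraid.sh / 3wraidchk aren&#039;t reproduced in this wiki; the heart of such a check is just &#039;does any unit line report something other than OK&#039;. A sketch (the tw_cli unit-line layout here is an assumption, not captured output):&lt;br /&gt;

```shell
# raid_alert: read "tw_cli info cN"-style unit lines on stdin, e.g.
#   u0    RAID-5    OK    -    -    64K    931.303   ON
# Print any unit whose status column is not OK and exit nonzero if one was found.
raid_alert() {
  awk '$1 ~ /^u[0-9]+$/ && $3 != "OK" { print; bad = 1 } END { exit bad }'
}
```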
&lt;br /&gt;
==== areca ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /tmp&lt;br /&gt;
wget http://www.areca.us/support/s_linux/cli/linuxcli_V1.10.0_120815.zip&lt;br /&gt;
unzip linuxcli_V1.10.0_120815.zip&lt;br /&gt;
cp linuxcli_V1.10.0_120815/x86_64/cli64 /usr/local/sbin/&lt;br /&gt;
chmod 0700 /usr/local/sbin/cli64&lt;br /&gt;
cli64 rsf info&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;scp backup2:/data4/bin/arecaraidchk /usr/local/sbin&lt;br /&gt;
scp backup1:/usr/local/sbin/Sendmail.pm /usr/local/sbin&lt;br /&gt;
&lt;br /&gt;
crontab -e&lt;br /&gt;
0 0 * * * /usr/local/sbin/arecaraidchk&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cat &amp;gt; /root/verify.sh&lt;br /&gt;
cli64 vsf info&lt;br /&gt;
cli64 rsf info&lt;br /&gt;
cli64 disk info&lt;br /&gt;
cli64 event info&lt;br /&gt;
echo press enter when ready to run verify ; read x&lt;br /&gt;
&lt;br /&gt;
cli64 vsf check vol=1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== misc binaries ===&lt;br /&gt;
&lt;br /&gt;
 scp backup1:/usr/local/sbin/snapshot_archive /usr/local/sbin/snapshot_archive&lt;br /&gt;
 vi /usr/local/sbin/snapshot_archive&lt;br /&gt;
(remove entries)&lt;br /&gt;
&lt;br /&gt;
 crontab -e&lt;br /&gt;
 35 4 * * * /usr/local/sbin/snapshot_archive&lt;br /&gt;
&lt;br /&gt;
 scp backup1:/usr/local/sbin/pagedave /usr/local/sbin/pagedave&lt;br /&gt;
 scp backup1:/usr/local/sbin/taskdone /usr/local/sbin/taskdone&lt;br /&gt;
&lt;br /&gt;
Since installing /bin/mail requires all sorts of packages (lame), we write a simple one here, which can only email johncompanies.com addresses unless you add relaying for this host:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat &amp;gt; /bin/mail&lt;br /&gt;
#!/usr/bin/perl&lt;br /&gt;
use strict;&lt;br /&gt;
use warnings;&lt;br /&gt;
&lt;br /&gt;
use lib &#039;/usr/local/sbin&#039;;&lt;br /&gt;
use Sendmail qw(sendmail);&lt;br /&gt;
&lt;br /&gt;
my $sub = $ARGV[1];&lt;br /&gt;
my $to = $ARGV[2];&lt;br /&gt;
&lt;br /&gt;
# read the message body from stdin, like the real mail(1)&lt;br /&gt;
my $body = do { local $/; &amp;lt;STDIN&amp;gt; } || &#039;&#039;;&lt;br /&gt;
&lt;br /&gt;
my %mail = (&lt;br /&gt;
   To      =&amp;gt; $to,&lt;br /&gt;
   From    =&amp;gt; $to,&lt;br /&gt;
   Subject =&amp;gt; $sub,&lt;br /&gt;
   Message =&amp;gt; $body,&lt;br /&gt;
   smtp    =&amp;gt; &#039;mail.johncompanies.com&#039;&lt;br /&gt;
);&lt;br /&gt;
sendmail(%mail) || print &amp;quot;Error: $Sendmail::error&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 chmod 0700 /bin/mail&lt;br /&gt;
&lt;br /&gt;
=== mkbackup ===&lt;br /&gt;
&lt;br /&gt;
 mkdir /data/customers&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cat &amp;gt; /usr/local/sbin/mkbackup&lt;br /&gt;
#!/bin/sh&lt;br /&gt;
&lt;br /&gt;
if test $1; then&lt;br /&gt;
  cid=$1&lt;br /&gt;
else&lt;br /&gt;
  echo &amp;quot;ERROR: Usage: mkbackup cid GB ip  Terminating.&amp;quot;&lt;br /&gt;
  exit&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if test $2; then&lt;br /&gt;
  gb=$2&lt;br /&gt;
else&lt;br /&gt;
  echo &amp;quot;ERROR: Usage: mkbackup cid GB ip  Terminating.&amp;quot;&lt;br /&gt;
  exit&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if test $3; then&lt;br /&gt;
  ip=$3&lt;br /&gt;
else&lt;br /&gt;
  echo &amp;quot;ERROR: Usage: mkbackup cid GB ip  Terminating.&amp;quot;&lt;br /&gt;
  exit&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
if test -e /data/customers/${cid}-file; then&lt;br /&gt;
  echo &amp;quot;ERROR: /data/customers/${cid}-file exists&amp;quot;&lt;br /&gt;
  exit&lt;br /&gt;
else&lt;br /&gt;
  echo &amp;quot;touch /data/customers/${cid}-file&amp;quot;&lt;br /&gt;
  touch /data/customers/${cid}-file&lt;br /&gt;
  count=`echo $gb|awk &#039;{print $1*1000}&#039;`&lt;br /&gt;
  echo &amp;quot;dd if=/dev/zero of=/data/customers/${cid}-file bs=1024K count=$count&amp;quot;&lt;br /&gt;
  dd if=/dev/zero of=/data/customers/${cid}-file bs=1024K count=$count&lt;br /&gt;
  echo &amp;quot;/sbin/mkfs -t ext3 -F -j -q /data/customers/${cid}-file&amp;quot;&lt;br /&gt;
  /sbin/mkfs -t ext3 -F -j -q /data/customers/${cid}-file&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
if test -e /data/customers/$cid; then&lt;br /&gt;
  echo &amp;quot;ERROR: /data/customers/$cid exists&amp;quot;&lt;br /&gt;
  exit&lt;br /&gt;
else&lt;br /&gt;
  echo &amp;quot;mkdir /data/customers/${cid}&amp;quot;&lt;br /&gt;
  mkdir /data/customers/${cid}&lt;br /&gt;
  echo &amp;quot;mount -o loop /data/customers/${cid}-file /data/customers/$cid&amp;quot;&lt;br /&gt;
  mount -o loop /data/customers/${cid}-file /data/customers/$cid&lt;br /&gt;
  df -h /data/customers/$cid&lt;br /&gt;
&lt;br /&gt;
  echo &amp;quot;fsck -y /data/customers/${cid}-file&amp;quot; &amp;gt;&amp;gt; /etc/nfs_backup_mounts.sh&lt;br /&gt;
  echo &amp;quot;mount -o loop /data/customers/${cid}-file /data/customers/$cid&amp;quot; &amp;gt;&amp;gt; /etc/nfs_backup_mounts.sh&lt;br /&gt;
  echo &amp;quot;&amp;quot; &amp;gt;&amp;gt; /etc/nfs_backup_mounts.sh&lt;br /&gt;
&lt;br /&gt;
  echo &amp;quot;/data/customers/$cid $ip/32(rw,no_root_squash,async,no_subtree_check)&amp;quot; &amp;gt;&amp;gt; /etc/exports&lt;br /&gt;
  /etc/init.d/nfs-kernel-server restart&lt;br /&gt;
  tail /var/log/messages&lt;br /&gt;
fi&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 chmod 0700 /usr/local/sbin/mkbackup&lt;br /&gt;
&lt;br /&gt;
 vi /etc/rc.local&lt;br /&gt;
add:&lt;br /&gt;
 /etc/nfs_backup_mounts.sh&lt;br /&gt;
&lt;br /&gt;
=== samba ===&lt;br /&gt;
&lt;br /&gt;
 apt-get install samba&lt;br /&gt;
&lt;br /&gt;
 vi /etc/samba/smb.conf&lt;br /&gt;
&lt;br /&gt;
; comment out any mounts, add:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[data]&lt;br /&gt;
   read only = yes&lt;br /&gt;
   locking = no&lt;br /&gt;
   path = /data/iso&lt;br /&gt;
   guest ok = yes&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 /etc/init.d/smbd restart&lt;br /&gt;
&lt;br /&gt;
 mkdir /data/iso&lt;br /&gt;
&lt;br /&gt;
Bring over some stuff from backup2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /data/iso&lt;br /&gt;
scp backup2:/d2/iso/3wfirmware.iso .&lt;br /&gt;
scp backup2:/d2/iso/MD5SUMS .&lt;br /&gt;
scp backup2:/d2/iso/bootimg.iso .&lt;br /&gt;
scp backup2:/d2/iso/systemrescuecd-x86-0.2.19.iso .&lt;br /&gt;
scp backup2:/d2/iso/win98bootcd.iso .&lt;br /&gt;
scp backup2:/d2/iso/acronis_bootdisk.iso .&lt;br /&gt;
scp backup2:/d2/iso/memtest86-3.2.iso .&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Moving from one server to another ===&lt;br /&gt;
&lt;br /&gt;
Here are the steps you would take to move settings and data from one server to a new backup server:&lt;br /&gt;
&lt;br /&gt;
* rsync over all /data/customers (we do this because if we didn&#039;t limit it to *-file we&#039;d copy both the backing files AND the data visible inside the mountpoints)&lt;br /&gt;
 rsync -av --progress --ignore-times *-file root@10.1.2.33:/data/customers/&lt;br /&gt;
after umounting all the customers, copy over the (empty) directories separately:&lt;br /&gt;
 for f in `find .  -type d`; do rsync -av $f root@69.55.229.25:/data/customers; done&lt;br /&gt;
&lt;br /&gt;
* copy mount script&lt;br /&gt;
 [root@backup3 /data/customers]# scp /etc/nfs_backup_mounts.sh root@69.55.229.25:/etc/nfs_backup_mounts.sh&lt;br /&gt;
&lt;br /&gt;
* copy rc.local&lt;br /&gt;
 [root@backup3 /data/customers]# scp /etc/rc.local root@69.55.229.25:/etc/rc.local&lt;br /&gt;
&lt;br /&gt;
* copy /etc/exports&lt;br /&gt;
 [root@backup3 /data/customers]# scp /etc/exports root@69.55.229.25:/etc/exports&lt;br /&gt;
&lt;br /&gt;
* edit /etc/hostname on both machines (set current to oldbackup3)&lt;br /&gt;
&lt;br /&gt;
* edit /etc/network/interfaces (swap IPs). &lt;br /&gt;
&lt;br /&gt;
* stop /etc/nfs_backup_mounts.sh from running on the old and new servers so the mounts don&#039;t come back right away with the reboot:&lt;br /&gt;
 chmod 000 /etc/nfs_backup_mounts.sh&lt;br /&gt;
&lt;br /&gt;
* reboot both servers @ same time&lt;br /&gt;
&lt;br /&gt;
* check everything out&lt;br /&gt;
&lt;br /&gt;
* run /etc/nfs_backup_mounts.sh on new server&lt;br /&gt;
&lt;br /&gt;
* if switch port changed update mrtg to reflect correct port pub nic is on (on p20):&lt;br /&gt;
 vi /usr/local/www/mgmt/mrtg/mrtg1.cfg&lt;br /&gt;
&lt;br /&gt;
= backup4 =&lt;br /&gt;
== Summary ==&lt;br /&gt;
This machine is used for archiving data, is a backup server for colo customers, and runs a samba server that makes ISOs available to the IPKVMs. Only FreeBSD virt customers directly access this server to perform their backups.&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-7&lt;br /&gt;
* OS: FreeNAS 9.3 (FreeBSD 9.3)&lt;br /&gt;
* Networking: Priv IPs: 10.1.2.9/24 AND 10.1.7.9/24,  Pub IP: 69.55.230.6/24 &lt;br /&gt;
* Hardware: JC-08014&lt;br /&gt;
            Intel S5000VSA Motherboard&lt;br /&gt;
            1 x Intel Xeon E5410  @ 2.33GHz CPU&lt;br /&gt;
            3ware 9690SA-8I RAID Card w BBU&lt;br /&gt;
            16GB RAM&lt;br /&gt;
            Dual power supply.&lt;br /&gt;
* Drives: 7 TB (6 x 2TB) ZFS RAIDZ2 array running on JBOD&lt;br /&gt;
       1 x 128 GB SSD system drive and 6 SATA drive bays (3 columns of 2, drive 0 top left, drive 1 just below), all hot-swap.&lt;br /&gt;
&lt;br /&gt;
* GUI management at http://backup4.johncompanies.com&lt;br /&gt;
&lt;br /&gt;
== Services provided ==&lt;br /&gt;
* backup via rsync and nfs&lt;br /&gt;
* samba&lt;br /&gt;
* nfs&lt;br /&gt;
* snmp?&lt;br /&gt;
* bigbrother?&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
* all data is stored under /data&lt;br /&gt;
* ISO images for customers wanting to mount an ISO as a CDROM via the IPKVM are served via samba on this server. Images live under &amp;lt;tt&amp;gt;/data/iso&amp;lt;/tt&amp;gt;  ??&lt;br /&gt;
* this server serves as an archive for exported db data from bwdb and old flow files. ??&lt;br /&gt;
* customers are nfs-mounted under /data/users (/mnt/zfs/users) as zfs ?&lt;br /&gt;
&lt;br /&gt;
== management scripts ==&lt;br /&gt;
* mkbackups?&lt;br /&gt;
&lt;br /&gt;
mkbackup &amp;lt;cid&amp;gt; GB &amp;lt;ip&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Cronjobs ==&lt;br /&gt;
 0 0 * * * /usr/local/sbin/arecaraidchk  &lt;br /&gt;
RAID checks ?&lt;br /&gt;
&lt;br /&gt;
 35 4 * * * /usr/local/sbin/snapshot_archive&lt;br /&gt;
Rotate daily snapshots for infrastructure machine backups&lt;br /&gt;
&lt;br /&gt;
 00 15 * * * /usr/local/sbin/snapshot_rotate&lt;br /&gt;
Rotate daily snapshots for customer machine backups&lt;br /&gt;
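The generation-rotation logic both snapshot cronjobs rely on can be sketched generically as follows. This is only an illustration of the rotate-and-shift pattern using a throwaway directory tree; the names &amp;lt;tt&amp;gt;daily.N&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;KEEP&amp;lt;/tt&amp;gt; are assumptions, not taken from the actual snapshot_rotate/snapshot_archive scripts (which on this ZFS box may operate on ZFS snapshots rather than directories):&lt;br /&gt;

```shell
#!/bin/sh
# Illustrative rotation sketch (NOT the actual snapshot_rotate script):
# keep KEEP generations named daily.0 (newest) .. daily.KEEP-1 (oldest).
KEEP=3
BASE=$(mktemp -d)                  # stand-in for the snapshot root
mkdir "$BASE/daily.0" "$BASE/daily.1" "$BASE/daily.2"
echo newest > "$BASE/daily.0/marker"

rm -rf "$BASE/daily.$((KEEP - 1))" # drop the oldest generation
i=$((KEEP - 1))
while [ "$i" -gt 0 ]; do           # shift the survivors down one slot
    prev=$((i - 1))
    if [ -d "$BASE/daily.$prev" ]; then
        mv "$BASE/daily.$prev" "$BASE/daily.$i"
    fi
    i=$prev
done
mkdir "$BASE/daily.0"              # fresh slot for today's snapshot

cat "$BASE/daily.1/marker"         # yesterday's snapshot is now daily.1
```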
&lt;br /&gt;
== Regular maintenance ==&lt;br /&gt;
*[[Routine_Maintenance#A|Check on RAID health]]&lt;br /&gt;
&lt;br /&gt;
== Build ==&lt;br /&gt;
&lt;br /&gt;
= console = &lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
This box&#039;s only purpose is to serve as a means to connect to the digi serial multiplexer boxes at castle. Connect to it using the blue (cisco) ribbon cable with the beige RJ-45 to serial connector, 9600 8N1.&lt;br /&gt;
&lt;br /&gt;
* Location: castle, cab 3-8&lt;br /&gt;
* OS: SunOS 5.8 (solaris)&lt;br /&gt;
* Networking: Priv IP: 10.1.4.4&lt;br /&gt;
* Hardware: Sun Netra&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To connect to consoles, ssh in as user &#039;console&#039; and use the &amp;lt;tt&amp;gt;tip&amp;lt;/tt&amp;gt; command to connect to devices listed in &amp;lt;tt&amp;gt;/etc/remote&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
i.e.&lt;br /&gt;
 tip switch-p1&lt;br /&gt;
 tip jail1&lt;br /&gt;
&lt;br /&gt;
== Configuring digi/ports == &lt;br /&gt;
&lt;br /&gt;
=== /etc/remote ===&lt;br /&gt;
This file holds the configuration/mapping of ports to custom names, which we use along with the tip command to connect to various ports on the digi switches.&lt;br /&gt;
&lt;br /&gt;
We have 2 digis at castle that we connect to:&lt;br /&gt;
&lt;br /&gt;
 #3-7 10.1.4.10&lt;br /&gt;
 virt15:dv=/dev/dty/CO001s:br#38400:el=^C^S^Q^U^D:ie=%$:oe=^D:&lt;br /&gt;
 virt13:dv=/dev/dty/CO002s:br#115200:el=^C^S^Q^U^D:ie=%$:oe=^D:hf:&lt;br /&gt;
&lt;br /&gt;
and &lt;br /&gt;
&lt;br /&gt;
 #3-6 10.1.4.11&lt;br /&gt;
 jail4:dv=/dev/dty/CP001s:br#9600:el=^C^S^Q^U^D:ie=%$:oe=^D:hf:&lt;br /&gt;
 jail16:dv=/dev/dty/CP002s:br#9600:el=^C^S^Q^U^D:ie=%$:oe=^D:hf:&lt;br /&gt;
&lt;br /&gt;
The only things you need to edit are the first part (i.e. &amp;lt;tt&amp;gt;jail4&amp;lt;/tt&amp;gt;) and the speed (i.e. &amp;lt;tt&amp;gt;9600&amp;lt;/tt&amp;gt;). You can tell which digi port each line corresponds to from the device name: &amp;lt;tt&amp;gt;CO001s&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;CP001s&amp;lt;/tt&amp;gt; is port 1 on digi1 or digi2, &amp;lt;tt&amp;gt;CO002s&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;CP002s&amp;lt;/tt&amp;gt; is port 2, and so on.&lt;br /&gt;
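Since only the name, device path, and speed vary between entries, a new /etc/remote line can be generated mechanically. A small sketch (the &amp;lt;tt&amp;gt;mkremote&amp;lt;/tt&amp;gt; helper name is made up; the capability string matches the entries shown above):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: emit an /etc/remote line for a given name, digi device, and speed.
# Device paths like /dev/dty/CP001s come from the digi driver, as described above.
mkremote() {
    name=$1; dev=$2; speed=$3
    printf '%s:dv=%s:br#%s:el=^C^S^Q^U^D:ie=%%$:oe=^D:hf:\n' "$name" "$dev" "$speed"
}

# Reproduces the jail4 entry shown above:
mkremote jail4 /dev/dty/CP001s 9600
```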
&lt;br /&gt;
=== drpadmin ===&lt;br /&gt;
The tool used to attach a device to a digi box is drpadmin:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;bash-2.03$ su&lt;br /&gt;
Password:&lt;br /&gt;
# drpadmin&lt;br /&gt;
&lt;br /&gt;
Please select an option (a)dd (d)elete (s)how (r)eset (q)uit : s&lt;br /&gt;
0       10.1.4.10       32      CO      771     never   1027&lt;br /&gt;
1       10.1.4.11       32      CP      771     never   1027&lt;br /&gt;
2       65.116.11.2     8       el      771     never   1027&lt;br /&gt;
&lt;br /&gt;
Please select an option (a)dd (d)elete (s)how (r)eset (q)uit :&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Use the options shown above to add, delete, or reset the available devices.&lt;br /&gt;
&lt;br /&gt;
== Switching IP/hostname ==&lt;br /&gt;
&lt;br /&gt;
Edit:&lt;br /&gt;
 /etc/defaultrouter&lt;br /&gt;
 /etc/hosts&lt;br /&gt;
 /etc/hostname.hme0&lt;br /&gt;
 /etc/nodename&lt;br /&gt;
 You may also need to run: # ifconfig hme0 10.1.4.4 up&lt;br /&gt;
&lt;br /&gt;
= devweb =&lt;br /&gt;
&lt;br /&gt;
We do web development on devweb.johncompanies.com&lt;br /&gt;
&lt;br /&gt;
Currently this is a jail running on jail17 / 69.55.230.8&lt;br /&gt;
&lt;br /&gt;
If the jail is restarted, you will need to manually restart the web service with:&lt;br /&gt;
 httpsdctl restart&lt;br /&gt;
&lt;br /&gt;
All website development work should be done here first. It works and is set up exactly like our [[Management_System_/_Public_Website_/_Signup_/_Account_Manager|main site]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= firewall2 =&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
This machine is the primary firewall for the entire network at i2b. firewall3 is a hot-standby replacement for&lt;br /&gt;
firewall2. firewall2 and firewall3 must not be connected at the same time, since they use the same internal&lt;br /&gt;
and external IP addresses.&lt;br /&gt;
&lt;br /&gt;
* Location: i2b, cab 6&lt;br /&gt;
* OS: FreeBSD 6.4 x86&lt;br /&gt;
* Networking: Priv IP: 10.1.2.2, Pub IPs: 66.181.18.3 (external), 69.55.229.1 &amp;amp; 69.55.231.1 (internal). It has 3 network connections (2 onboard, 1 PCI) connecting to the external, internal and private networks. TODO: describe NIC location/orientation&lt;br /&gt;
&lt;br /&gt;
* Hardware: 2 SCSI SCA drive bays (2 columns of 3, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: 73 GB (2 x 73GB) RAID1 array running on an LSI MegaRAID SCSI 320 PCI RAID card.&lt;br /&gt;
&lt;br /&gt;
== Services Provided ==&lt;br /&gt;
* firewall (ipfw)&lt;br /&gt;
* bigbrother for customer machines&lt;br /&gt;
&lt;br /&gt;
== Firewall Rule Configuration ==&lt;br /&gt;
&lt;br /&gt;
See [[FreeBSD_Reference#Firewall_Rule_Configuration|Firewall Rule Configuration]] for more discussion on how to actually manipulate firewall rules.&lt;br /&gt;
&lt;br /&gt;
== Disaster Recovery ==&lt;br /&gt;
&lt;br /&gt;
TODO: need backup f/w and instructions on how to move cables.&lt;br /&gt;
&lt;br /&gt;
Here&#039;s what you need to put in /etc/rc.conf to get a firewall going (as far as routes and IPs)&lt;br /&gt;
&lt;br /&gt;
TODO&lt;br /&gt;
&lt;br /&gt;
Here&#039;s the config on the live firewall:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;kern_securelevel_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
portmap_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
sendmail_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
usbd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
gateway_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
xntpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_client_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_reserved_port_only=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_flags=&amp;quot;-wW -a 10.1.2.2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
fsck_y_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
background_fsck=&amp;quot;NO&amp;quot;&lt;br /&gt;
&lt;br /&gt;
defaultrouter=&amp;quot;66.181.18.2&amp;quot;&lt;br /&gt;
hostname=&amp;quot;firewall2.johncompanies.com&amp;quot;&lt;br /&gt;
ifconfig_bge0=&amp;quot;inet 66.181.18.3  netmask 255.255.255.224&amp;quot;&lt;br /&gt;
ifconfig_bge1=&amp;quot;inet 69.55.229.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_bge1_alias0=&amp;quot;inet 69.55.231.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_bge1_alias1=&amp;quot;inet 65.50.228.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_bge1_alias2=&amp;quot;inet 65.50.229.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_bge1_alias3=&amp;quot;inet 65.50.230.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_bge1_alias4=&amp;quot;inet 65.50.231.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_bge1_alias5=&amp;quot;inet 65.50.232.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_bge1_alias6=&amp;quot;inet 65.50.233.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_bge1_alias7=&amp;quot;inet 65.50.234.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_bge1_alias8=&amp;quot;inet 65.50.235.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_fxp0=&amp;quot;inet 10.1.2.2 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
sshd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
usbd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Cronjobs ==&lt;br /&gt;
 30 3 * * * /usr/local/etc/rsync.backup&lt;br /&gt;
Backup to backup3&lt;br /&gt;
&lt;br /&gt;
 0 0 1 * * /sbin/ipfw zero&lt;br /&gt;
 0 0 1 * * /sbin/ipfw del 3 4 5&lt;br /&gt;
Reset counters and remove pipe rules on the 1st of the month. Be careful when numbering a rule 3, 4, or 5: those rule numbers are deleted on the 1st, so don&#039;t use them for anything that isn&#039;t a temporary traffic cap. &lt;br /&gt;
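A temporary traffic cap of the kind referred to above might look like the following sketch (the pipe number, target IP, and bandwidth here are made up for illustration; rule 3 is used because the monthly cron cleans up rules 3, 4, and 5):&lt;br /&gt;

```shell
# Hypothetical temporary cap: limit 69.55.229.99 to 1 Mbit/s using rule 3,
# which the monthly cron (ipfw del 3 4 5) will remove automatically.
ipfw pipe 1 config bw 1Mbit/s
ipfw add 3 pipe 1 ip from any to 69.55.229.99
ipfw add 3 pipe 1 ip from 69.55.229.99 to any
```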
&lt;br /&gt;
 59 23 30 * * /sbin/ipfw show &amp;gt; /tmp/ipfw_count&lt;br /&gt;
 3 0 30 * * /sbin/ipfw show &amp;gt; /tmp/ipfw_count&lt;br /&gt;
Capture counts periodically&lt;br /&gt;
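The captured counts in /tmp/ipfw_count can be totaled with a one-liner, since &amp;lt;tt&amp;gt;ipfw show&amp;lt;/tt&amp;gt; prints rule number, packet count, byte count, then the rule body. A sketch using made-up sample output:&lt;br /&gt;

```shell
#!/bin/sh
# Sum the byte counters (3rd column) of saved `ipfw show` output.
# The two sample rules below are illustrative, not real counters.
printf '%s\n' \
  '00009    1500   120000 count udp from any to any' \
  '00010   98000 73400000 allow tcp from any to any established' \
  | awk '{ total += $3 } END { print total }'
```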
&lt;br /&gt;
 0 3 * * * /etc/makefwrules.pl; /etc/makepiperules.pl;&lt;br /&gt;
These scripts create &amp;lt;tt&amp;gt;/etc/firewall.sh&amp;lt;/tt&amp;gt;, which contains all the firewall and pipe rules in place at the time they were run.&lt;br /&gt;
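The core idea can be sketched in a couple of lines of shell: each saved rule from &amp;lt;tt&amp;gt;ipfw list&amp;lt;/tt&amp;gt;-style output gets prefixed with &amp;lt;tt&amp;gt;ipfw add&amp;lt;/tt&amp;gt; so the file replays as a script. This illustrates the approach only, not the contents of makefwrules.pl, and the sample rules are made up:&lt;br /&gt;

```shell
#!/bin/sh
# Turn `ipfw list`-style output into a replayable rule script by
# prefixing each saved rule with `ipfw add`.
printf '%s\n' \
  '00010 allow tcp from any to any established' \
  '00014 deny tcp from any to any dst-port 135' \
  | sed 's/^/ipfw add /'
```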
&lt;br /&gt;
 */5 * * * * /usr/local/sbin/lsiraidchk&lt;br /&gt;
Checking the health of the RAID array&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== DOS attacks ==&lt;br /&gt;
&lt;br /&gt;
See [[FreeBSD_Reference#Handling_a_DoS_attack|Handling a DoS attack]] regarding how to handle a DOS attack.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== build ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;partition map:&lt;br /&gt;
/ 58g&lt;br /&gt;
swap 4g&lt;br /&gt;
/var 512m&lt;br /&gt;
/tmp 512m&lt;br /&gt;
/usr 5.5g&lt;br /&gt;
&lt;br /&gt;
4. edit /etc/make.conf &lt;br /&gt;
echo &amp;quot;WITHOUT_X11=yes \&lt;br /&gt;
KERNCONF=firewall2 \&lt;br /&gt;
BOOT_COMCONSOLE_SPEED=115200&amp;quot; &amp;gt;&amp;gt; /etc/make.conf&lt;br /&gt;
&lt;br /&gt;
5. add settings to /boot/loader.conf and /boot.config&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;-Dh&amp;quot; &amp;gt;&amp;gt; /boot.config&lt;br /&gt;
&lt;br /&gt;
echo &#039;console=&amp;quot;comconsole,vidconsole&amp;quot; \&lt;br /&gt;
boot_multicons=&amp;quot;YES&amp;quot; \&lt;br /&gt;
boot_serial=&amp;quot;YES&amp;quot; \&lt;br /&gt;
comconsole_speed=&amp;quot;115200&amp;quot;&#039; &amp;gt;&amp;gt; /boot/loader.conf&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. turn off all ttyv&#039;s except 0 and 1 in /etc/ttys&lt;br /&gt;
also turn on ttyd0, change type to vt100:&lt;br /&gt;
vi /etc/ttys&lt;br /&gt;
&lt;br /&gt;
ttyv2   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv3   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv4   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv5   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv6   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv7   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
# Serial terminals&lt;br /&gt;
# The &#039;dialup&#039; keyword identifies dialin lines to login, fingerd etc.&lt;br /&gt;
ttyd0   &amp;quot;/usr/libexec/getty std.9600&amp;quot;   vt100   on  secure&lt;br /&gt;
&lt;br /&gt;
kill -1 1&lt;br /&gt;
&lt;br /&gt;
on console server:&lt;br /&gt;
vi /etc/remote&lt;br /&gt;
(rename the port entry, e.g. to jail8, depending on which digi and port it&#039;s plugged into)&lt;br /&gt;
test serial console&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. populate hosts&lt;br /&gt;
echo &amp;quot;69.55.230.10 backup2&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;69.55.230.11 backup1&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;10.1.2.3 backup3&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
&lt;br /&gt;
8. put key in authorized_keys on backup3&lt;br /&gt;
cd&lt;br /&gt;
ssh-keygen -t dsa -b 1024 &lt;br /&gt;
(default location, leave password blank)&lt;br /&gt;
&lt;br /&gt;
Punch a hole in firewall1 to allow traffic to backup servers @ castle:&lt;br /&gt;
&lt;br /&gt;
ipfw add 99 allow ip from 66.181.18.0/27 to 69.55.230.10 22&lt;br /&gt;
ipfw add 95 allow ip from 66.181.18.0/27 to 69.55.230.11 22&lt;br /&gt;
&lt;br /&gt;
cat /root/.ssh/id_dsa.pub | ssh backup3 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
cat /root/.ssh/id_dsa.pub | ssh backup2 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
cat /root/.ssh/id_dsa.pub | ssh backup1 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
&lt;br /&gt;
confirm that you can ssh to backup3, backup2, and backup1 without being prompted for a password&lt;br /&gt;
&lt;br /&gt;
ssh backup3 hostname&lt;br /&gt;
&lt;br /&gt;
ssh backup2 hostname&lt;br /&gt;
&lt;br /&gt;
ssh backup1 hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. edit root&#039;s path and login script:&lt;br /&gt;
vi /root/.cshrc&lt;br /&gt;
&lt;br /&gt;
Change alias entries (add G):&lt;br /&gt;
alias la        ls -aG&lt;br /&gt;
alias lf        ls -FAG&lt;br /&gt;
alias ll        ls -lAG&lt;br /&gt;
alias ls        ls -AG&lt;br /&gt;
alias mbm       mb mount&lt;br /&gt;
alias mbu       mb umount&lt;br /&gt;
&lt;br /&gt;
and alter the prompt, set the following:&lt;br /&gt;
set prompt = &amp;quot;`/bin/hostname -s` %/# &amp;quot;&lt;br /&gt;
&lt;br /&gt;
11. install cvsup&lt;br /&gt;
cd /usr/ports/net/cvsup-without-gui &lt;br /&gt;
make install clean; rehash; mail -s &#039;cvs installed&#039; dave.boodman@vtext.com &amp;lt; /dev/null&lt;br /&gt;
&lt;br /&gt;
12. get latest sources for this release:&lt;br /&gt;
cd /usr/src &lt;br /&gt;
echo &amp;quot;*default host=cvsup4.freebsd.org\&lt;br /&gt;
*default base=/usr\&lt;br /&gt;
*default prefix=/usr\&lt;br /&gt;
*default release=cvs tag=RELENG_6_4\&lt;br /&gt;
*default delete use-rel-suffix\&lt;br /&gt;
*default compress\&lt;br /&gt;
src-all&amp;quot; &amp;gt; sup&lt;br /&gt;
&lt;br /&gt;
cvsup sup ; mail -s &#039;cvs sup done&#039; dave.boodman@vtext.com &amp;lt; /dev/null&lt;br /&gt;
&lt;br /&gt;
13. configure new kernel. &lt;br /&gt;
&lt;br /&gt;
cd /usr/src/sys/i386/conf &lt;br /&gt;
scp backup2:/mnt/data4/build/freebsd/firewall2-6.4 ./firewall2&lt;br /&gt;
&lt;br /&gt;
15. build, install kernel and world&lt;br /&gt;
&lt;br /&gt;
cd /boot&lt;br /&gt;
&lt;br /&gt;
mv kernel kernel.GENERIC&lt;br /&gt;
cd kernel.GENERIC&lt;br /&gt;
cd /usr/src&lt;br /&gt;
make buildkernel installkernel&lt;br /&gt;
&lt;br /&gt;
make buildworld ; mail -s &#039;buildworld done&#039; dave.boodman@vtext.com &amp;lt; /dev/null&lt;br /&gt;
(2450: 1:56min, supermicro: 59mins, 2950: 38mins)&lt;br /&gt;
make installworld &lt;br /&gt;
(2450: 3min, supermicro: 1min, 2950: :34)&lt;br /&gt;
mergemaster -i&lt;br /&gt;
&lt;br /&gt;
17. populate /etc/rc.conf with IPs and NFS settings&lt;br /&gt;
vi /etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
kern_securelevel_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
portmap_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
sendmail_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
usbd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
gateway_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
xntpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_client_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_reserved_port_only=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_flags=&amp;quot;-wW -a 10.1.2.2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ifconfig_bce1=&amp;quot;inet 10.1.2.2 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
fsck_y_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
background_fsck=&amp;quot;NO&amp;quot;&lt;br /&gt;
&lt;br /&gt;
defaultrouter=&amp;quot;66.181.18.2&amp;quot;&lt;br /&gt;
hostname=&amp;quot;firewall2.johncompanies.com&amp;quot;&lt;br /&gt;
ifconfig_bge0=&amp;quot;inet 66.181.18.3  netmask 255.255.255.224&amp;quot;&lt;br /&gt;
ifconfig_bge1=&amp;quot;inet 69.55.229.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_fxp0=&amp;quot;inet 10.1.2.2 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
sshd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
usbd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
20. reboot. Confirm new kernel is loaded&lt;br /&gt;
&lt;br /&gt;
uname -a&lt;br /&gt;
&lt;br /&gt;
21. update ports:&lt;br /&gt;
cd /usr/ports&lt;br /&gt;
echo &amp;quot;*default host=cvsup4.FreeBSD.org\&lt;br /&gt;
*default base=/usr\&lt;br /&gt;
*default prefix=/usr\&lt;br /&gt;
*default release=cvs tag=RELENG_6_4\&lt;br /&gt;
*default delete use-rel-suffix\&lt;br /&gt;
*default compress\&lt;br /&gt;
ports-all tag=.&amp;quot; &amp;gt; sup&lt;br /&gt;
&lt;br /&gt;
cvsup sup; mail -s &#039;cvs sup ports done&#039; dave.boodman@vtext.com &amp;lt; /dev/null&lt;br /&gt;
&lt;br /&gt;
22. Install raid mgmt tool&lt;br /&gt;
&lt;br /&gt;
# linux base&lt;br /&gt;
cd /usr/ports/devel/libtool22&lt;br /&gt;
make install base&lt;br /&gt;
&lt;br /&gt;
cd /usr/ports/emulators/linux_base-fc4&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
#linux-megamgr-5.20&lt;br /&gt;
cd /usr/ports/sysutils/linux-megamgr&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
# megarc-1.51&lt;br /&gt;
cd /usr/ports/sysutils/megarc&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
Test:&lt;br /&gt;
rehash; megarc -ldInfo -a0 -l0&lt;br /&gt;
&lt;br /&gt;
23. install rsync from ports&lt;br /&gt;
cd /usr/ports/net/rsync&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
choose default options&lt;br /&gt;
&lt;br /&gt;
25. install bb client&lt;br /&gt;
adduser&lt;br /&gt;
Username: bb&lt;br /&gt;
Full name: bb&lt;br /&gt;
Uid (Leave empty for default): 1984&lt;br /&gt;
Login group [bb]:&lt;br /&gt;
Login group is bb. Invite bb into other groups? []:&lt;br /&gt;
Login class [default]:&lt;br /&gt;
Shell (sh csh tcsh nologin) [sh]: &lt;br /&gt;
Home directory [/home/bb]:&lt;br /&gt;
Use password-based authentication? [yes]:&lt;br /&gt;
Use an empty password? (yes/no) [no]:&lt;br /&gt;
Use a random password? (yes/no) [no]: yes&lt;br /&gt;
Lock out the account after creation? [no]:&lt;br /&gt;
Username   : bb&lt;br /&gt;
Password   : &amp;lt;random&amp;gt;&lt;br /&gt;
Full Name  : bb&lt;br /&gt;
Uid        : 1984&lt;br /&gt;
Class      :&lt;br /&gt;
Groups     : bb&lt;br /&gt;
Home       : /home/bb&lt;br /&gt;
Shell      : /bin/sh&lt;br /&gt;
Locked     : no&lt;br /&gt;
OK? (yes/no): yes&lt;br /&gt;
&lt;br /&gt;
cd /usr/home/bb&lt;br /&gt;
scp backup2:/mnt/data4/build/bb/bb-freebsd.tar .&lt;br /&gt;
tar xvf bb-freebsd.tar&lt;br /&gt;
&lt;br /&gt;
edit /home/bb/bbc1.9e-btf/etc/bb-hosts with something like:&lt;br /&gt;
echo &amp;quot;69.55.230.2 mail.johncompanies.com # BBPAGER BBNET BBDISPLAY smtp ssh \&lt;br /&gt;
10.1.2.1 firewall2.johncompanies.com # ssh&amp;quot; &amp;gt; /home/bb/bbc1.9e-btf/etc/bb-hosts&lt;br /&gt;
&lt;br /&gt;
vi /home/bb/bbc1.9e-btf/ext/openfiles &lt;br /&gt;
MACHINE=&amp;quot;firewall2,johncompanies,com&amp;quot;      # HAS TO BE IN A,B,C FORM&lt;br /&gt;
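The a,b,c form is just the FQDN with dots swapped for commas, so it can be derived rather than typed by hand. A minimal sketch:&lt;br /&gt;

```shell
#!/bin/sh
# Derive the comma-separated MACHINE value bb expects from an FQDN.
fqdn=firewall2.johncompanies.com
MACHINE=$(printf '%s' "$fqdn" | tr '.' ',')
printf '%s\n' "$MACHINE"    # firewall2,johncompanies,com
```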
&lt;br /&gt;
cd /usr/home/bb/bbc1.9e-btf/etc&lt;br /&gt;
./bbchkcfg.sh &lt;br /&gt;
(y to questions)&lt;br /&gt;
./bbchkhosts.sh&lt;br /&gt;
(ignore ssh errors)&lt;br /&gt;
cd ../..&lt;br /&gt;
chown -R bb .&lt;br /&gt;
su bb&lt;br /&gt;
cd&lt;br /&gt;
cd bbc1.9e-btf/src&lt;br /&gt;
make; make install&lt;br /&gt;
cd ..&lt;br /&gt;
&lt;br /&gt;
vi /usr/home/bb/bbc1.9e-btf/bin/bb-cpu.sh&lt;br /&gt;
        $1 $TOPARGS &amp;gt; $BBTMP/TOP.$$&lt;br /&gt;
#        /usr/local/jail/bin/jtop &amp;gt; $BBTMP/TOP.$$&lt;br /&gt;
&lt;br /&gt;
./runbb.sh start&lt;br /&gt;
more BBOUT &lt;br /&gt;
(look for errors)&lt;br /&gt;
exit&lt;br /&gt;
&lt;br /&gt;
echo &#039;su - bb -c &amp;quot;cd /home/bb/bbc1.9e-btf; ./runbb.sh start&amp;quot;&#039; &amp;gt; /usr/local/etc/rc.d/bb.sh&lt;br /&gt;
chmod +x /usr/local/etc/rc.d/bb.sh&lt;br /&gt;
&lt;br /&gt;
Punch a hole in the firewall to allow it to communicate with bb monitor:&lt;br /&gt;
&lt;br /&gt;
ipfw add 00096 allow ip from 66.181.18.0/27 to 69.55.230.2&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
27. configure bb on mail:&lt;br /&gt;
vi /usr/home/bb/bbsrc/bb1.9e-btf/etc/bb-hosts&lt;br /&gt;
66.181.18.3 firewall2.johncompanies.com # ssh&lt;br /&gt;
&lt;br /&gt;
su bb&lt;br /&gt;
cd&lt;br /&gt;
bbsrc/bb/runbb.sh restart ; exit&lt;br /&gt;
&lt;br /&gt;
29. configure ntp&lt;br /&gt;
echo &amp;quot;server 10.1.2.1&amp;quot; &amp;gt; /etc/ntp.conf&lt;br /&gt;
&lt;br /&gt;
/usr/sbin/ntpd -p /var/run/ntpd.pid&lt;br /&gt;
sleep 2; ntpq -p&lt;br /&gt;
(confirm it’s able to reach our time server)&lt;br /&gt;
&lt;br /&gt;
echo &#039;/usr/sbin/ntpd -c /etc/ntp.conf -p /var/run/ntpd.pid -f /var/db/ntpd.drift&#039; &amp;gt; /usr/local/etc/rc.d/ntp.sh&lt;br /&gt;
chmod 0700 /usr/local/etc/rc.d/ntp.sh&lt;br /&gt;
&lt;br /&gt;
30. fwd and reverse lookups on ns1c&lt;br /&gt;
vr johncompanies.com&lt;br /&gt;
 (edit the PTR too)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
33. setup backups&lt;br /&gt;
echo &#039;#\!/bin/sh\&lt;br /&gt;
backupdir=/data/firewall2/current\&lt;br /&gt;
\&lt;br /&gt;
## ENTRY /etc &#039; &amp;gt; /usr/local/etc/backup.config&lt;br /&gt;
&lt;br /&gt;
on backup3:&lt;br /&gt;
setup backup dirs:&lt;br /&gt;
ssh backup3 mkdir -p /data/firewall2/current&lt;br /&gt;
&lt;br /&gt;
on backup3, add the system to &lt;br /&gt;
vi /usr/local/sbin/snapshot_archive&lt;br /&gt;
&lt;br /&gt;
scp 69.55.230.2:/usr/local/etc/rsync.backup /usr/local/etc/rsync.backup &lt;br /&gt;
vi /usr/local/etc/rsync.backup &lt;br /&gt;
(in the script, change backup1 to backup3)&lt;br /&gt;
&lt;br /&gt;
crontab -e&lt;br /&gt;
1 0 * * * /usr/local/etc/rsync.backup&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
34. mkdir /root/logs&lt;br /&gt;
&lt;br /&gt;
35. edit sshd_config for security&lt;br /&gt;
vi /etc/ssh/sshd_config&lt;br /&gt;
ListenAddress 66.181.18.3&lt;br /&gt;
ListenAddress 10.1.2.1&lt;br /&gt;
&lt;br /&gt;
kill -1 `cat /var/run/sshd.pid`&lt;br /&gt;
&lt;br /&gt;
35. raid chk&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; /usr/local/sbin/lsiraidchk&lt;br /&gt;
#!/usr/bin/perl&lt;br /&gt;
&lt;br /&gt;
my @out = split &amp;quot;\n&amp;quot;, `megarc -ldInfo -a0 -Lall|grep Status:`;&lt;br /&gt;
&lt;br /&gt;
foreach (@out) {&lt;br /&gt;
    if ($_ =~ /DEGRADED/) { $date = `date`; chomp $date; `echo &amp;quot;$date: RAID ARRAY DEGRADED&amp;quot; &amp;gt;&amp;gt; /var/log/messages`; }&lt;br /&gt;
#print $_;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
36. add crontab entries&lt;br /&gt;
crontab -e&lt;br /&gt;
30 3 * * * /usr/local/etc/rsync.backup&lt;br /&gt;
0 0 1 * * /sbin/ipfw zero&lt;br /&gt;
0 0 1 * * /sbin/ipfw del 3 4 5&lt;br /&gt;
59 23 30 * * /sbin/ipfw show &amp;gt; /tmp/ipfw_count&lt;br /&gt;
3 0 30 * * /sbin/ipfw show &amp;gt; /tmp/ipfw_count&lt;br /&gt;
0 3 * * * /etc/makefwrules.pl; /etc/makepiperules.pl; &lt;br /&gt;
*/5 * * * * /usr/local/sbin/lsiraidchk&lt;br /&gt;
&lt;br /&gt;
#10 0 * * * rm /var/spool/clientmqueue/*&lt;br /&gt;
&lt;br /&gt;
scp /etc/makefwrules.pl user@64.163.14.48:~&lt;br /&gt;
scp /etc/makepiperules.pl user@64.163.14.48:~&lt;br /&gt;
mv /home/user/makefwrules.pl /etc&lt;br /&gt;
mv /home/user/makepiperules.pl /etc&lt;br /&gt;
touch /etc/firewall.sh&lt;br /&gt;
mkdir /etc/oldrules/&lt;br /&gt;
&lt;br /&gt;
other binaries&lt;br /&gt;
	&lt;br /&gt;
scp /usr/local/bin/rulemaker user@64.163.14.48:~&lt;br /&gt;
mv ~user/rulemaker /usr/local/sbin&lt;br /&gt;
scp ~user/Sendmail.pm user@64.163.14.48:~&lt;br /&gt;
scp ~user/doswatch.pl user@64.163.14.48:~&lt;br /&gt;
&lt;br /&gt;
Setup basic ruleset&lt;br /&gt;
&lt;br /&gt;
ipfw add 00009 count udp from any to any&lt;br /&gt;
ipfw add 00010 allow tcp from any to any established&lt;br /&gt;
ipfw add 00012 deny tcp from any to any tcpflags syn tcpoptions !mss&lt;br /&gt;
ipfw add 00012 deny icmp from any to any icmptypes 4,5,9,10,12,13,14,15,16,17,18&lt;br /&gt;
ipfw add 00012 deny tcp from any to any tcpflags syn,fin&lt;br /&gt;
ipfw add 00012 deny tcp from any to any tcpflags fin,psh,rst,urg&lt;br /&gt;
ipfw add 00012 allow icmp from any to any&lt;br /&gt;
ipfw add 00014 deny tcp from any to any dst-port 135&lt;br /&gt;
ipfw add 00150 skipto 65535 ip from any to any via em1 in&lt;br /&gt;
&lt;br /&gt;
IPKVM3:&lt;br /&gt;
00098 allow ip from { 69.55.230.6 or 69.55.230.7 } to 69.55.230.10 dst-port 139&lt;br /&gt;
00098 deny ip from any to 69.55.230.10 dst-port 139&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= firewall3 =&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
&lt;br /&gt;
This machine is the backup firewall for the network at i2b. &lt;br /&gt;
&lt;br /&gt;
* Location: i2b, cab ?&lt;br /&gt;
* OS: FreeBSD 9.1 amd64&lt;br /&gt;
* Networking: Priv IP: 10.1.2.5, Pub IPs: 66.181.18.3 (external), 69.55.229.1 &amp;amp; 69.55.231.1 (internal). It has 3 network connections (2 onboard, 1 PCI) connecting to the external, internal and private networks. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The internal network NIC is the left one on the motherboard (69.55.229.1/24, ...).&lt;br /&gt;
The external network NIC is the right one on the motherboard (66.181.18.3/28).&lt;br /&gt;
The PCI ethernet card is connected to our private network (10.1.2.5/24).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Hardware: 2 SCSI SCA drive bays (2 columns of 3, drive 0 top left, drive 1 just below) all hot-swap. Dual power supply. &lt;br /&gt;
* Drives: 160 GB (2 x 160GB) RAID1 array running on an LSI MegaRAID SCSI 320 PCI RAID card.&lt;br /&gt;
&lt;br /&gt;
== Services Provided ==&lt;br /&gt;
* firewall (ipfw)&lt;br /&gt;
* bigbrother&lt;br /&gt;
&lt;br /&gt;
== Firewall Rule Configuration ==&lt;br /&gt;
&lt;br /&gt;
See [[FreeBSD_Reference#Firewall_Rule_Configuration|Firewall Rule Configuration]] for more discussion on how to actually manipulate firewall rules.&lt;br /&gt;
&lt;br /&gt;
== Disaster Recovery ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To put the backup firewall3 into service:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Move the internal cable (to our networks) from firewall2 to em1 which is the left most ethernet port (69.55.229.1).&lt;br /&gt;
Move the external cable (to outside world) from firewall2 to em0 which is the port to the right on the motherboard (66.181.18.3).&lt;br /&gt;
The PCI ethernet port (fxp0) should already be connected to private network (10.1.2.5).&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here&#039;s what you need to put in /etc/rc.conf to get a firewall going (as far as routes and IPs)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
kern_securelevel_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
portmap_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
sendmail_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
usbd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
gateway_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
xntpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_client_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_reserved_port_only=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_flags=&amp;quot;-wW -a 10.1.2.5&amp;quot;&lt;br /&gt;
&lt;br /&gt;
fsck_y_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
background_fsck=&amp;quot;NO&amp;quot;&lt;br /&gt;
&lt;br /&gt;
defaultrouter=&amp;quot;66.181.18.2&amp;quot;&lt;br /&gt;
hostname=&amp;quot;firewall3.johncompanies.com&amp;quot;&lt;br /&gt;
ifconfig_em0=&amp;quot;inet 66.181.18.3  netmask 255.255.255.224&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ifconfig_em1=&amp;quot;inet 69.55.229.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em1_alias0=&amp;quot;inet 69.55.231.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# ifconfig_em1_alias1=&amp;quot;inet 65.50.228.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
# ifconfig_em1_alias2=&amp;quot;inet 65.50.229.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
# ifconfig_em1_alias3=&amp;quot;inet 65.50.230.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
# ifconfig_em1_alias4=&amp;quot;inet 65.50.231.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
# ifconfig_em1_alias5=&amp;quot;inet 65.50.232.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
# ifconfig_em1_alias6=&amp;quot;inet 65.50.233.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
# ifconfig_em1_alias7=&amp;quot;inet 65.50.234.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
# ifconfig_em1_alias8=&amp;quot;inet 65.50.235.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ifconfig_fxp0=&amp;quot;inet 10.1.2.5 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
sshd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
usbd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Cronjobs ==&lt;br /&gt;
 30 3 * * * /usr/local/etc/rsync.backup&lt;br /&gt;
Backup to backup3&lt;br /&gt;
&lt;br /&gt;
 0 0 1 * * /sbin/ipfw zero&lt;br /&gt;
 0 0 1 * * /sbin/ipfw del 3 4 5&lt;br /&gt;
Reset counters and remove pipe rules on the 1st of the month. Be careful when numbering a rule 3, 4, or 5: those rule numbers are deleted on the 1st, so don&#039;t use them for anything that isn&#039;t a temporary traffic cap. &lt;br /&gt;
&lt;br /&gt;
 59 23 30 * * /sbin/ipfw show &amp;gt; /tmp/ipfw_count&lt;br /&gt;
 3 0 30 * * /sbin/ipfw show &amp;gt; /tmp/ipfw_count&lt;br /&gt;
Capture counts periodically&lt;br /&gt;
&lt;br /&gt;
 0 3 * * * /etc/makefwrules.pl; /etc/makepiperules.pl;&lt;br /&gt;
These scripts create &amp;lt;tt&amp;gt;/etc/firewall.sh&amp;lt;/tt&amp;gt;, which contains all the firewall and pipe rules in place at the time they were run.&lt;br /&gt;
&lt;br /&gt;
 */5 * * * * /usr/local/sbin/lsiraidchk&lt;br /&gt;
Checking the health of the RAID array&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== DOS attacks ==&lt;br /&gt;
&lt;br /&gt;
See [[FreeBSD_Reference#Handling_a_DoS_attack|Handling a DoS attack]] regarding how to handle a DOS attack.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== build ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;partition map:&lt;br /&gt;
/ 58g&lt;br /&gt;
swap 4g&lt;br /&gt;
/var 512m&lt;br /&gt;
/tmp 512m&lt;br /&gt;
/usr 5.5g&lt;br /&gt;
&lt;br /&gt;
4. edit /etc/make.conf &lt;br /&gt;
echo &amp;quot;WITHOUT_X11=yes \&lt;br /&gt;
KERNCONF=firewall3 \&lt;br /&gt;
BOOT_COMCONSOLE_SPEED=115200&amp;quot; &amp;gt;&amp;gt; /etc/make.conf&lt;br /&gt;
&lt;br /&gt;
5. add settings to /boot/loader.conf and /boot.config&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;-Dh&amp;quot; &amp;gt;&amp;gt; /boot.config&lt;br /&gt;
&lt;br /&gt;
echo &#039;console=&amp;quot;comconsole,vidconsole&amp;quot; \&lt;br /&gt;
boot_multicons=&amp;quot;YES&amp;quot; \&lt;br /&gt;
boot_serial=&amp;quot;YES&amp;quot; \&lt;br /&gt;
comconsole_speed=&amp;quot;115200&amp;quot;&#039; &amp;gt;&amp;gt; /boot/loader.conf&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6. turn off all ttyv&#039;s except 0 and 1 in /etc/ttys&lt;br /&gt;
also turn on ttyd0, change type to vt100:&lt;br /&gt;
vi /etc/ttys&lt;br /&gt;
&lt;br /&gt;
ttyv2   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv3   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv4   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv5   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv6   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
ttyv7   &amp;quot;/usr/libexec/getty Pc&amp;quot;         cons25  off secure&lt;br /&gt;
# Serial terminals&lt;br /&gt;
# The &#039;dialup&#039; keyword identifies dialin lines to login, fingerd etc.&lt;br /&gt;
ttyd0   &amp;quot;/usr/libexec/getty std.9600&amp;quot;   vt100   on  secure&lt;br /&gt;
&lt;br /&gt;
kill -1 1&lt;br /&gt;
&lt;br /&gt;
on console server:&lt;br /&gt;
vi /etc/remote&lt;br /&gt;
(rename the port entry, e.g. to jail8, depending on which digi and port it&#039;s plugged into)&lt;br /&gt;
test serial console&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7. populate hosts&lt;br /&gt;
echo &amp;quot;69.55.230.10 backup2&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;69.55.230.11 backup1&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
echo &amp;quot;10.1.2.3 backup3&amp;quot; &amp;gt;&amp;gt; /etc/hosts&lt;br /&gt;
&lt;br /&gt;
8. put key in authorized_keys on the backup servers&lt;br /&gt;
cd&lt;br /&gt;
ssh-keygen -t dsa -b 1024 &lt;br /&gt;
(default location, leave password blank)&lt;br /&gt;
&lt;br /&gt;
Punch a hole in firewall1 to allow traffic to backup servers @ castle:&lt;br /&gt;
&lt;br /&gt;
ipfw add 99 allow ip from 66.181.18.0/27 to 69.55.230.10 22&lt;br /&gt;
ipfw add 95 allow ip from 66.181.18.0/27 to 69.55.230.11 22&lt;br /&gt;
&lt;br /&gt;
cat /root/.ssh/id_dsa.pub | ssh backup3 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
cat /root/.ssh/id_dsa.pub | ssh backup2 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
cat /root/.ssh/id_dsa.pub | ssh backup1 &#039;cat - &amp;gt;&amp;gt; /root/.ssh/authorized_keys&#039; &lt;br /&gt;
&lt;br /&gt;
confirm that you can ssh to backup1, backup2 and backup3 without being prompted for a password&lt;br /&gt;
&lt;br /&gt;
ssh backup3 hostname&lt;br /&gt;
&lt;br /&gt;
ssh backup2 hostname&lt;br /&gt;
&lt;br /&gt;
ssh backup1 hostname&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
10. edit root&#039;s path and login script:&lt;br /&gt;
vi /root/.cshrc&lt;br /&gt;
&lt;br /&gt;
Change alias entries (add G):&lt;br /&gt;
alias la        ls -aG&lt;br /&gt;
alias lf        ls -FAG&lt;br /&gt;
alias ll        ls -lAG&lt;br /&gt;
alias ls        ls -AG&lt;br /&gt;
alias mbm       mb mount&lt;br /&gt;
alias mbu       mb umount&lt;br /&gt;
&lt;br /&gt;
and alter the prompt, set the following:&lt;br /&gt;
set prompt = &amp;quot;`/bin/hostname -s` %/# &amp;quot;&lt;br /&gt;
&lt;br /&gt;
11. install cvsup&lt;br /&gt;
cd /usr/ports/net/cvsup-without-gui &lt;br /&gt;
make install clean; rehash; mail -s &#039;cvs installed&#039; 8583619553@vtext.com &amp;lt; /dev/null&lt;br /&gt;
&lt;br /&gt;
12. get latest sources for this release:&lt;br /&gt;
cd /usr/src &lt;br /&gt;
echo &amp;quot;*default host=cvsup4.freebsd.org\&lt;br /&gt;
*default base=/usr\&lt;br /&gt;
*default prefix=/usr\&lt;br /&gt;
*default release=cvs tag=RELENG_9_1\&lt;br /&gt;
*default delete use-rel-suffix\&lt;br /&gt;
*default compress\&lt;br /&gt;
src-all&amp;quot; &amp;gt; sup&lt;br /&gt;
&lt;br /&gt;
cvsup sup ; mail -s &#039;cvs sup done&#039; 8583619553@vtext.com &amp;lt; /dev/null&lt;br /&gt;
&lt;br /&gt;
13. configure new kernel. &lt;br /&gt;
&lt;br /&gt;
cd /usr/src/sys/amd64/conf &lt;br /&gt;
scp backup2:/mnt/data4/build/freebsd/firewall3-9.1 ./firewall3&lt;br /&gt;
&lt;br /&gt;
15. build, install kernel and world&lt;br /&gt;
&lt;br /&gt;
cd /boot&lt;br /&gt;
&lt;br /&gt;
mv kernel kernel.GENERIC&lt;br /&gt;
cd kernel.GENERIC&lt;br /&gt;
cd /usr/src&lt;br /&gt;
make buildkernel installkernel&lt;br /&gt;
&lt;br /&gt;
make buildworld ; mail -s &#039;buildworld done&#039; 8583619553@vtext.com &amp;lt; /dev/null&lt;br /&gt;
(supermicro: 2:15 mins, 2950: 38? mins)&lt;br /&gt;
make installworld &lt;br /&gt;
(2450: 3min, supermicro: 1min, 2950: :34)&lt;br /&gt;
mergemaster -i&lt;br /&gt;
&lt;br /&gt;
17. populate /etc/rc.conf with IPs and NFS settings&lt;br /&gt;
vi /etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
kern_securelevel_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
portmap_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
sendmail_enable=&amp;quot;NO&amp;quot;&lt;br /&gt;
usbd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
gateway_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
xntpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_client_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
nfs_reserved_port_only=&amp;quot;YES&amp;quot;&lt;br /&gt;
&lt;br /&gt;
fsck_y_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
background_fsck=&amp;quot;NO&amp;quot;&lt;br /&gt;
&lt;br /&gt;
hostname=&amp;quot;firewall3.johncompanies.com&amp;quot;&lt;br /&gt;
#  external network&lt;br /&gt;
ifconfig_em0=&amp;quot;inet 66.181.18.3  netmask 255.255.255.224&amp;quot;&lt;br /&gt;
&lt;br /&gt;
#  internal network&lt;br /&gt;
ifconfig_em1=&amp;quot;inet 69.55.229.1 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em1_alias0=&amp;quot;inet 69.55.231.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
ifconfig_em1_alias1=&amp;quot;inet 65.50.228.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em1_alias2=&amp;quot;inet 65.50.229.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em1_alias3=&amp;quot;inet 65.50.230.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em1_alias4=&amp;quot;inet 65.50.231.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em1_alias5=&amp;quot;inet 65.50.232.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em1_alias6=&amp;quot;inet 65.50.233.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em1_alias7=&amp;quot;inet 65.50.234.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
ifconfig_em1_alias8=&amp;quot;inet 65.50.235.1  netmask 255.255.255.0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
defaultrouter=&amp;quot;66.181.18.2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
#  private network&lt;br /&gt;
ifconfig_fxp0=&amp;quot;inet 10.1.2.5 netmask 255.255.255.0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
inetd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
inetd_flags=&amp;quot;-wW -a 10.1.2.5&amp;quot;&lt;br /&gt;
&lt;br /&gt;
sshd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
ntpd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
# powerd_enable=&amp;quot;YES&amp;quot;&lt;br /&gt;
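The eight 65.50.x alias entries above follow a simple pattern; as a throwaway sketch, they can be generated rather than typed (alias0 is the 69.55.231.1 entry, so these start at alias1 — paste the output into /etc/rc.conf):

```shell
# print the ifconfig_em1_alias1..alias8 lines for 65.50.228-235
for net in $(seq 228 235); do
  printf 'ifconfig_em1_alias%d="inet 65.50.%d.1  netmask 255.255.255.0"\n' \
    $((net - 227)) "$net"
done
```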
&lt;br /&gt;
&lt;br /&gt;
20. reboot. Confirm new kernel is loaded&lt;br /&gt;
&lt;br /&gt;
uname -a&lt;br /&gt;
&lt;br /&gt;
21. update ports:&lt;br /&gt;
cd /usr/ports&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;*default host=cvsup4.FreeBSD.org\&lt;br /&gt;
*default base=/usr\&lt;br /&gt;
*default prefix=/usr\&lt;br /&gt;
*default release=cvs tag=RELENG_9_1\&lt;br /&gt;
*default delete use-rel-suffix\&lt;br /&gt;
*default compress\&lt;br /&gt;
ports-all tag=.&amp;quot; &amp;gt; sup&lt;br /&gt;
&lt;br /&gt;
cvsup sup; mail -s &#039;cvs sup ports done&#039; 8583619553@vtext.com &amp;lt; /dev/null&lt;br /&gt;
&lt;br /&gt;
22. Install raid mgmt tool&lt;br /&gt;
&lt;br /&gt;
# linux base&lt;br /&gt;
cd /usr/ports/devel/libtool22&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
cd /usr/ports/emulators/linux_base-fc4&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
scp backup2:/d4/build/3ware/tw_cli-freebsd-x86_64-9.5.0.1.tgz /usr/local/sbin&lt;br /&gt;
cd /usr/local/sbin &lt;br /&gt;
tar xzvf tw_cli-freebsd-x86_64-9.5.0.1.tgz&lt;br /&gt;
rm tw_cli-freebsd-x86_64-9.5.0.1.tgz&lt;br /&gt;
&lt;br /&gt;
23. install rsync from ports&lt;br /&gt;
cd /usr/ports/net/rsync&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
choose default options&lt;br /&gt;
&lt;br /&gt;
25. install bb client&lt;br /&gt;
adduser&lt;br /&gt;
Username: bb&lt;br /&gt;
Full name: bb&lt;br /&gt;
Uid (Leave empty for default): 1984&lt;br /&gt;
Login group [bb]:&lt;br /&gt;
Login group is bb. Invite bb into other groups? []:&lt;br /&gt;
Login class [default]:&lt;br /&gt;
Shell (sh csh tcsh nologin) [sh]: &lt;br /&gt;
Home directory [/home/bb]:&lt;br /&gt;
Use password-based authentication? [yes]:&lt;br /&gt;
Use an empty password? (yes/no) [no]:&lt;br /&gt;
Use a random password? (yes/no) [no]: yes&lt;br /&gt;
Lock out the account after creation? [no]:&lt;br /&gt;
Username   : bb&lt;br /&gt;
Password   : &amp;lt;random&amp;gt;&lt;br /&gt;
Full Name  : bb&lt;br /&gt;
Uid        : 1984&lt;br /&gt;
Class      :&lt;br /&gt;
Groups     : bb&lt;br /&gt;
Home       : /home/bb&lt;br /&gt;
Shell      : /bin/sh&lt;br /&gt;
Locked     : no&lt;br /&gt;
OK? (yes/no): yes&lt;br /&gt;
&lt;br /&gt;
cd /usr/home/bb&lt;br /&gt;
scp backup2:/mnt/data4/build/bb/bb-freebsd.tar .&lt;br /&gt;
tar xvf bb-freebsd.tar&lt;br /&gt;
&lt;br /&gt;
edit /home/bb/bbc1.9e-btf/etc/bb-hosts with something like:&lt;br /&gt;
echo &amp;quot;69.55.230.2 mail.johncompanies.com # BBPAGER BBNET BBDISPLAY smtp ssh \&lt;br /&gt;
10.1.2.5 firewall3.johncompanies.com # ssh&amp;quot; &amp;gt; /home/bb/bbc1.9e-btf/etc/bb-hosts&lt;br /&gt;
&lt;br /&gt;
vi /home/bb/bbc1.9e-btf/ext/openfiles &lt;br /&gt;
MACHINE=&amp;quot;firewall3,johncompanies,com&amp;quot;      # HAS TO BE IN A,B,C FORM&lt;br /&gt;
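The comma form BB wants is just the FQDN with dots swapped for commas, so it can be derived instead of typed by hand; a one-liner sketch:

```shell
# bb's MACHINE variable is the hostname in A,B,C form
echo 'firewall3.johncompanies.com' | tr '.' ','
```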
&lt;br /&gt;
cd /usr/home/bb/bbc1.9e-btf/etc&lt;br /&gt;
./bbchkcfg.sh &lt;br /&gt;
(y to questions)&lt;br /&gt;
./bbchkhosts.sh&lt;br /&gt;
(ignore ssh errors)&lt;br /&gt;
cd ../..&lt;br /&gt;
chown -R bb .&lt;br /&gt;
su bb&lt;br /&gt;
cd&lt;br /&gt;
cd bbc1.9e-btf/src&lt;br /&gt;
make; make install&lt;br /&gt;
cd ..&lt;br /&gt;
&lt;br /&gt;
vi /usr/home/bb/bbc1.9e-btf/bin/bb-cpu.sh&lt;br /&gt;
        $1 $TOPARGS &amp;gt; $BBTMP/TOP.$$&lt;br /&gt;
#        /usr/local/jail/bin/jtop &amp;gt; $BBTMP/TOP.$$&lt;br /&gt;
&lt;br /&gt;
./runbb.sh start&lt;br /&gt;
more BBOUT &lt;br /&gt;
(look for errors)&lt;br /&gt;
exit&lt;br /&gt;
&lt;br /&gt;
echo &#039;su - bb -c &amp;quot;cd /home/bb/bbc1.9e-btf; ./runbb.sh start&amp;quot;&#039; &amp;gt; /usr/local/etc/rc.d/bb.sh&lt;br /&gt;
chmod +x /usr/local/etc/rc.d/bb.sh&lt;br /&gt;
&lt;br /&gt;
Punch a hole in the firewall to allow it to communicate with bb monitor:&lt;br /&gt;
&lt;br /&gt;
ipfw add 00096 allow ip from 66.181.18.0/27 to 69.55.230.2&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
27. configure bb on mail:&lt;br /&gt;
vi /usr/home/bb/bbsrc/bb1.9e-btf/etc/bb-hosts&lt;br /&gt;
66.181.18.3 firewall3.johncompanies.com # ssh&lt;br /&gt;
&lt;br /&gt;
su bb&lt;br /&gt;
cd&lt;br /&gt;
bbsrc/bb/runbb.sh restart ; exit&lt;br /&gt;
&lt;br /&gt;
29. configure ntp&lt;br /&gt;
echo &amp;quot;server 10.1.2.1&amp;quot; &amp;gt; /etc/ntp.conf&lt;br /&gt;
&lt;br /&gt;
/usr/sbin/ntpd -p /var/run/ntpd.pid&lt;br /&gt;
sleep 2; ntpq -p&lt;br /&gt;
(confirm it&#039;s able to reach our time server)&lt;br /&gt;
&lt;br /&gt;
echo &#039;/usr/sbin/ntpd -c /etc/ntp.conf -p /var/run/ntpd.pid -f /var/db/ntpd.drift&#039; &amp;gt; /usr/local/etc/rc.d/ntp.sh&lt;br /&gt;
chmod 0700 /usr/local/etc/rc.d/ntp.sh&lt;br /&gt;
&lt;br /&gt;
30. fwd and reverse lookups on ns1c&lt;br /&gt;
vr johncompanies.com&lt;br /&gt;
 (edit the PTR too)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
33. setup backups&lt;br /&gt;
echo &#039;#\!/bin/sh\&lt;br /&gt;
backupdir=/data/firewall2/current\&lt;br /&gt;
\&lt;br /&gt;
## ENTRY /etc &#039; &amp;gt; /usr/local/etc/backup.config&lt;br /&gt;
&lt;br /&gt;
on backup3:&lt;br /&gt;
setup backup dirs:&lt;br /&gt;
ssh backup3 mkdir -p /data/firewall2/current&lt;br /&gt;
&lt;br /&gt;
on backup3, add the system to the snapshot archive script:&lt;br /&gt;
vi /usr/local/sbin/snapshot_archive&lt;br /&gt;
&lt;br /&gt;
scp 69.55.230.2:/usr/local/etc/rsync.backup /usr/local/etc/rsync.backup &lt;br /&gt;
vi /usr/local/etc/rsync.backup &lt;br /&gt;
(change backup1 to backup3)&lt;br /&gt;
&lt;br /&gt;
crontab -e&lt;br /&gt;
1 0 * * * /usr/local/etc/rsync.backup&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
34. mkdir /root/logs&lt;br /&gt;
&lt;br /&gt;
35. edit sshd_config for security&lt;br /&gt;
vi /etc/ssh/sshd_config&lt;br /&gt;
ListenAddress 66.181.18.3&lt;br /&gt;
ListenAddress 10.1.2.5&lt;br /&gt;
&lt;br /&gt;
kill -1 `cat /var/run/sshd.pid`&lt;br /&gt;
&lt;br /&gt;
35. raid chk&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; /usr/local/sbin/lsiraidchk&lt;br /&gt;
#!/usr/bin/perl&lt;br /&gt;
&lt;br /&gt;
my @out = split &amp;quot;\n&amp;quot;, `megarc -ldInfo -a0 -Lall|grep Status:`;&lt;br /&gt;
&lt;br /&gt;
foreach (@out) {&lt;br /&gt;
    if ($_ =~ /DEGRADED/) { $date = `date`; chomp $date; `echo &amp;quot;$date: RAID ARRAY DEGRADED&amp;quot; &amp;gt;&amp;gt; /var/log/messages`; }&lt;br /&gt;
#print $_;&lt;br /&gt;
}&lt;br /&gt;
(^D to end)&lt;br /&gt;
chmod +x /usr/local/sbin/lsiraidchk&lt;br /&gt;
&lt;br /&gt;
36. add crontab entries&lt;br /&gt;
crontab -e&lt;br /&gt;
30 3 * * * /usr/local/etc/rsync.backup&lt;br /&gt;
0 0 1 * * /sbin/ipfw zero&lt;br /&gt;
0 0 1 * * /sbin/ipfw del 3 4 5&lt;br /&gt;
59 23 30 * * /sbin/ipfw show &amp;gt; /tmp/ipfw_count&lt;br /&gt;
3 0 30 * * /sbin/ipfw show &amp;gt; /tmp/ipfw_count&lt;br /&gt;
0 3 * * * /etc/makefwrules.pl; /etc/makepiperules.pl; &lt;br /&gt;
*/5 * * * * /usr/local/sbin/lsiraidchk&lt;br /&gt;
&lt;br /&gt;
#10 0 * * * rm /var/spool/clientmqueue/*&lt;br /&gt;
&lt;br /&gt;
scp /etc/makefwrules.pl user@64.163.14.48:~&lt;br /&gt;
scp /etc/makepiperules.pl user@64.163.14.48:~&lt;br /&gt;
mv /home/user/makefwrules.pl /etc&lt;br /&gt;
mv /home/user/makepiperules.pl /etc&lt;br /&gt;
touch /etc/firewall.sh&lt;br /&gt;
mkdir /etc/oldrules/&lt;br /&gt;
&lt;br /&gt;
other binaries&lt;br /&gt;
	&lt;br /&gt;
scp /usr/local/bin/rulemaker user@64.163.14.48:~&lt;br /&gt;
mv ~user/rulemaker /usr/local/sbin&lt;br /&gt;
scp ~user/Sendmail.pm user@64.163.14.48:~&lt;br /&gt;
scp ~user/doswatch.pl user@64.163.14.48:~&lt;br /&gt;
&lt;br /&gt;
Setup basic ruleset&lt;br /&gt;
&lt;br /&gt;
ipfw add 00009 count udp from any to any&lt;br /&gt;
ipfw add 00010 allow tcp from any to any established&lt;br /&gt;
ipfw add 00012 deny tcp from any to any tcpflags syn tcpoptions !mss&lt;br /&gt;
ipfw add 00012 deny icmp from any to any icmptypes 4,5,9,10,12,13,14,15,16,17,18&lt;br /&gt;
ipfw add 00012 deny tcp from any to any tcpflags syn,fin&lt;br /&gt;
ipfw add 00012 deny tcp from any to any tcpflags fin,psh,rst,urg&lt;br /&gt;
ipfw add 00012 allow icmp from any to any&lt;br /&gt;
ipfw add 00014 deny tcp from any to any dst-port 135&lt;br /&gt;
ipfw add 00150 skipto 65535 ip from any to any via em1 in&lt;br /&gt;
&lt;br /&gt;
IPKVM3:&lt;br /&gt;
00098 allow ip from { 69.55.230.6 or 69.55.230.7 } to 69.55.230.10 dst-port 139&lt;br /&gt;
00098 deny ip from any to 69.55.230.10 dst-port 139&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= wiki =&lt;br /&gt;
&lt;br /&gt;
The wiki (MediaWiki) runs on nat2, in a jail bound to 69.55.229.8&lt;br /&gt;
&lt;br /&gt;
The backup wiki lives on virt13 in CT 5 / 69.55.230.18&lt;br /&gt;
&lt;br /&gt;
== Setup jail ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir /mnt/data1/wiki-dir&lt;br /&gt;
cd /usr/src&lt;br /&gt;
make installworld DESTDIR=/mnt/data1/wiki-dir&lt;br /&gt;
cd etc&lt;br /&gt;
make distribution DESTDIR=/mnt/data1/wiki-dir&lt;br /&gt;
&lt;br /&gt;
mount -t devfs devfs /mnt/data1/wiki-dir/dev&lt;br /&gt;
devfs -m /mnt/data1/wiki-dir/dev rule -s 3 applyset &lt;br /&gt;
&lt;br /&gt;
cd /mnt/data1/wiki-dir&lt;br /&gt;
&lt;br /&gt;
ln -sf dev/null kernel&lt;br /&gt;
&lt;br /&gt;
scp jail9:/usr/local/sbin/jkill /mnt/data1/wiki-dir/sbin&lt;br /&gt;
&lt;br /&gt;
jail /mnt/data1/wiki-dir wiki.johncompanies.com 69.55.229.8 /bin/sh&lt;br /&gt;
csh&lt;br /&gt;
&lt;br /&gt;
touch /etc/fstab&lt;br /&gt;
echo &#039;network_interfaces=&amp;quot;&amp;quot;\&lt;br /&gt;
hostname=&amp;quot;wiki.johncompanies.com&amp;quot;\&lt;br /&gt;
kern_securelevel_enable=&amp;quot;NO&amp;quot;\&lt;br /&gt;
sendmail_enable=&amp;quot;YES&amp;quot;\&lt;br /&gt;
sshd_enable=&amp;quot;YES&amp;quot;&#039; &amp;gt; /etc/rc.conf&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;nameserver 69.55.229.3\&lt;br /&gt;
nameserver 69.55.225.225&amp;quot; &amp;gt;&amp;gt; /etc/resolv.conf &lt;br /&gt;
&lt;br /&gt;
vi /etc/crontab&lt;br /&gt;
 &lt;br /&gt;
(remove the adjkerntz lines )&lt;br /&gt;
 &lt;br /&gt;
vi /etc/periodic/security/100.chksetuid&lt;br /&gt;
 &lt;br /&gt;
replace: MP=`mount -t ufs | grep -v &amp;quot; nosuid&amp;quot; | awk &#039;{ print $3 }&#039; | sort`&lt;br /&gt;
 with: MP=&#039;/&#039; (use single quotes) &lt;br /&gt;
&lt;br /&gt;
mkdir -p /usr/compat/linux/dev&lt;br /&gt;
 &lt;br /&gt;
adduser&lt;br /&gt;
&lt;br /&gt;
Username: user&lt;br /&gt;
Full name: user&lt;br /&gt;
Uid (Leave empty for default):&lt;br /&gt;
Login group [user]:&lt;br /&gt;
Login group is user. Invite user into other groups? []: wheel&lt;br /&gt;
Login class [default]:&lt;br /&gt;
Shell (sh csh tcsh nologin) [sh]:&lt;br /&gt;
Home directory [/home/user]:&lt;br /&gt;
Home directory permissions (Leave empty for default):&lt;br /&gt;
Use password-based authentication? [yes]:&lt;br /&gt;
Use an empty password? (yes/no) [no]:&lt;br /&gt;
Use a random password? (yes/no) [no]: y&lt;br /&gt;
Lock out the account after creation? [no]:&lt;br /&gt;
Username   : user&lt;br /&gt;
Password   : &amp;lt;random&amp;gt;&lt;br /&gt;
Full Name  : user&lt;br /&gt;
Uid        : 1001&lt;br /&gt;
Class      :&lt;br /&gt;
Groups     : user&lt;br /&gt;
Home       : /home/user&lt;br /&gt;
Home Mode  :&lt;br /&gt;
Shell      : /bin/sh&lt;br /&gt;
Locked     : no&lt;br /&gt;
OK? (yes/no): y&lt;br /&gt;
adduser: INFO: Successfully added (user) to the user database.&lt;br /&gt;
adduser: INFO: Password for (user) is: 901gmYjO&lt;br /&gt;
Add another user? (yes/no): n&lt;br /&gt;
Goodbye! &lt;br /&gt;
&lt;br /&gt;
vi /usr/home/user/.profile &lt;br /&gt;
TERM=vt100;     export TERM&lt;br /&gt;
 &lt;br /&gt;
tzsetup&lt;br /&gt;
 &lt;br /&gt;
newaliases &lt;br /&gt;
 &lt;br /&gt;
rm /sbin/halt /sbin/reboot&lt;br /&gt;
ln /sbin/jkill /sbin/halt&lt;br /&gt;
ln /sbin/jkill /sbin/reboot&lt;br /&gt;
 &lt;br /&gt;
vi /etc/syslog.conf&lt;br /&gt;
#*.err;kern.warning;auth.notice;mail.crit               /dev/console&lt;br /&gt;
*.err;kern.warning;auth.notice;mail.crit                /var/log/messages &lt;br /&gt;
&lt;br /&gt;
exit&lt;br /&gt;
exit&lt;br /&gt;
 &lt;br /&gt;
cd libexec&lt;br /&gt;
chflags noschg ld-elf32.so.1&lt;br /&gt;
chflags noschg ld-elf.so.1&lt;br /&gt;
mv ld-elf32.so.1 ld-elf32.so.1-orig&lt;br /&gt;
ln ld-elf.so.1 ld-elf32.so.1&lt;br /&gt;
chflags schg ld-elf.so.1&lt;br /&gt;
chflags schg ld-elf32.so.1&lt;br /&gt;
 &lt;br /&gt;
cp -r /usr/ports /mnt/data1/wiki-dir/usr &lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; /usr/local/etc/rc.d/wiki.sh&lt;br /&gt;
mount -t devfs devfs /mnt/data1/wiki-dir/dev/&lt;br /&gt;
devfs -m /mnt/data1/wiki-dir/dev rule -s 3 applyset&lt;br /&gt;
jail /mnt/data1/wiki-dir wiki.johncompanies.com 69.55.229.8 /bin/sh /etc/rc&lt;br /&gt;
&lt;br /&gt;
chmod 0700 /usr/local/etc/rc.d/wiki.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== mediawiki setup ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
cd /usr/ports/net/rsync&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
cd /usr/ports/distfiles/&lt;br /&gt;
fetch http://downloads.mysql.com/archives/mysql-5.5/mysql-5.5.4-m3.tar.gz&lt;br /&gt;
cd /usr/ports/databases/mysql55-server&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
cd /usr/ports/distfiles/&lt;br /&gt;
fetch http://downloads.php.net/johannes/php-5.3.2.tar.bz2&lt;br /&gt;
cd /usr/ports/lang/php52&lt;br /&gt;
make install clean&lt;br /&gt;
(build apache module)&lt;br /&gt;
&lt;br /&gt;
cd /usr/ports/lang/php5-extensions&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
cd /usr/ports/www/apache22&lt;br /&gt;
make install clean&lt;br /&gt;
&lt;br /&gt;
cd /usr/local/www/&lt;br /&gt;
fetch http://download.wikimedia.org/mediawiki/1.19/mediawiki-1.19.2.tar.gz&lt;br /&gt;
tar xzf mediawiki-1.19.2.tar.gz&lt;br /&gt;
mv mediawiki-1.19.2 wiki&lt;br /&gt;
&lt;br /&gt;
vi /usr/local/etc/apache22/httpd.conf&lt;br /&gt;
DocumentRoot &amp;quot;/usr/local/www/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Include etc/apache22/extra/vhost-wiki.conf&lt;br /&gt;
Listen 443&lt;br /&gt;
&lt;br /&gt;
&amp;lt;IfModule mod_php5.c&amp;gt;&lt;br /&gt;
    &amp;lt;FilesMatch &amp;quot;\.ph(p3?|tml)$&amp;quot;&amp;gt;&lt;br /&gt;
        SetHandler application/x-httpd-php&lt;br /&gt;
    &amp;lt;/FilesMatch&amp;gt;&lt;br /&gt;
    &amp;lt;FilesMatch &amp;quot;\.phps$&amp;quot;&amp;gt;&lt;br /&gt;
        SetHandler application/x-httpd-php-source&lt;br /&gt;
    &amp;lt;/FilesMatch&amp;gt;&lt;br /&gt;
    # To re-enable php in user directories comment the following lines&lt;br /&gt;
    # (from &amp;lt;IfModule ...&amp;gt; to &amp;lt;/IfModule&amp;gt;.) Do NOT set it to On as it&lt;br /&gt;
    # prevents .htaccess files from disabling it.&lt;br /&gt;
    &amp;lt;IfModule mod_userdir.c&amp;gt;&lt;br /&gt;
        &amp;lt;Directory /home/*/public_html&amp;gt;&lt;br /&gt;
            php_admin_value engine Off&lt;br /&gt;
        &amp;lt;/Directory&amp;gt;&lt;br /&gt;
    &amp;lt;/IfModule&amp;gt;&lt;br /&gt;
&amp;lt;/IfModule&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; /usr/local/etc/apache22/extra/vhost-wiki.conf &lt;br /&gt;
&amp;lt;VirtualHost *:443&amp;gt;&lt;br /&gt;
        ServerAdmin support@johncompanies.com&lt;br /&gt;
&lt;br /&gt;
        DocumentRoot /usr/local/www/wiki&lt;br /&gt;
#        &amp;lt;Directory /&amp;gt;&lt;br /&gt;
#                Options FollowSymLinks&lt;br /&gt;
#                AllowOverride None&lt;br /&gt;
#                Order deny,allow&lt;br /&gt;
#        &amp;lt;/Directory&amp;gt;&lt;br /&gt;
        &amp;lt;Directory /usr/local/www/wiki&amp;gt;&lt;br /&gt;
                Options Indexes FollowSymLinks MultiViews&lt;br /&gt;
                Deny from all&lt;br /&gt;
                AllowOverride AuthConfig&lt;br /&gt;
                Order allow,deny&lt;br /&gt;
                DirectoryIndex index.php&lt;br /&gt;
                #Allow from 69.55.233.195&lt;br /&gt;
                #Allow from boody.dyndns.org&lt;br /&gt;
        &amp;lt;/Directory&amp;gt;&lt;br /&gt;
&lt;br /&gt;
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/&lt;br /&gt;
        &amp;lt;Directory &amp;quot;/usr/lib/cgi-bin&amp;quot;&amp;gt;&lt;br /&gt;
                AllowOverride None&lt;br /&gt;
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch&lt;br /&gt;
                Order allow,deny&lt;br /&gt;
                Allow from all&lt;br /&gt;
        &amp;lt;/Directory&amp;gt;&lt;br /&gt;
&lt;br /&gt;
        ErrorLog /var/log/httpd-error.log&lt;br /&gt;
&lt;br /&gt;
        # Possible values include: debug, info, notice, warn, error, crit,&lt;br /&gt;
        # alert, emerg.&lt;br /&gt;
        LogLevel warn&lt;br /&gt;
&lt;br /&gt;
        CustomLog /var/log/httpd-access.log combined&lt;br /&gt;
&lt;br /&gt;
    Alias /doc/ &amp;quot;/usr/share/doc/&amp;quot;&lt;br /&gt;
    &amp;lt;Directory &amp;quot;/usr/share/doc/&amp;quot;&amp;gt;&lt;br /&gt;
        Options Indexes MultiViews FollowSymLinks&lt;br /&gt;
        AllowOverride None&lt;br /&gt;
        Order deny,allow&lt;br /&gt;
        Deny from all&lt;br /&gt;
        Allow from 127.0.0.0/255.0.0.0 ::1/128&lt;br /&gt;
    &amp;lt;/Directory&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    SSLEngine on&lt;br /&gt;
    SSLCertificateFile /usr/local/etc/apache22/ssl/server.crt&lt;br /&gt;
    SSLCertificateKeyFile /usr/local/etc/apache22/ssl/server.key&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/VirtualHost&amp;gt;&lt;br /&gt;
&lt;br /&gt;
mkdir ssl&lt;br /&gt;
cd ssl&lt;br /&gt;
&lt;br /&gt;
openssl req -days 1999 -new -x509 -nodes -out server.crt -keyout server.key&lt;br /&gt;
US&lt;br /&gt;
CA&lt;br /&gt;
San Diego&lt;br /&gt;
johncompanies.com&lt;br /&gt;
johncompanies.com&lt;br /&gt;
wiki.johncompanies.com&lt;br /&gt;
support@johncompanies.com&lt;br /&gt;
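The same self-signed certificate can be generated without answering the prompts by passing -subj; a sketch (the /tmp paths are illustrative — the real files belong in the ssl directory created above, and the field values mirror the seven interactive answers):

```shell
# non-interactive equivalent of the openssl req dialog above
openssl req -days 1999 -new -x509 -nodes \
  -out /tmp/server.crt -keyout /tmp/server.key \
  -subj '/C=US/ST=CA/L=San Diego/O=johncompanies.com/CN=wiki.johncompanies.com'
```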
&lt;br /&gt;
cat &amp;gt; /usr/local/www/wiki/.htaccess&lt;br /&gt;
AuthType Basic&lt;br /&gt;
AuthUserFile /usr/local/etc/apache22/wiki.passwd&lt;br /&gt;
AuthName wiki&lt;br /&gt;
require valid-user&lt;br /&gt;
satisfy any&lt;br /&gt;
&lt;br /&gt;
cd /usr/local/etc/apache22&lt;br /&gt;
htpasswd -c wiki.passwd admin&lt;br /&gt;
&lt;br /&gt;
https://69.55.229.8/index.php&lt;br /&gt;
&lt;br /&gt;
use mysql (innodb)&lt;br /&gt;
wiki name: JCWiki&lt;br /&gt;
Support / (mail pass) / support@johncompanies.com&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; /usr/local/www/wiki/LocalSettings.php&lt;br /&gt;
&lt;br /&gt;
&amp;lt;?php&lt;br /&gt;
# This file was automatically generated by the MediaWiki 1.19.2&lt;br /&gt;
# installer. If you make manual changes, please keep track in case you&lt;br /&gt;
# need to recreate them later.&lt;br /&gt;
#&lt;br /&gt;
# See includes/DefaultSettings.php for all configurable settings&lt;br /&gt;
# and their default values, but don&#039;t forget to make changes in _this_&lt;br /&gt;
# file, not there.&lt;br /&gt;
#&lt;br /&gt;
# Further documentation for configuration settings may be found at:&lt;br /&gt;
# http://www.mediawiki.org/wiki/Manual:Configuration_settings&lt;br /&gt;
&lt;br /&gt;
# Protect against web entry&lt;br /&gt;
if ( !defined( &#039;MEDIAWIKI&#039; ) ) {&lt;br /&gt;
	exit;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
## Uncomment this to disable output compression&lt;br /&gt;
# $wgDisableOutputCompression = true;&lt;br /&gt;
&lt;br /&gt;
$wgSitename      = &amp;quot;JCWiki&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
## The URL base path to the directory containing the wiki;&lt;br /&gt;
## defaults for all runtime URL paths are based off of this.&lt;br /&gt;
## For more information on customizing the URLs please see:&lt;br /&gt;
## http://www.mediawiki.org/wiki/Manual:Short_URL&lt;br /&gt;
$wgScriptPath       = &amp;quot;&amp;quot;;&lt;br /&gt;
$wgScriptExtension  = &amp;quot;.php&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
## The protocol and server name to use in fully-qualified URLs&lt;br /&gt;
$wgServer           = &amp;quot;https://69.55.229.8&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
## The relative URL path to the skins directory&lt;br /&gt;
$wgStylePath        = &amp;quot;$wgScriptPath/skins&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
## The relative URL path to the logo.  Make sure you change this from the default,&lt;br /&gt;
## or else you&#039;ll overwrite your logo when you upgrade!&lt;br /&gt;
#$wgLogo             = &amp;quot;$wgStylePath/common/images/wiki.png&amp;quot;;&lt;br /&gt;
$wgLogo             = &amp;quot;$wgStylePath/common/images/jclogo.gif&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
## UPO means: this is also a user preference option&lt;br /&gt;
&lt;br /&gt;
$wgEnableEmail      = true;&lt;br /&gt;
$wgEnableUserEmail  = true; # UPO&lt;br /&gt;
&lt;br /&gt;
$wgEmergencyContact = &amp;quot;apache@69.55.229.8&amp;quot;;&lt;br /&gt;
$wgPasswordSender   = &amp;quot;apache@69.55.229.8&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
$wgEnotifUserTalk      = false; # UPO&lt;br /&gt;
$wgEnotifWatchlist     = false; # UPO&lt;br /&gt;
$wgEmailAuthentication = true;&lt;br /&gt;
&lt;br /&gt;
## Database settings&lt;br /&gt;
$wgDBtype           = &amp;quot;mysql&amp;quot;;&lt;br /&gt;
$wgDBserver         = &amp;quot;localhost&amp;quot;;&lt;br /&gt;
$wgDBname           = &amp;quot;my_wiki&amp;quot;;&lt;br /&gt;
$wgDBuser           = &amp;quot;root&amp;quot;;&lt;br /&gt;
$wgDBpassword       = &amp;quot;&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# MySQL specific settings&lt;br /&gt;
$wgDBprefix         = &amp;quot;&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# MySQL table options to use during installation or update&lt;br /&gt;
$wgDBTableOptions   = &amp;quot;ENGINE=InnoDB, DEFAULT CHARSET=binary&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Experimental charset support for MySQL 5.0.&lt;br /&gt;
$wgDBmysql5 = false;&lt;br /&gt;
&lt;br /&gt;
## Shared memory settings&lt;br /&gt;
$wgMainCacheType    = CACHE_NONE;&lt;br /&gt;
$wgMemCachedServers = array();&lt;br /&gt;
&lt;br /&gt;
## To enable image uploads, make sure the &#039;images&#039; directory&lt;br /&gt;
## is writable, then set this to true:&lt;br /&gt;
$wgEnableUploads  = false;&lt;br /&gt;
#$wgUseImageMagick = true;&lt;br /&gt;
#$wgImageMagickConvertCommand = &amp;quot;/usr/bin/convert&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# InstantCommons allows wiki to use images from http://commons.wikimedia.org&lt;br /&gt;
$wgUseInstantCommons  = false;&lt;br /&gt;
&lt;br /&gt;
## If you use ImageMagick (or any other shell command) on a&lt;br /&gt;
## Linux server, this will need to be set to the name of an&lt;br /&gt;
## available UTF-8 locale&lt;br /&gt;
$wgShellLocale = &amp;quot;en_US.utf8&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
## If you want to use image uploads under safe mode,&lt;br /&gt;
## create the directories images/archive, images/thumb and&lt;br /&gt;
## images/temp, and make them all writable. Then uncomment&lt;br /&gt;
## this, if it&#039;s not already uncommented:&lt;br /&gt;
#$wgHashedUploadDirectory = false;&lt;br /&gt;
&lt;br /&gt;
## Set $wgCacheDirectory to a writable directory on the web server&lt;br /&gt;
## to make your wiki go slightly faster. The directory should not&lt;br /&gt;
## be publically accessible from the web.&lt;br /&gt;
#$wgCacheDirectory = &amp;quot;$IP/cache&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Site language code, should be one of the list in ./languages/Names.php&lt;br /&gt;
$wgLanguageCode = &amp;quot;en&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
$wgSecretKey = &amp;quot;abc699ef26890b49b4055430f8ebbd25e84cce21a7e53aeaec4d4313af4c9739&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Site upgrade key. Must be set to a string (default provided) to turn on the&lt;br /&gt;
# web installer while LocalSettings.php is in place&lt;br /&gt;
$wgUpgradeKey = &amp;quot;3196710f4a7d7332&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
## Default skin: you can change the default skin. Use the internal symbolic&lt;br /&gt;
## names, ie &#039;standard&#039;, &#039;nostalgia&#039;, &#039;cologneblue&#039;, &#039;monobook&#039;, &#039;vector&#039;:&lt;br /&gt;
$wgDefaultSkin = &amp;quot;vector&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
## For attaching licensing metadata to pages, and displaying an&lt;br /&gt;
## appropriate copyright notice / icon. GNU Free Documentation&lt;br /&gt;
## License and Creative Commons licenses are supported so far.&lt;br /&gt;
$wgRightsPage = &amp;quot;&amp;quot;; # Set to the title of a wiki page that describes your license/copyright&lt;br /&gt;
$wgRightsUrl  = &amp;quot;&amp;quot;;&lt;br /&gt;
$wgRightsText = &amp;quot;&amp;quot;;&lt;br /&gt;
$wgRightsIcon = &amp;quot;&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Path to the GNU diff3 utility. Used for conflict resolution.&lt;br /&gt;
$wgDiff3 = &amp;quot;/usr/bin/diff3&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
# Query string length limit for ResourceLoader. You should only set this if&lt;br /&gt;
# your web server has a query string length limit (then set it to that limit),&lt;br /&gt;
# or if you have suhosin.get.max_value_length set in php.ini (then set it to&lt;br /&gt;
# that value)&lt;br /&gt;
$wgResourceLoaderMaxQueryLength = -1;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# End of automatically generated settings.&lt;br /&gt;
# Add more configuration options below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== copy/backup wiki ==&lt;br /&gt;
on main/primary wiki:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/usr/local/etc/rc.d/mysql-server stop&lt;br /&gt;
ssh 69.55.230.18 &amp;quot;/etc/init.d/mysql stop&amp;quot;&lt;br /&gt;
rsync -av /var/db/mysql/my_wiki/ 69.55.230.18:/var/lib/mysql/my_wiki/&lt;br /&gt;
rsync -av /var/db/mysql/ib* 69.55.230.18:/var/lib/mysql/&lt;br /&gt;
/usr/local/etc/rc.d/mysql-server start&lt;br /&gt;
ssh 69.55.230.18 &amp;quot;/etc/init.d/mysql start&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=Routine_Maintenance&amp;diff=2207</id>
		<title>Routine Maintenance</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=Routine_Maintenance&amp;diff=2207"/>
		<updated>2020-02-25T18:38:08Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Check NetHere */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Daily Tasks =&lt;br /&gt;
&lt;br /&gt;
== check load graphs == &lt;br /&gt;
&lt;br /&gt;
Click on the Load link in mgmt &lt;br /&gt;
&lt;br /&gt;
This screen shows you load levels on our servers and network traffic for critical machines (firewalls, backup servers).&lt;br /&gt;
&lt;br /&gt;
If you see load that is high or increasing:&lt;br /&gt;
&lt;br /&gt;
FreeBSD: &lt;br /&gt;
run [[VPS_Management#jtop|jtop]] (or [[VPS_Management#jt|jt]] on FreeBSD &amp;gt; 7.x) and see if there are any runaway processes.  Here are some examples of entries in top that are&lt;br /&gt;
definitely runaway processes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;79481 root      64   0  2256K  1056K CPU1   1  58:16 87.40% 87.40% nano&lt;br /&gt;
50650   1000    64   0  1852K  1112K RUN    0 207.9H 84.08% 84.08% screen&lt;br /&gt;
14829 www        2   0 39100K 31736K accept 0  104:24  46.54%  6.54% httpd&lt;br /&gt;
42065 root      61   0  1300K   844K RUN    1  47.8H 91.36% 91.36% ee&lt;br /&gt;
1328 www       56   0 18440K 10796K CPU1   0  64.4H 97.71% 97.71% httpd&lt;br /&gt;
26251 user      57   0  6124K  1160K CPU1   1  82.9H 98.44% 98.44% screen&lt;br /&gt;
89874 root      60   0  1352K   892K RUN    1  33.8H 65.82% 65.82% dialog&lt;br /&gt;
38656   1000    64   0  3088K  2136K CPU0   0 806:13 97.95% 97.95% StutBot&lt;br /&gt;
27630 root      64   0  1396K   972K RUN    1  76.8H 86.47% 86.47% ee&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Linux:&lt;br /&gt;
run [[VPS_Management#vwe|vwe]] to see which VPSs have high loads. From there, run &amp;lt;tt&amp;gt;[[VPS_Management#vp|vp]] &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt; and/or &amp;lt;tt&amp;gt;[[VPS_Management#vt|vt]] &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt; to see what&#039;s going on in that system.&lt;br /&gt;
[[VPS_Management#vzstat|vzstat]] will also give you a good picture of what&#039;s going on; systems with high numbers in the mlat column are the likely culprits.&lt;br /&gt;
&lt;br /&gt;
Examples of out-of-control processes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;12183 nobody    16   0  4916 1348  1340 R    45.5  0.0  4249m httpd&lt;br /&gt;
29266 #502      16   0  1852  796   792 R    22.5  0.0  1104m vim&lt;br /&gt;
23860 #41       16   0  5472 5472  2076 R    98.9  0.2  31:41 python&lt;br /&gt;
19227 bin       19   0  1688  716   652 R    99.9  0.0 321:08 wtrs_ui&lt;br /&gt;
 7762 apache    16   0   268  236   224 R    85.7  0.0  1010m ptrace&lt;br /&gt;
 4624 #501      20   0  4304 2400  2044 R    53.6  0.1 284:32 YoSucker&lt;br /&gt;
20451 #506      20   0  1876  820   816 R    17.2  0.0 169:35 vim&lt;br /&gt;
 8834 #514      20   0   900  724   672 R    77.6  0.0 382:30 neostats&lt;br /&gt;
31815 apache    14   0  3176 3176  1696 R    74.4  0.1   6:15 counter&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Just kill -9 them and be done with it.&lt;br /&gt;
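The eyeball triage above can be given a scripted first pass. This is a sketch, not an existing tool: the `filter_runaways` name, the 90% default threshold, and the `ps` column set are assumptions. It only lists candidates for manual review before any `kill -9`.

```shell
# filter_runaways: read "PID %CPU COMMAND" lines (header included) on stdin
# and print processes at or above a CPU threshold (default 90%).
# Assumes the column layout of `ps -axo pid,pcpu,comm`.
filter_runaways() {
    awk -v t="${1:-90}" 'NR > 1 && $2 + 0 >= t { print $1, $3 }'
}

# Review the output by hand before killing anything:
#   ps -axo pid,pcpu,comm | filter_runaways 90
```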
&lt;br /&gt;
Also, any time you see `kmod` or `ptrace` - kill those immediately no matter how much CPU they are using - they are attempts to exploit the Linux ptrace bug.  They won&#039;t work, but they suck a lot of CPU...&lt;br /&gt;
&lt;br /&gt;
Also, any other processes that are at 90-100% CPU usage and have been running for a long period of time should be killed, except for mysqld processes on FreeBSD - see below.&lt;br /&gt;
&lt;br /&gt;
However, there is an exception:&lt;br /&gt;
&lt;br /&gt;
if it is a mysqld, we don&#039;t want to kill their database. What you want to do is run &amp;lt;tt&amp;gt;[[VPS_Management#jpid|jpid]] &amp;lt;pid&amp;gt;&amp;lt;/tt&amp;gt; to see who owns it, and then email them the paste containing the instructions for the nanny. Or you can simply do a &amp;lt;tt&amp;gt;kill -1 PID&amp;lt;/tt&amp;gt; on the process to restart it.&lt;br /&gt;
&lt;br /&gt;
=== Load averages jump at night ===&lt;br /&gt;
&lt;br /&gt;
The load averages on the FreeBSD systems may jump up at night between 1 and 4 am because the backups are running. If this is what is causing the jump in load, you will see processes like `rsync` in top eating a lot of CPU time.&lt;br /&gt;
&lt;br /&gt;
== check backups ==&lt;br /&gt;
&lt;br /&gt;
mgmt -&amp;gt; Monitoring -&amp;gt; Backups and make sure every machine was backed up the previous night. &lt;br /&gt;
Also look at df on backup1 and backup2 to make sure no disk is approaching full, though bb should warn us in advance. Please note: errors encountered when a backup script runs on any particular system will generate an email to support@johncompanies.com, so you will know immediately the next day if a directory to be backed up has been moved or no longer exists. A paste exists to notify the customer of a non-existent file/dir.&lt;br /&gt;
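The df check can be sketched as a small filter (the `check_full` name and the 90% threshold are assumptions; bb remains the authoritative warning):

```shell
# check_full: read `df -P` output on stdin and print any mount point whose
# Use% is at or above a threshold (default 90%). Assumes the POSIX df -P
# column layout: Filesystem Blocks Used Available Capacity Mounted-on.
check_full() {
    awk -v t="${1:-90}" 'NR > 1 { sub(/%/, "", $5); if ($5 + 0 >= t) print $6, $5 "%" }'
}

# Run on backup1/backup2:
#   df -P | check_full 90
```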
&lt;br /&gt;
== check bb for warnings  ==&lt;br /&gt;
mgmt -&amp;gt; BigBrother&lt;br /&gt;
&lt;br /&gt;
Some events don&#039;t generate pages (on purpose). You will only see them by going to the bb main page.&lt;br /&gt;
&lt;br /&gt;
== check jail5 for crashed VPSs  ==&lt;br /&gt;
&lt;br /&gt;
On jail5&lt;br /&gt;
  &lt;br /&gt;
  notrunning&lt;br /&gt;
&lt;br /&gt;
To restart a VPS &lt;br /&gt;
&lt;br /&gt;
  vm restart col0XXXX&lt;br /&gt;
&lt;br /&gt;
== Check NetHere  ==&lt;br /&gt;
&lt;br /&gt;
Check the NetHere servers.  To get into the servers, log in to&lt;br /&gt;
admin-1.nethere.net and su - to root.&lt;br /&gt;
&lt;br /&gt;
=== Mail systems ===&lt;br /&gt;
Check for possible SPAMMERs.&lt;br /&gt;
&lt;br /&gt;
==== Incoming ====&lt;br /&gt;
Check mta-1 and mta-2 count of customer logins for possible&lt;br /&gt;
customer SPAM compromises.&lt;br /&gt;
&lt;br /&gt;
  login_count /logs/maillog | tail -30&lt;br /&gt;
  &lt;br /&gt;
==== Outgoing ====&lt;br /&gt;
&lt;br /&gt;
Check outgoing queues on relay-1 and relay-2&lt;br /&gt;
  &lt;br /&gt;
  mail_count | tail -30&lt;br /&gt;
&lt;br /&gt;
To clean up the outgoing queues of unwanted SPAM on relay-1 and relay-2:&lt;br /&gt;
&lt;br /&gt;
  mail_cleanup [ &amp;lt;sender&#039;s domain/username/message id&amp;gt; ... ]&lt;br /&gt;
&lt;br /&gt;
To remove just the emails from certain senders:&lt;br /&gt;
&lt;br /&gt;
  rmmails &amp;lt;sender&#039;s domain/username/message id&amp;gt; ...&lt;br /&gt;
&lt;br /&gt;
=== Nagios ===&lt;br /&gt;
&lt;br /&gt;
Check for other problems on NetHere servers&lt;br /&gt;
&lt;br /&gt;
  https://nagios.nethere.net&lt;br /&gt;
&lt;br /&gt;
=== Cacti ===&lt;br /&gt;
&lt;br /&gt;
Check bandwidth usage on servers&lt;br /&gt;
&lt;br /&gt;
  https://cacti.nethere.net&lt;br /&gt;
&lt;br /&gt;
= Monthly Tasks =&lt;br /&gt;
== rotate pine sent mail (1st of month) ==&lt;br /&gt;
On the 1st of the month, before any emails are sent out, quit out of pine, then log back in. Sent mail from last month will be archived.&lt;br /&gt;
If you mess up and do it on the 3rd (for example), you can go into the previous month&#039;s saved email and save emails from the current month into the &amp;lt;tt&amp;gt;sent-mail&amp;lt;/tt&amp;gt; (current month) mailbox.&lt;br /&gt;
&lt;br /&gt;
== b/w caps ==&lt;br /&gt;
On the 1st: remove any bwcaps put into the firewall (this only really applies if a bwcap was added because someone went over on b/w):&lt;br /&gt;
&lt;br /&gt;
 ipfw list|grep pipe&lt;br /&gt;
 ipfw del [each rule listed]&lt;br /&gt;
&lt;br /&gt;
NOTE: this cronjob on newgateway will do some of that for you, provided you used one of the following pipe #s:&lt;br /&gt;
 0 0 1 * * /sbin/ipfw del 3  4 5 17331&lt;br /&gt;
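Extracting the rule numbers from `ipfw list` can be sketched like this (the `pipe_rules` name is an assumption; always review the matched rules before deleting, since the `ipfw del` loop is destructive):

```shell
# pipe_rules: read `ipfw list` output on stdin and print the rule number
# (first field) of every rule that references a pipe.
pipe_rules() {
    awk '/pipe/ { print $1 }'
}

# Review first, then:
#   for r in $(ipfw list | pipe_rules); do ipfw del "$r"; done
```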
&lt;br /&gt;
-------------------&lt;br /&gt;
&lt;br /&gt;
We really don’t do this anymore since we have centralized traffic accounting with netflow, but for posterity:&lt;br /&gt;
&lt;br /&gt;
Make sure all machines reset counters to 0 after midnight on the 1st&lt;br /&gt;
Make sure they dumped a counter&lt;br /&gt;
&lt;br /&gt;
On each jail run:&lt;br /&gt;
 trafficgather.pl&lt;br /&gt;
&lt;br /&gt;
And on each virt:&lt;br /&gt;
 linuxtrafficgather.pl&lt;br /&gt;
&lt;br /&gt;
== Monthly RAID checks ==&lt;br /&gt;
&lt;br /&gt;
Every month we check the health of, and verify the parity on, all our RAID-based systems.&lt;br /&gt;
To facilitate this, we&#039;ve created a simple script to start the process:&lt;br /&gt;
&lt;br /&gt;
 sh /root/verify.sh&lt;br /&gt;
&lt;br /&gt;
=== Adaptec controllers ===&lt;br /&gt;
Here&#039;s some sample output:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /usr/local/www/scripts# sh /root/verify.sh&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; container list /f&lt;br /&gt;
Executing: container list /full=TRUE&lt;br /&gt;
Num          Total  Oth Chunk          Scsi   Partition&lt;br /&gt;
Creation        System&lt;br /&gt;
Label Type   Size   Ctr Size   Usage   B:ID:L Offset:Size   State   RO Lk Task    Done%  Ent&lt;br /&gt;
Date   Time      Files&lt;br /&gt;
----- ------ ------ --- ------ ------- ------ ------------- ------- -- -- ------- ------ ---&lt;br /&gt;
------ -------- ------&lt;br /&gt;
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
 /dev/aacd0           mirror0          0:00:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
&lt;br /&gt;
 1    Mirror 33.9GB            Open    0:02:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
 /dev/aacd1           mirror1          0:03:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; disk list /f&lt;br /&gt;
Executing: disk list /full=TRUE&lt;br /&gt;
&lt;br /&gt;
B:ID:L  Device Type     Removable media  Vendor-ID Product-ID        Rev   Blocks    Bytes/Bl&lt;br /&gt;
ock Usage            Shared Rate&lt;br /&gt;
------  --------------  ---------------  --------- ----------------  ----- --------- --------&lt;br /&gt;
--- ---------------- ------ ----&lt;br /&gt;
0:00:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:01:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:02:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:03:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; disk show smart&lt;br /&gt;
Executing: disk show smart&lt;br /&gt;
&lt;br /&gt;
        Smart    Method of         Enable&lt;br /&gt;
        Capable  Informational     Exception  Performance  Error&lt;br /&gt;
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count&lt;br /&gt;
------  -------  ----------------  ---------  -----------  ------&lt;br /&gt;
0:00:0     Y            6             Y           N             0&lt;br /&gt;
0:01:0     Y            6             Y           N             0&lt;br /&gt;
0:02:0     Y            6             Y           N             0&lt;br /&gt;
0:03:0     Y            6             Y           N             0&lt;br /&gt;
0:06:0     N&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; task list&lt;br /&gt;
Executing: task list&lt;br /&gt;
&lt;br /&gt;
Controller Tasks&lt;br /&gt;
&lt;br /&gt;
TaskId Function  Done%  Container State Specific1 Specific2&lt;br /&gt;
------ -------- ------- --------- ----- --------- ---------&lt;br /&gt;
&lt;br /&gt;
No tasks currently running on controller&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; dia sh hi&lt;br /&gt;
Executing: diagnostic show history&lt;br /&gt;
No switches specified, defaulting to &amp;quot;/current&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 *** HISTORY BUFFER FROM CURRENT CONTROLLER RUN ***&lt;br /&gt;
&lt;br /&gt;
[00]: GetDiskLogEntry: container - 1, entry return 0&lt;br /&gt;
[01]: Container 1 started SCRUB task&lt;br /&gt;
[02]: Starting Mirror:1 scrub&lt;br /&gt;
[03]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[04]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[05]: UpdateDiskLogIndex - Set   - container 0, index 1&lt;br /&gt;
[06]: GetDiskLogEntry: container - 0, entry return 1&lt;br /&gt;
[07]: Container 0 started SCRUB task&lt;br /&gt;
[08]: Starting Mirror:0 scrub&lt;br /&gt;
[09]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[10]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[11]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[12]: Clear disk log: sector - 80, driveno 2&lt;br /&gt;
[13]: Clear disk log: sector - 80, driveno 3&lt;br /&gt;
[14]: Container 1 completed SCRUB task:&lt;br /&gt;
[15]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[16]: Clear disk log: sector - 81, driveno 1&lt;br /&gt;
[17]: Clear disk log: sector - 81, driveno 0&lt;br /&gt;
[18]: Container 0 completed SCRUB task:&lt;br /&gt;
[19]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[20]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[21]: Container 0 started SCRUB task&lt;br /&gt;
[22]: Starting Mirror:0 scrub&lt;br /&gt;
[23]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[24]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[25]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[26]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[27]: Container 1 started SCRUB task&lt;br /&gt;
[28]: Starting Mirror:1 scrub&lt;br /&gt;
[29]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[30]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[31]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[32]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[33]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[34]: Container 1 completed SCRUB task:&lt;br /&gt;
[35]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[36]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[37]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[38]: Container 0 completed SCRUB task:&lt;br /&gt;
[39]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[40]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[41]: Container 0 started SCRUB task&lt;br /&gt;
[42]: Starting Mirror:0 scrub&lt;br /&gt;
[43]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[44]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[45]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[46]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[47]: Container 1 started SCRUB task&lt;br /&gt;
[48]: Starting Mirror:1 scrub&lt;br /&gt;
[49]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[50]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[51]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[52]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[53]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[54]: Container 1 completed SCRUB task:&lt;br /&gt;
[55]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[56]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[57]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[58]: Container 0 completed SCRUB task:&lt;br /&gt;
[59]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[60]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[61]: Container 0 started SCRUB task&lt;br /&gt;
[62]: Starting Mirror:0 scrub&lt;br /&gt;
[63]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[64]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[65]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[66]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[67]: Container 1 started SCRUB task&lt;br /&gt;
[68]: Starting Mirror:1 scrub&lt;br /&gt;
[69]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[70]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[71]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[72]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[73]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[74]: Container 1 completed SCRUB task:&lt;br /&gt;
[75]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[76]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[77]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[78]: Container 0 completed SCRUB task:&lt;br /&gt;
[79]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[80]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[81]: Container 0 started SCRUB task&lt;br /&gt;
[82]: Starting Mirror:0 scrub&lt;br /&gt;
[83]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[84]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[85]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[86]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[87]: Container 1 started SCRUB task&lt;br /&gt;
[88]: Starting Mirror:1 scrub&lt;br /&gt;
[89]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[90]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[91]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[92]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[93]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[94]: Container 1 completed SCRUB task:&lt;br /&gt;
[95]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[96]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[97]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[98]: Container 0 completed SCRUB task:&lt;br /&gt;
[99]:&lt;br /&gt;
&lt;br /&gt;
========================&lt;br /&gt;
History Output Complete.&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt;&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
press enter when ready to run verify                                                 &amp;lt;INS&amp;gt;&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 0&lt;br /&gt;
Executing: container scrub 0&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 1&lt;br /&gt;
Executing: container scrub 1&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
when done run:                                                                       &lt;br /&gt;
&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nov  1 10:32:46 mail /kernel: aac0: **Monitor** Container 0 started SCRUB task&lt;br /&gt;
Nov  1 10:32:47 mail /kernel: aac0: **Monitor** Container 1 started SCRUB task&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here&#039;s an analysis of what we&#039;re seeing and what we&#039;re looking for:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; container list /f&lt;br /&gt;
Executing: container list /full=TRUE&lt;br /&gt;
Num          Total  Oth Chunk          Scsi   Partition&lt;br /&gt;
Creation        System&lt;br /&gt;
Label Type   Size   Ctr Size   Usage   B:ID:L Offset:Size   State   RO Lk Task    Done%  Ent&lt;br /&gt;
Date   Time      Files&lt;br /&gt;
----- ------ ------ --- ------ ------- ------ ------------- ------- -- -- ------- ------ ---&lt;br /&gt;
------ -------- ------&lt;br /&gt;
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
 /dev/aacd0           mirror0          0:00:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
&lt;br /&gt;
 1    Mirror 33.9GB            Open    0:02:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
 /dev/aacd1           mirror1          0:03:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is showing you the health of the arrays. You&#039;re looking for &#039;&#039;Normal&#039;&#039; under the State column, and the absence of a ! in the Offset:Size column - sometimes, you&#039;ll see this:&lt;br /&gt;
 64.0KB!33.9GB &lt;br /&gt;
&lt;br /&gt;
That indicates a problem.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; disk show smart&lt;br /&gt;
Executing: disk show smart&lt;br /&gt;
&lt;br /&gt;
        Smart    Method of         Enable&lt;br /&gt;
        Capable  Informational     Exception  Performance  Error&lt;br /&gt;
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count&lt;br /&gt;
------  -------  ----------------  ---------  -----------  ------&lt;br /&gt;
0:00:0     Y            6             Y           N             0&lt;br /&gt;
0:01:0     Y            6             Y           N             0&lt;br /&gt;
0:02:0     Y            6             Y           N             0&lt;br /&gt;
0:03:0     Y            6             Y           N             0&lt;br /&gt;
0:06:0     N&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows you the SMART report output. You&#039;re looking for non-zero values in the Error Count column.&lt;br /&gt;
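That check can also be done mechanically. A sketch (the `smart_errors` name is an assumption; the column positions are taken from the sample report, not from Adaptec documentation):

```shell
# smart_errors: read `disk show smart` output on stdin and print any disk
# (B:ID:L) with a non-zero Error Count. Column positions match the sample
# report; non-SMART devices (short lines) are skipped.
smart_errors() {
    awk '$1 ~ /^[0-9]+:[0-9]+:[0-9]+$/ && NF >= 6 && $6 + 0 > 0 { print $1, $6 }'
}

# Usage: pipe the `disk show smart` report into smart_errors.
```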
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; task list&lt;br /&gt;
Executing: task list&lt;br /&gt;
&lt;br /&gt;
Controller Tasks&lt;br /&gt;
&lt;br /&gt;
TaskId Function  Done%  Container State Specific1 Specific2&lt;br /&gt;
------ -------- ------- --------- ----- --------- ---------&lt;br /&gt;
&lt;br /&gt;
No tasks currently running on controller&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for the absence of running tasks; it would be a bad sign to see a rebuild or verify running that you didn&#039;t initiate.&lt;br /&gt;
&lt;br /&gt;
With the history output, you&#039;re looking for any anomalies or events since the last time a verify was run. If you see a drive with lots of problems, you may want to take backups before allowing the verify to run since it could replicate errors onto the good drive.&lt;br /&gt;
&lt;br /&gt;
After you see the history output, it will prompt you to press enter to run the verify. If you&#039;re happy with all the output you&#039;re seeing (mirror is healthy, history looks good), it&#039;s safe to proceed; otherwise ^C to exit. After you hit enter, it will start the verify and begin tailing the messages log file (so you can easily see when the verify is complete). Here&#039;s what that&#039;ll look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Nov  1 14:38:08 mail /kernel: aac0: **Monitor** Container 1 completed SCRUB task:&lt;br /&gt;
Nov  1 14:46:45 mail /kernel: aac0: **Monitor** Container 0 completed SCRUB task:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, putting it all together, after hitting enter to start the verify, you&#039;ll see:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 0&lt;br /&gt;
Executing: container scrub 0&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 1&lt;br /&gt;
Executing: container scrub 1&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
when done run:                                                                       &lt;br /&gt;
&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nov  1 10:32:46 mail /kernel: aac0: **Monitor** Container 0 started SCRUB task&lt;br /&gt;
Nov  1 10:32:47 mail /kernel: aac0: **Monitor** Container 1 started SCRUB task&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the scrub(s) (verify) are complete - if the server has multiple logical drives, it will run both in parallel - you should exit the tail of the log file (^C) and run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will show you the diagnostic history; you&#039;re looking for the results of the most recent scrub:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[100]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[101]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[102]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[103]: Container 1 completed SCRUB task:&lt;br /&gt;
[104]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[105]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[106]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[107]: Container 0 completed SCRUB task:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
^C to exit the RAID CLI.&lt;br /&gt;
&lt;br /&gt;
If you see:&lt;br /&gt;
&lt;br /&gt;
 [104]: Mirror Scrub Container:0   ErrorsFound:5&lt;br /&gt;
&lt;br /&gt;
You&#039;ll want to rerun the verify on that drive until it shows 0, or perhaps replace the drive - you should be able to see from the output which drive had the problem.&lt;br /&gt;
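Scanning the history for bad scrub results can be sketched as a filter (the `scrub_errors` name is an assumption; the line format is as shown in the diagnostic history output):

```shell
# scrub_errors: read `dia sh hi` output on stdin and print any
# "Mirror Scrub" result line with a non-zero ErrorsFound count.
scrub_errors() {
    awk -F 'ErrorsFound:' '/Mirror Scrub/ && $2 + 0 > 0 { print }'
}
```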
&lt;br /&gt;
Depending on the size and how busy the drive is, the verify can take anywhere from an hour to the better part of a day.&lt;br /&gt;
&lt;br /&gt;
You will notice that the diagnostic history is not shown on our modern Adaptec cards (i.e. any Adaptec card not in a Dell 2450). The reason is that the history is never cleared, so there&#039;s simply too much data to show and it crashes the CLI - so don&#039;t bother trying to view it. That does make it hard to see whether problems are occurring, so you just need to watch the scrub and confirm it reaches 100%. You will also notice that on some servers there&#039;s no tail of messages; again, this is because no data about the completion of the scrub is logged there. The thing to do here is to go into the CLI and periodically run &amp;lt;tt&amp;gt;show tasks&amp;lt;/tt&amp;gt; to monitor scrub progress.&lt;br /&gt;
&lt;br /&gt;
See [[RAIC_CLI#Adaptec|Adaptec RAID CLI Reference]] for more details on how to use the CLI.&lt;br /&gt;
&lt;br /&gt;
=== DELL (LSI-based) SAS controllers ===&lt;br /&gt;
&lt;br /&gt;
Here&#039;s what the output looks like when running verify.sh on a LSI-based card:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jail2 /mnt/data2# sh /root/verify.sh&lt;br /&gt;
&lt;br /&gt;
Adapter #0&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 0&lt;br /&gt;
Device Id: 0&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018396142&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 0(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DESN&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 1&lt;br /&gt;
Device Id: 1&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018395db2&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 1(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DERV&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 2&lt;br /&gt;
Device Id: 2&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50006eece89&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 2(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300555SS     T2113LM4BFBZ&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 3&lt;br /&gt;
Device Id: 3&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50006eee035&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 3(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300555SS     T2113LM4BGF7&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 4&lt;br /&gt;
Device Id: 4&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50004bd7ea5&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 4(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300656SS     HS093QP0G8SW&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 5&lt;br /&gt;
Device Id: 5&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e01f1c4112&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 5(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MBA3300RC       D306BJ15P9201W06&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 0 (Target Id: 0)&lt;br /&gt;
Name:&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:139392MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 1 (Target Id: 1)&lt;br /&gt;
Name:MIRROR1&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 2 (Target Id: 2)&lt;br /&gt;
Name:MIRROR2&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
Battery FRU     : N/A&lt;br /&gt;
Battery Warning                  : Enabled&lt;br /&gt;
Memory Correctable Errors   : 0&lt;br /&gt;
Memory Uncorrectable Errors : 0&lt;br /&gt;
BBU             : Present&lt;br /&gt;
BBU                             : Yes&lt;br /&gt;
Cache When BBU Bad               : Disabled&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before pressing enter, here&#039;s what we&#039;re looking for:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 0&lt;br /&gt;
Device Id: 0&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018396142&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 0(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DESN&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the output shown for each physical drive in the system. We&#039;re looking to confirm that its Firmware state is &#039;&#039;Online&#039;&#039; and that Media Error Count, Other Error Count, and Predictive Failure Count are all zero (or near zero).&lt;br /&gt;
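&lt;br /&gt;
The same check can be done mechanically (a sketch, not gospel: the MegaCli binary name and path vary by install, and the sample stanzas below are shortened, with a failure invented to show what a hit looks like):&lt;br /&gt;

```shell
# Flag any physical drive that is not Online or has a nonzero error counter.
# Sample stanzas are inlined; on a live box, pipe the real output in instead,
# e.g. `MegaCli -PDList -aALL | awk ...` (binary name/path is an assumption).
out="$(awk -F': ' '
  /^Slot Number/    { slot = $2 }
  /Error Count|Predictive Failure Count/ { if ($2 + 0 > 0) print "slot " slot ": " $0 }
  /^Firmware state/ { if ($2 != "Online") print "slot " slot ": " $0 }
' <<'EOF'
Slot Number: 0
Media Error Count: 0
Other Error Count: 3
Predictive Failure Count: 0
Firmware state: Online
Slot Number: 1
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Firmware state: Failed
EOF
)"
echo "$out"
```

Silence means every drive passed; anything printed deserves a closer look.&lt;br /&gt;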
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 1 (Target Id: 1)&lt;br /&gt;
Name:MIRROR1&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the output for each logical drive. We&#039;re looking for a State of &#039;&#039;Optimal&#039;&#039;. Also confirm &amp;lt;tt&amp;gt;Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&amp;lt;/tt&amp;gt;.&lt;br /&gt;
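&lt;br /&gt;
The logical-drive check scripts the same way (a sketch; the Degraded/WriteThrough stanza below is invented to show a failure, and the &amp;lt;tt&amp;gt;-LDInfo&amp;lt;/tt&amp;gt; invocation is an assumption to verify against your MegaCli build):&lt;br /&gt;

```shell
# Flag any logical drive whose State is not Optimal, plus any whose current
# cache policy has dropped out of WriteBack. Sample stanzas inlined; on a
# live box pipe in `MegaCli -LDInfo -LALL -aALL` instead (an assumption).
out="$(awk -F': ' '
  /^Virtual Disk:/            { vd = $2 + 0 }
  /^State/ && $2 != "Optimal" { print "VD " vd " state: " $2 }
  /^Current Cache Policy/ && $2 !~ /^WriteBack/ { print "VD " vd " cache: " $2 }
' <<'EOF'
Virtual Disk: 0 (Target Id: 0)
State: Optimal
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Virtual Disk: 1 (Target Id: 1)
State: Degraded
Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU
EOF
)"
echo "$out"
```
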
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
Battery FRU     : N/A&lt;br /&gt;
Battery Warning                  : Enabled&lt;br /&gt;
Memory Correctable Errors   : 0&lt;br /&gt;
Memory Uncorrectable Errors : 0&lt;br /&gt;
BBU             : Present&lt;br /&gt;
BBU                             : Yes&lt;br /&gt;
Cache When BBU Bad               : Disabled&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the battery is present and error-free.&lt;br /&gt;
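&lt;br /&gt;
Here too a mechanical check is easy (a minimal sketch over the same summary lines the script prints; sample lines inlined):&lt;br /&gt;

```shell
# BBU sanity: battery reported present, no uncorrectable memory errors, and
# the controller not configured to keep caching through a bad battery.
out="$(awk -F' *: ' '
  $1 == "BBU" && ($2 == "Present" || $2 == "Yes") { present = 1 }
  $1 ~ /Cache When BBU Bad/ && $2 != "Disabled"   { bad = 1 }
  $1 ~ /Uncorrectable/ && $2 + 0 > 0              { bad = 1 }
  END { print ((present && !bad) ? "BBU OK" : "BBU problem") }
' <<'EOF'
Battery FRU     : N/A
Battery Warning                  : Enabled
Memory Correctable Errors   : 0
Memory Uncorrectable Errors : 0
BBU             : Present
BBU                             : Yes
Cache When BBU Bad               : Disabled
EOF
)"
echo "$out"
```
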
&lt;br /&gt;
If all that checks out, you&#039;re ready to proceed with the verify. After pressing enter, the verify starts and here&#039;s what you see:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Start Check Consistency on Virtual Drive 0 (target id: 0) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
Start Check Consistency on Virtual Drive 1 (target id: 1) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
Start Check Consistency on Virtual Drive 2 (target id: 2) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
  Check Consistency&lt;br /&gt;
&lt;br /&gt;
 Progress of Virtual Drives...&lt;br /&gt;
&lt;br /&gt;
  Virtual Drive #              Percent Complete                       Time Elps&lt;br /&gt;
          0         ░░░░░░░░░░░░░░░░░░░░░░░00 %░░░░░░░░░░░░░░░░░░░░░░░ 00:00:03&lt;br /&gt;
          1         ░░░░░░░░░░░░░░░░░░░░░░░00 %░░░░░░░░░░░░░░░░░░░░░░░ 00:00:02&lt;br /&gt;
          2         ░░░░░░░░░░░░░░░░░░░░░░░00 %░░░░░░░░░░░░░░░░░░░░░░░ 00:00:01&lt;br /&gt;
&lt;br /&gt;
    Press &amp;lt;ESC&amp;gt; key to quit...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The progress for each drive is displayed until all drives have completed the verify. We just want to make sure each drive runs to completion. No follow-up is needed, though the controller presumably keeps a log or event history with more detail.&lt;br /&gt;
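&lt;br /&gt;
The start-up messages themselves are also worth checking mechanically: every virtual drive should report &#039;&#039;Success&#039;&#039; and every exit code should be 0x00 (a sketch; sample lines inlined from the output above):&lt;br /&gt;

```shell
# Confirm every consistency check actually started: each Start line must say
# Success and each Exit Code must be 0x00. Sample lines inlined from above.
out="$(awk '
  /^Start Check Consistency/ && !/Success/ { fail = 1 }
  /^Exit Code/ && $NF != "0x00"            { fail = 1 }
  END { print (fail ? "a check failed to start" : "all consistency checks started") }
' <<'EOF'
Start Check Consistency on Virtual Drive 0 (target id: 0) Success.
Exit Code: 0x00
Start Check Consistency on Virtual Drive 1 (target id: 1) Success.
Exit Code: 0x00
Start Check Consistency on Virtual Drive 2 (target id: 2) Success.
Exit Code: 0x00
EOF
)"
echo "$out"
```
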
&lt;br /&gt;
You will notice that jail7 does not run a verify; that&#039;s intentional. The last time we tried it, it crashed the system, so on jail7 the verify must be run from the controller BIOS (which means taking the system offline for a couple of hours).&lt;br /&gt;
&lt;br /&gt;
See [[RAIC_CLI#LSI|LSI RAID CLI Reference]] for more details on how to use the CLI.&lt;br /&gt;
&lt;br /&gt;
=== LSI-based controllers (megaraid) ===&lt;br /&gt;
&lt;br /&gt;
There is a CLI for this, but it&#039;s easier to use the curses GUI app, &amp;lt;tt&amp;gt;megamgr&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Currently only on these servers: virt15, virt16, and firewall2&lt;br /&gt;
&lt;br /&gt;
To run:&lt;br /&gt;
&lt;br /&gt;
 # cd /usr/local/sbin/; megamgr&lt;br /&gt;
&lt;br /&gt;
Main menu:&lt;br /&gt;
&amp;lt;pre&amp;gt;▓┌──Management Menu──┐▓&lt;br /&gt;
▓│ Configure         │▓&lt;br /&gt;
▓│ Initialize        │▓&lt;br /&gt;
▓│ Objects           │▓&lt;br /&gt;
▓│ Rebuild           │▓&lt;br /&gt;
▓│ Check Consistency │▓&lt;br /&gt;
▓│ Advanced Menu     │▓&lt;br /&gt;
▓└───────────────────┘▓&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before you check consistency, make sure the arrays are healthy.&lt;br /&gt;
&lt;br /&gt;
Objects -&amp;gt; Physical Drive&lt;br /&gt;
&lt;br /&gt;
Then look to make sure they&#039;re all &amp;lt;tt&amp;gt;ONLIN&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓┌────────────Objects - PHYSICAL DRIVE SELECTION MENU─────────────┐▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                                                                │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                         Channel-1                              │▓▓▓▓▓▓&lt;br /&gt;
▓┌──Mana│                     ID ╔══════════════╗x                       │▓▓▓▓▓▓&lt;br /&gt;
▓│ Confi│                       0║* ONLIN A01-01║                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Initi│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Objec│                       1║* ONLIN A01-02║                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Rebui│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Check│                       2║* ONLIN A02-01║                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Advan│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓└──────│                       3║* ONLIN A02-02║                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                       4║* ONLIN A03-01║                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                       5║* ONLIN A03-02║■                       │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                       6║*             ║                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╚══════════════╝x                       │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓└────────────────────────────────────────────────────────────────┘▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓┌──────────────────────────────────────────────────────────────┐▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│Ch-1 ID-5  DISK      140013MB  SEAGATE  ST3146707LC      0003 │▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓└──────────────────────────────────────────────────────────────┘▓▓▓▓▓▓▓▓&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once that&#039;s done, hit escape once, then the back arrow, to move back to the Objects menu.&lt;br /&gt;
Then select Objects -&amp;gt; Logical Drive -&amp;gt; Logical Drive 1 -&amp;gt; Check Consistency -&amp;gt; YES&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┌─Logical Drives(02)─┐▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│ Logical Drive 1    │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓┌────Objects───│ Logical Drive 2    │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓┌──Managemen│ Adapter      └────────────────────┘▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Configure │ Logical Drive  │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Initialize│ Physical Drive │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Objects   │ Channel        │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Rebuild   └────────────────┘▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Check Consistency │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Advanced Menu     │▓▓▓▓▓▓▓┌────Logical Drive 1─────┐▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓└───────────────────┘▓▓▓▓▓▓▓│ Initialize    ┌─Check Consistency-1  ?─┐▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│ Check Consiste│   YES                  │▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│ View/Update Pa│   NO                   │▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓└───────────────└────────────────────────┘▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┌────────────────┐▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│Select YES Or NO│▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓└────────────────┘▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then watch the progress. When it&#039;s done, escape back to Logical Drive and repeat for Logical Drive 2. If you ^C or accidentally escape out, you can get back in by running the same commands and watch the progress again (it won&#039;t restart the check).&lt;br /&gt;
&lt;br /&gt;
You can exit megamgr by escaping out or with ^C.&lt;br /&gt;
&lt;br /&gt;
=== 3ware ===&lt;br /&gt;
&lt;br /&gt;
We are using 3ware controllers on backup1 &amp;amp; backup2. Running the verify script gives different output depending on the controller model:&lt;br /&gt;
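&lt;br /&gt;
Since the thing to catch in this output is a unit that is not OK, a small filter helps when skimming it (a sketch; the sample lines are taken from the backup2 output below):&lt;br /&gt;

```shell
# Print only units that are not OK in old-style tw_cli summary output.
# Sample lines inlined; on a live box, pipe the verify script output in.
out="$(awk '/^[ ]*Unit [0-9]+:/ && !/: OK$/ { sub(/^[ ]+/, ""); print }' <<'EOF'
        Unit 0: JBOD 186.31 GB ( 390721968 blocks): OK
        Unit 1: RAID 5 465.77 GB ( 976790016 blocks): DEGRADED
        Unit 5: RAID 5 698.65 GB ( 1465185024 blocks): DEGRADED
EOF
)"
echo "$out"
```

A DEGRADED unit usually means the array needs a rebuild before a verify makes sense.&lt;br /&gt;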
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
backup2 /d2# sh /root/verify.sh&lt;br /&gt;
Controller: c0&lt;br /&gt;
-------------&lt;br /&gt;
Driver:   1.50.01.002&lt;br /&gt;
Model:    7500-8&lt;br /&gt;
FW:       FE7X 1.05.00.068&lt;br /&gt;
BIOS:     BE7X 1.08.00.048&lt;br /&gt;
Monitor:  ME7X 1.01.00.040&lt;br /&gt;
Serial #: F11605A3180172&lt;br /&gt;
PCB:      Rev3&lt;br /&gt;
PCHIP:    1.30-33&lt;br /&gt;
ACHIP:    3.20&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# of units: 3&lt;br /&gt;
        Unit 0: JBOD 186.31 GB ( 390721968 blocks): OK&lt;br /&gt;
        Unit 1: RAID 5 465.77 GB ( 976790016 blocks): DEGRADED&lt;br /&gt;
        Unit 5: RAID 5 698.65 GB ( 1465185024 blocks): DEGRADED&lt;br /&gt;
&lt;br /&gt;
# of ports: 8&lt;br /&gt;
        Port 0: WDC WD2000JB-00KFA0 WD-WCAMT1451690 186.31 GB (390721968 blocks): OK(unit 0)&lt;br /&gt;
        Port 1: WDC WD2500JB-00GVC0 WD-WCAL78219488 232.88 GB (488397168 blocks): OK(unit 1)&lt;br /&gt;
        Port 2: WDC WD2000  0.00 MB (0 blocks): OK(NO UNIT)&lt;br /&gt;
        Port 3: WDC WD2500JB-00GVC0 WD-WMAL73882417 232.88 GB (488397168 blocks): OK(unit 1)&lt;br /&gt;
        Port 4: WDC WD2000  0.00 MB (0 blocks): OK(NO UNIT)&lt;br /&gt;
        Port 5: WDC WD2500JB-00GVA0 WD-WMAL71338097 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
        Port 6: WDC WD2500JB-32EVA0 WD-WMAEH1301595 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
        Port 7: WDC WD2500JB-00GVC0 WD-WCAL78165566 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
Controller: c1&lt;br /&gt;
-------------&lt;br /&gt;
Driver:   1.50.01.002&lt;br /&gt;
Model:    7500-8&lt;br /&gt;
FW:       FE7X 1.05.00.068&lt;br /&gt;
BIOS:     BE7X 1.08.00.048&lt;br /&gt;
Monitor:  ME7X 1.01.00.040&lt;br /&gt;
Serial #: F11605A3180167&lt;br /&gt;
PCB:      Rev3&lt;br /&gt;
PCHIP:    1.30-33&lt;br /&gt;
ACHIP:    3.20&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# of units: 2&lt;br /&gt;
        Unit 0: RAID 5 698.65 GB ( 1465185024 blocks): OK&lt;br /&gt;
        Unit 4: RAID 5 698.65 GB ( 1465185024 blocks): OK&lt;br /&gt;
&lt;br /&gt;
# of ports: 8&lt;br /&gt;
        Port 0: WDC WD2500JB-00GVA0 WD-WMAL71301258 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 1: WDC WD2500JB-00GVA0 WD-WMAL71322705 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 2: WDC WD2500JB-00GVA0 WD-WMAL71945050 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 3: WDC WD2500JB-00GVA0 WD-WMAL71316201 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 4: WDC WD2500JB-00GVC0 WD-WCAL78323749 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
        Port 5: WDC WD3200AAJB-00J3A0 WD-WCAV2V689068 298.09 GB (625142448 blocks): OK(unit 4)&lt;br /&gt;
        Port 6: WDC WD2500JB-00GVC0 WD-WCAL78234420 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
        Port 7: WDC WD2500JB-00GVC0 WD-WCAL78592213 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
backup2 /d2#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On backup2, the script only reports status and doesn&#039;t start a verify; just confirm everything shows OK.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@backup3 ~]# sh /root/verify.sh&lt;br /&gt;
/c2 Driver Version = 1.26.02.002&lt;br /&gt;
/c2 Model = 8006-2LP&lt;br /&gt;
/c2 Available Memory = 512KB&lt;br /&gt;
/c2 Firmware Version = FE8S 1.05.00.068&lt;br /&gt;
/c2 Bios Version = BE7X 1.08.00.048&lt;br /&gt;
/c2 Boot Loader Version = ME7X 1.01.00.040&lt;br /&gt;
/c2 Serial Number = L018501C6481395&lt;br /&gt;
/c2 PCB Version = Rev5&lt;br /&gt;
/c2 PCHIP Version = 1.30-66&lt;br /&gt;
/c2 ACHIP Version = 3.20&lt;br /&gt;
/c2 Total Optimal Units = 1&lt;br /&gt;
/c2 Not Optimal Units = 0&lt;br /&gt;
&lt;br /&gt;
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u1    RAID-1    OK             -       -       -       931.512   ON     -&lt;br /&gt;
&lt;br /&gt;
Port   Status           Unit   Size        Blocks        Serial&lt;br /&gt;
---------------------------------------------------------------&lt;br /&gt;
p0     OK               u1     931.51 GB   1953525168    WD-WMAW31148820&lt;br /&gt;
p1     OK               u1     931.51 GB   1953525168    WD-WCATR0277515&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ctl  Date                        Severity  Alarm Message&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Sending start verify message to /c2/u1 ... Done.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when done run:&lt;br /&gt;
tw_cli /c2 show alarms&lt;br /&gt;
&lt;br /&gt;
[root@backup3 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On backup3 the script automatically starts the verify; when it&#039;s done, run &amp;lt;tt&amp;gt;tw_cli /c2 show alarms&amp;lt;/tt&amp;gt; as instructed to see the results of the verify.&lt;br /&gt;
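&lt;br /&gt;
The alarm log can get long, so a quick tally by severity makes it easier to see whether a run was clean (a sketch; sample lines taken from the backup1 log below):&lt;br /&gt;

```shell
# Count alarm-log entries by severity (field 7 in tw_cli alarm output).
# Sample lines inlined; on a live box pipe `tw_cli /c2 show alarms` in.
out="$(awk '/^c[0-9]/ { n[$7]++ } END { for (s in n) print s, n[s] }' <<'EOF' | sort
c0   [Sat May 26 05:24:07 2012]  WARNING   Sector repair completed: port=1, LBA=0x3F83862
c0   [Sat May 26 05:25:09 2012]  WARNING   Verify fixed data/parity mismatch: unit=0
c0   [Sat May 26 22:00:30 2012]  INFO      Verify completed: unit=0
EOF
)"
echo "$out"
```
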
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@backup1 /data/deprecated]# sh /root/verify.sh&lt;br /&gt;
/c0 Driver Version = 2.26.02.010&lt;br /&gt;
/c0 Model = 9650SE-8LPML&lt;br /&gt;
/c0 Available Memory = 224MB&lt;br /&gt;
/c0 Firmware Version = FE9X 4.06.00.004&lt;br /&gt;
/c0 Bios Version = BE9X 4.05.00.015&lt;br /&gt;
/c0 Boot Loader Version = BL9X 3.08.00.001&lt;br /&gt;
/c0 Serial Number = L326025A8270177&lt;br /&gt;
/c0 PCB Version = Rev 032&lt;br /&gt;
/c0 PCHIP Version = 2.00&lt;br /&gt;
/c0 ACHIP Version = 1.90&lt;br /&gt;
/c0 Number of Ports = 8&lt;br /&gt;
/c0 Number of Drives = 6&lt;br /&gt;
/c0 Number of Units = 1&lt;br /&gt;
/c0 Total Optimal Units = 1&lt;br /&gt;
/c0 Not Optimal Units = 0&lt;br /&gt;
/c0 JBOD Export Policy = off&lt;br /&gt;
/c0 Disk Spinup Policy = 1&lt;br /&gt;
/c0 Spinup Stagger Time Policy (sec) = 1&lt;br /&gt;
/c0 Auto-Carving Policy = off&lt;br /&gt;
/c0 Auto-Carving Size = 2048 GB&lt;br /&gt;
/c0 Auto-Rebuild Policy = on&lt;br /&gt;
/c0 Controller Bus Type = PCIe&lt;br /&gt;
/c0 Controller Bus Width = 1 lane&lt;br /&gt;
/c0 Controller Bus Speed = 2.5 Gbps/lane&lt;br /&gt;
&lt;br /&gt;
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u0    RAID-5    OK             -       -       64K     4656.56   ON     ON&lt;br /&gt;
&lt;br /&gt;
Port   Status           Unit   Size        Blocks        Serial&lt;br /&gt;
---------------------------------------------------------------&lt;br /&gt;
p0     OK               u0     931.51 GB   1953525168    9QJ1Y017&lt;br /&gt;
p1     DEVICE-ERROR     u0     931.51 GB   1953525168    9QJ1ZN07&lt;br /&gt;
p2     OK               u0     931.51 GB   1953525168    9QJ2XK1R&lt;br /&gt;
p3     OK               u0     931.51 GB   1953525168    9QJ2010B&lt;br /&gt;
p4     OK               u0     1.36 TB     2930277168    6XW0L36T&lt;br /&gt;
p5     OK               u0     931.51 GB   1953525168    WD-WMATV2444836&lt;br /&gt;
p6     NOT-PRESENT      -      -           -             -&lt;br /&gt;
p7     NOT-PRESENT      -      -           -             -&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ctl  Date                        Severity  Alarm Message&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
c0   [Sat May 12 11:27:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x6AE571C&lt;br /&gt;
c0   [Sat May 12 19:16:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x40E62A23&lt;br /&gt;
c0   [Sat May 12 21:40:56 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon May 14 00:53:53 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFA7&lt;br /&gt;
c0   [Mon May 14 00:58:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAA&lt;br /&gt;
c0   [Mon May 14 04:35:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x8FEF2CF&lt;br /&gt;
c0   [Mon May 14 04:38:22 2012]  WARNING   Sector repair completed: port=0, LBA=0x8FEF2D1&lt;br /&gt;
c0   [Tue May 15 22:53:46 2012]  WARNING   Sector repair completed: port=0, LBA=0x13C2622&lt;br /&gt;
c0   [Wed May 16 00:39:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A67F&lt;br /&gt;
c0   [Wed May 16 00:39:37 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A685&lt;br /&gt;
c0   [Wed May 16 00:47:18 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A687&lt;br /&gt;
c0   [Sat May 19 00:01:44 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat May 19 04:46:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A68E&lt;br /&gt;
c0   [Sat May 19 13:37:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAC&lt;br /&gt;
c0   [Sat May 19 13:37:28 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAE&lt;br /&gt;
c0   [Sat May 19 13:37:47 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFB1&lt;br /&gt;
c0   [Sat May 19 13:38:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFB3&lt;br /&gt;
c0   [Sat May 19 21:47:45 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed May 23 12:21:41 2012]  INFO      Cache synchronization completed: unit=0&lt;br /&gt;
c0   [Fri May 25 00:08:19 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA76C&lt;br /&gt;
c0   [Fri May 25 00:08:34 2012]  WARNING   Sector repair completed: port=0, LBA=0x12E4901&lt;br /&gt;
c0   [Fri May 25 00:09:33 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA773&lt;br /&gt;
c0   [Fri May 25 00:39:12 2012]  WARNING   Sector repair completed: port=0, LBA=0x42C597B&lt;br /&gt;
c0   [Sat May 26 00:01:45 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat May 26 00:42:05 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1AC&lt;br /&gt;
c0   [Sat May 26 00:51:43 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1AE&lt;br /&gt;
c0   [Sat May 26 01:54:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x2F0D302&lt;br /&gt;
c0   [Sat May 26 02:06:38 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA777&lt;br /&gt;
c0   [Sat May 26 02:07:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x12E48FE&lt;br /&gt;
c0   [Sat May 26 04:20:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x2F0D306&lt;br /&gt;
c0   [Sat May 26 04:32:58 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1B1&lt;br /&gt;
c0   [Sat May 26 04:33:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1B3&lt;br /&gt;
c0   [Sat May 26 04:33:44 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1BA&lt;br /&gt;
c0   [Sat May 26 05:24:07 2012]  WARNING   Sector repair completed: port=1, LBA=0x3F83862&lt;br /&gt;
c0   [Sat May 26 05:25:09 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 06:08:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x4CDC6A2&lt;br /&gt;
c0   [Sat May 26 09:49:35 2012]  WARNING   Sector repair completed: port=1, LBA=0x6CACD4A&lt;br /&gt;
c0   [Sat May 26 18:10:44 2012]  WARNING   Sector repair completed: port=1, LBA=0x18F425EA&lt;br /&gt;
c0   [Sat May 26 19:45:40 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:22:52 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:15 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:22 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:35 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:41 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:49 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:57 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:24:02 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:54:41 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 22:00:30 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Jun  2 00:01:43 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun  2 00:30:17 2012]  WARNING   Sector repair completed: port=0, LBA=0x2B911E4&lt;br /&gt;
c0   [Sat Jun  2 00:50:57 2012]  WARNING   Sector repair completed: port=0, LBA=0x5A807CA6&lt;br /&gt;
c0   [Sat Jun  2 04:13:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x2D18291&lt;br /&gt;
c0   [Sat Jun  2 04:13:35 2012]  WARNING   Sector repair completed: port=0, LBA=0x2D1829F&lt;br /&gt;
c0   [Sat Jun  2 21:48:02 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Jun  4 04:40:34 2012]  WARNING   Sector repair completed: port=1, LBA=0x4AF8098F&lt;br /&gt;
c0   [Tue Jun  5 00:28:19 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5CD&lt;br /&gt;
c0   [Tue Jun  5 00:33:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5CF&lt;br /&gt;
c0   [Thu Jun  7 00:34:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B5F&lt;br /&gt;
c0   [Thu Jun  7 00:38:50 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B61&lt;br /&gt;
c0   [Fri Jun  8 00:07:13 2012]  WARNING   Sector repair completed: port=0, LBA=0xC131F6B&lt;br /&gt;
c0   [Sat Jun  9 00:01:41 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun  9 00:29:11 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C7614D&lt;br /&gt;
c0   [Sat Jun  9 00:38:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C76152&lt;br /&gt;
c0   [Sat Jun  9 04:02:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5D1&lt;br /&gt;
c0   [Sat Jun  9 04:02:52 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5D3&lt;br /&gt;
c0   [Sat Jun  9 04:07:32 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E12&lt;br /&gt;
c0   [Sat Jun  9 04:07:57 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E15&lt;br /&gt;
c0   [Sat Jun  9 04:08:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E17&lt;br /&gt;
c0   [Sat Jun  9 04:08:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E19&lt;br /&gt;
c0   [Sat Jun  9 04:15:04 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B64&lt;br /&gt;
c0   [Sat Jun  9 04:15:26 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B66&lt;br /&gt;
c0   [Sat Jun  9 04:15:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B68&lt;br /&gt;
c0   [Sat Jun  9 04:15:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B6C&lt;br /&gt;
c0   [Sat Jun  9 04:16:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B6E&lt;br /&gt;
c0   [Sat Jun  9 21:48:52 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Jun 14 00:40:10 2012]  WARNING   Sector repair completed: port=0, LBA=0x334F14B&lt;br /&gt;
c0   [Sat Jun 16 00:01:38 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 16 21:16:19 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Jun 19 02:03:43 2012]  WARNING   Sector repair completed: port=1, LBA=0xFE41EAD&lt;br /&gt;
c0   [Wed Jun 20 02:30:02 2012]  WARNING   Sector repair completed: port=1, LBA=0xD99145C&lt;br /&gt;
c0   [Sat Jun 23 00:01:36 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 23 04:27:04 2012]  WARNING   Sector repair completed: port=1, LBA=0x2FAD311&lt;br /&gt;
c0   [Sat Jun 23 06:52:38 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC8D&lt;br /&gt;
c0   [Sat Jun 23 06:53:03 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC91&lt;br /&gt;
c0   [Sat Jun 23 06:53:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC94&lt;br /&gt;
c0   [Sat Jun 23 17:00:22 2012]  WARNING   Sector repair completed: port=1, LBA=0xF9AC7C9&lt;br /&gt;
c0   [Sat Jun 23 21:15:19 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Jun 30 00:01:34 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 30 05:24:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FAA9E7&lt;br /&gt;
c0   [Sat Jun 30 14:49:39 2012]  WARNING   Sector repair completed: port=1, LBA=0x869931C&lt;br /&gt;
c0   [Sat Jun 30 21:31:05 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Jul  3 03:40:25 2012]  WARNING   Sector repair completed: port=1, LBA=0xD36C7F7&lt;br /&gt;
c0   [Fri Jul  6 02:50:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x3562470&lt;br /&gt;
c0   [Fri Jul  6 22:18:26 2012]  WARNING   Sector repair completed: port=1, LBA=0x3563173&lt;br /&gt;
c0   [Sat Jul  7 00:01:31 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul  7 00:50:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x76EE88&lt;br /&gt;
c0   [Sat Jul  7 00:50:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x76EE8F&lt;br /&gt;
c0   [Sat Jul  7 21:39:36 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sun Jul  8 02:51:05 2012]  WARNING   Sector repair completed: port=0, LBA=0x67759D&lt;br /&gt;
c0   [Sun Jul  8 02:53:55 2012]  WARNING   Sector repair completed: port=0, LBA=0x67759B&lt;br /&gt;
c0   [Tue Jul 10 16:17:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x15C8C695&lt;br /&gt;
c0   [Wed Jul 11 22:51:22 2012]  WARNING   Sector repair completed: port=1, LBA=0x355BBD0&lt;br /&gt;
c0   [Sat Jul 14 00:01:28 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 14 01:33:40 2012]  WARNING   Sector repair completed: port=1, LBA=0x1333BCF4&lt;br /&gt;
c0   [Sat Jul 14 03:36:23 2012]  WARNING   Sector repair completed: port=1, LBA=0x2174773&lt;br /&gt;
c0   [Sat Jul 14 11:26:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x7429AB7&lt;br /&gt;
c0   [Sat Jul 14 16:53:50 2012]  WARNING   Sector repair completed: port=1, LBA=0xA17EB3F&lt;br /&gt;
c0   [Sat Jul 14 21:19:25 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Jul 18 05:08:47 2012]  WARNING   Sector repair completed: port=1, LBA=0x17D62EDC&lt;br /&gt;
c0   [Wed Jul 18 05:14:15 2012]  WARNING   Sector repair completed: port=1, LBA=0x17D62EE1&lt;br /&gt;
c0   [Thu Jul 19 03:24:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x7733C3D&lt;br /&gt;
c0   [Thu Jul 19 03:25:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x773CEA5&lt;br /&gt;
c0   [Thu Jul 19 03:28:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x7733C42&lt;br /&gt;
c0   [Thu Jul 19 03:28:41 2012]  WARNING   Sector repair completed: port=0, LBA=0x773CEAF&lt;br /&gt;
c0   [Sat Jul 21 00:01:26 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 21 03:07:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC6936&lt;br /&gt;
c0   [Sat Jul 21 03:07:52 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC6938&lt;br /&gt;
c0   [Sat Jul 21 03:08:11 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC693A&lt;br /&gt;
c0   [Sat Jul 21 16:43:56 2012]  WARNING   Sector repair completed: port=0, LBA=0xD04C914&lt;br /&gt;
c0   [Sat Jul 21 16:45:31 2012]  WARNING   Sector repair completed: port=1, LBA=0xD456973&lt;br /&gt;
c0   [Sat Jul 21 21:14:29 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Jul 25 03:37:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x1F8E6C43&lt;br /&gt;
c0   [Sat Jul 28 00:01:24 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 28 01:45:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x11584AD&lt;br /&gt;
c0   [Sat Jul 28 18:54:25 2012]  WARNING   Sector repair completed: port=1, LBA=0x447C3E6C&lt;br /&gt;
c0   [Sat Jul 28 21:13:46 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug  1 03:20:11 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FEF&lt;br /&gt;
c0   [Fri Aug  3 00:50:03 2012]  WARNING   Sector repair completed: port=0, LBA=0xCED0ACA&lt;br /&gt;
c0   [Sat Aug  4 00:01:22 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug  4 00:52:51 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF3&lt;br /&gt;
c0   [Sat Aug  4 00:53:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF5&lt;br /&gt;
c0   [Sat Aug  4 00:53:33 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF7&lt;br /&gt;
c0   [Sat Aug  4 00:53:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF9&lt;br /&gt;
c0   [Sat Aug  4 00:54:00 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFB&lt;br /&gt;
c0   [Sat Aug  4 00:54:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFD&lt;br /&gt;
c0   [Sat Aug  4 00:54:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFF&lt;br /&gt;
c0   [Sat Aug  4 04:43:12 2012]  WARNING   Sector repair completed: port=1, LBA=0x16974289&lt;br /&gt;
c0   [Sat Aug  4 04:58:17 2012]  WARNING   Sector repair completed: port=1, LBA=0x1697428E&lt;br /&gt;
c0   [Sat Aug  4 20:54:53 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug  8 03:21:55 2012]  ERROR     Drive timeout detected: port=1&lt;br /&gt;
c0   [Wed Aug  8 15:31:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x1A366CD3&lt;br /&gt;
c0   [Sat Aug 11 00:01:21 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 11 20:40:51 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Aug 16 05:10:55 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C22593&lt;br /&gt;
c0   [Sat Aug 18 00:01:18 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 18 03:00:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C225A5&lt;br /&gt;
c0   [Sat Aug 18 03:43:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x23EE91E&lt;br /&gt;
c0   [Sat Aug 18 03:43:23 2012]  WARNING   Sector repair completed: port=1, LBA=0x23EE920&lt;br /&gt;
c0   [Sat Aug 18 17:00:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x137D066A&lt;br /&gt;
c0   [Sat Aug 18 17:00:29 2012]  WARNING   Sector repair completed: port=1, LBA=0x137D066D&lt;br /&gt;
c0   [Sat Aug 18 21:13:01 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug 22 01:36:08 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A0F&lt;br /&gt;
c0   [Wed Aug 22 01:37:42 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A13&lt;br /&gt;
c0   [Fri Aug 24 04:01:36 2012]  WARNING   Sector repair completed: port=1, LBA=0x55C1A5DF&lt;br /&gt;
c0   [Fri Aug 24 05:02:06 2012]  WARNING   Sector repair completed: port=1, LBA=0xCE3378A&lt;br /&gt;
c0   [Sat Aug 25 00:01:17 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 25 00:31:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x50F65D&lt;br /&gt;
c0   [Sat Aug 25 00:39:52 2012]  WARNING   Sector repair completed: port=0, LBA=0x678FF4&lt;br /&gt;
c0   [Sat Aug 25 03:43:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A15&lt;br /&gt;
c0   [Sat Aug 25 03:43:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A19&lt;br /&gt;
c0   [Sat Aug 25 03:43:58 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A1B&lt;br /&gt;
c0   [Sat Aug 25 03:44:30 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A21&lt;br /&gt;
c0   [Sat Aug 25 20:58:14 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug 29 04:57:15 2012]  WARNING   Sector repair completed: port=1, LBA=0xF3957EB&lt;br /&gt;
c0   [Sat Sep  1 00:01:15 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep  1 03:21:52 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC86&lt;br /&gt;
c0   [Sat Sep  1 03:22:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC88&lt;br /&gt;
c0   [Sat Sep  1 03:22:34 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC8A&lt;br /&gt;
c0   [Sat Sep  1 03:22:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC8C&lt;br /&gt;
c0   [Sat Sep  1 17:17:22 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD1&lt;br /&gt;
c0   [Sat Sep  1 17:17:45 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD3&lt;br /&gt;
c0   [Sat Sep  1 17:18:04 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD5&lt;br /&gt;
c0   [Sat Sep  1 21:36:56 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Sep  6 00:07:30 2012]  WARNING   Sector repair completed: port=0, LBA=0xDA3C64B&lt;br /&gt;
c0   [Thu Sep  6 00:32:56 2012]  WARNING   Sector repair completed: port=1, LBA=0x6BBA816&lt;br /&gt;
c0   [Sat Sep  8 00:01:13 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep  8 00:09:56 2012]  WARNING   Sector repair completed: port=0, LBA=0xDEBC958&lt;br /&gt;
c0   [Sat Sep  8 04:38:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x38D254F&lt;br /&gt;
c0   [Sat Sep  8 20:44:50 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Sep 10 01:26:34 2012]  WARNING   Sector repair completed: port=1, LBA=0xFFD8D5E&lt;br /&gt;
c0   [Wed Sep 12 00:33:48 2012]  WARNING   Sector repair completed: port=1, LBA=0xE8DB928&lt;br /&gt;
c0   [Wed Sep 12 00:36:33 2012]  WARNING   Sector repair completed: port=1, LBA=0x6D49411&lt;br /&gt;
c0   [Fri Sep 14 01:59:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x1467F1C&lt;br /&gt;
c0   [Fri Sep 14 02:08:27 2012]  WARNING   Sector repair completed: port=1, LBA=0x14C8ABD&lt;br /&gt;
c0   [Fri Sep 14 03:54:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x1580C915&lt;br /&gt;
c0   [Sat Sep 15 00:01:11 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 15 02:38:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178973&lt;br /&gt;
c0   [Sat Sep 15 02:59:02 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178975&lt;br /&gt;
c0   [Sat Sep 15 04:47:08 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA0356&lt;br /&gt;
c0   [Sat Sep 15 04:47:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA0359&lt;br /&gt;
c0   [Sat Sep 15 10:41:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x6DFD1EC&lt;br /&gt;
c0   [Sat Sep 15 13:25:23 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD100&lt;br /&gt;
c0   [Sat Sep 15 13:25:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD104&lt;br /&gt;
c0   [Sat Sep 15 13:25:54 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD106&lt;br /&gt;
c0   [Sat Sep 15 17:10:50 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178977&lt;br /&gt;
c0   [Sat Sep 15 20:59:57 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Sep 18 01:17:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x803B05B&lt;br /&gt;
c0   [Sat Sep 22 00:01:10 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 22 20:54:31 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Sep 25 01:56:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x26E3909&lt;br /&gt;
c0   [Sat Sep 29 00:01:08 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 29 02:04:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x146AC03&lt;br /&gt;
c0   [Sat Sep 29 10:58:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x6D4EB0E&lt;br /&gt;
c0   [Sat Sep 29 10:59:02 2012]  WARNING   Sector repair completed: port=0, LBA=0x6D4EB14&lt;br /&gt;
c0   [Sat Sep 29 11:22:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x6F79623&lt;br /&gt;
c0   [Sat Sep 29 13:50:48 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D65E&lt;br /&gt;
c0   [Sat Sep 29 13:51:11 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D661&lt;br /&gt;
c0   [Sat Sep 29 13:51:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D663&lt;br /&gt;
c0   [Sat Sep 29 20:57:34 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Oct  1 04:47:24 2012]  WARNING   Sector repair completed: port=0, LBA=0xC5BC6F2&lt;br /&gt;
c0   [Tue Oct  2 02:00:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x1547667&lt;br /&gt;
c0   [Tue Oct  2 02:01:56 2012]  WARNING   Sector repair completed: port=0, LBA=0x154766F&lt;br /&gt;
c0   [Tue Oct  2 05:02:31 2012]  WARNING   Sector repair completed: port=1, LBA=0xD67D054&lt;br /&gt;
c0   [Tue Oct  2 05:04:14 2012]  WARNING   Sector repair completed: port=1, LBA=0xD67D056&lt;br /&gt;
c0   [Wed Oct  3 01:22:12 2012]  WARNING   Sector repair completed: port=1, LBA=0x12AAF8CA&lt;br /&gt;
c0   [Thu Oct  4 04:29:22 2012]  WARNING   Sector repair completed: port=0, LBA=0x13E6F992&lt;br /&gt;
c0   [Thu Oct  4 05:10:51 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C252A4&lt;br /&gt;
c0   [Sat Oct  6 00:01:07 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct  6 19:41:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x5A5C3AE8&lt;br /&gt;
c0   [Sat Oct  6 21:01:05 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Oct  8 00:32:06 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C60D3E&lt;br /&gt;
c0   [Tue Oct  9 03:51:03 2012]  WARNING   Sector repair completed: port=1, LBA=0x89B5EC9&lt;br /&gt;
c0   [Thu Oct 11 04:21:17 2012]  WARNING   Sector repair completed: port=1, LBA=0x13F85833&lt;br /&gt;
c0   [Sat Oct 13 00:01:05 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 13 05:12:40 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA5134&lt;br /&gt;
c0   [Sat Oct 13 21:08:35 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Oct 16 03:53:50 2012]  WARNING   Sector repair completed: port=1, LBA=0x148AA1BD&lt;br /&gt;
c0   [Thu Oct 18 03:20:30 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C8DABCB&lt;br /&gt;
c0   [Thu Oct 18 04:52:50 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879057&lt;br /&gt;
c0   [Sat Oct 20 00:01:04 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 20 02:19:25 2012]  WARNING   Sector repair completed: port=1, LBA=0x174B012&lt;br /&gt;
c0   [Sat Oct 20 03:41:38 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D93B&lt;br /&gt;
c0   [Sat Oct 20 03:42:01 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D93D&lt;br /&gt;
c0   [Sat Oct 20 03:42:40 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D940&lt;br /&gt;
c0   [Sat Oct 20 03:42:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D942&lt;br /&gt;
c0   [Sat Oct 20 03:43:12 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D944&lt;br /&gt;
c0   [Sat Oct 20 03:43:26 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D948&lt;br /&gt;
c0   [Sat Oct 20 16:37:52 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879060&lt;br /&gt;
c0   [Sat Oct 20 16:38:15 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879062&lt;br /&gt;
c0   [Sat Oct 20 21:00:18 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Oct 20 23:49:01 2012]  WARNING   Sector repair completed: port=1, LBA=0x4473E908&lt;br /&gt;
c0   [Sun Oct 21 03:42:26 2012]  WARNING   Sector repair completed: port=1, LBA=0x175BADD5&lt;br /&gt;
c0   [Tue Oct 23 01:09:04 2012]  WARNING   Sector repair completed: port=1, LBA=0x6E524860&lt;br /&gt;
c0   [Fri Oct 26 03:21:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x802C61&lt;br /&gt;
c0   [Fri Oct 26 04:22:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x176353CD&lt;br /&gt;
c0   [Sat Oct 27 00:01:03 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 27 00:49:35 2012]  WARNING   Sector repair completed: port=0, LBA=0x802C65&lt;br /&gt;
c0   [Sat Oct 27 17:02:24 2012]  WARNING   Sector repair completed: port=1, LBA=0xC1FF26D&lt;br /&gt;
c0   [Sat Oct 27 17:09:06 2012]  WARNING   Sector repair completed: port=0, LBA=0xDF621AD&lt;br /&gt;
c0   [Sat Oct 27 21:30:57 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Oct 30 00:20:46 2012]  WARNING   Sector repair completed: port=0, LBA=0xE9FE2AB&lt;br /&gt;
c0   [Wed Oct 31 02:02:03 2012]  WARNING   Sector repair completed: port=0, LBA=0x1460C25&lt;br /&gt;
c0   [Wed Oct 31 02:04:05 2012]  WARNING   Sector repair completed: port=0, LBA=0x1460C28&lt;br /&gt;
c0   [Thu Nov  1 00:48:34 2012]  WARNING   Sector repair completed: port=1, LBA=0xA7C92BE&lt;br /&gt;
c0   [Thu Nov  1 05:04:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C252C2&lt;br /&gt;
&lt;br /&gt;
[root@backup1 /data/deprecated]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for failed drives and errors. Given the volume of sector repairs above, drives 0 and 1 should probably both be replaced; drive 1 has even logged a timeout error, yet the RAID array still (amazingly) reports healthy. You can also see the weekly automatic verifies.&lt;br /&gt;
&lt;br /&gt;
Note: when rebuilding a degraded array, the CLI will show no progress during the rebuild:&lt;br /&gt;
&amp;lt;pre&amp;gt;Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u0    RAID-5    REBUILDING     0       -       64K     4656.56   OFF    ON&amp;lt;/pre&amp;gt;&lt;br /&gt;
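The status check above can be scripted. A minimal sketch (my own helper, not part of the stock tooling) that flags any unit whose state column is not &lt;tt&gt;OK&lt;/tt&gt; in &lt;tt&gt;tw_cli&lt;/tt&gt;-style output; the sample line mirrors the table shown above, and on a live box you would pipe the real &lt;tt&gt;tw_cli&lt;/tt&gt; output in instead:&lt;br /&gt;

```shell
# check_units: print unit name and state for any unit not in OK state.
# Expects tw_cli-style status lines (unit name in column 1, state in column 3).
check_units() {
  awk '$1 ~ /^u[0-9]+$/ && $3 != "OK" { print $1, $3 }'
}

# Sample status line copied from the table above, standing in for live output.
unit_line='u0    RAID-5    REBUILDING     0       -       64K     4656.56   OFF    ON'
echo "$unit_line" | check_units   # prints: u0 REBUILDING
```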
&lt;br /&gt;
&lt;br /&gt;
=== areca ===&lt;br /&gt;
&lt;br /&gt;
We are using an areca controller on backup3. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@newbackup3 ~]# sh /root/verify.sh&lt;br /&gt;
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State&lt;br /&gt;
===============================================================================&lt;br /&gt;
  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(19.7%)&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
 #  Name             Disks TotalCap  FreeCap DiskChannels       State&lt;br /&gt;
===============================================================================&lt;br /&gt;
 1  Raid Set # 00        6 6000.0GB    0.0GB 123456             Checking&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
Date-Time            Device           Event Type            Elapsed Time Errors&lt;br /&gt;
===============================================================================&lt;br /&gt;
2012-12-05 20:40:58  ARC-1160-VOL#00  Start Checking&lt;br /&gt;
2012-12-01 05:06:04  ARC-1160-VOL#00  Complete Init         027:30:45&lt;br /&gt;
2012-11-30 01:35:19  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-06 01:34:52  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-30 01:33:36  ARC-1160-VOL#00  Stop Initialization   000:31:48&lt;br /&gt;
2012-11-30 01:01:47  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-06 00:58:13  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-30 00:57:26  ARC-1160-VOL#00  Stop Initialization   000:57:07&lt;br /&gt;
2012-11-30 00:00:19  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 23:56:48  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:52:58  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:50:14  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:43:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 23:10:07  ARC-1160-VOL#00  Stop Initialization   000:00:56&lt;br /&gt;
2012-11-29 23:09:11  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 23:08:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 23:08:10  ARC-1160-VOL#00  Stop Initialization   000:20:41&lt;br /&gt;
2012-11-29 22:47:29  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 22:46:59  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:45:55  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:44:53  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:42:06  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:40:50  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 22:40:04  ARC-1160-VOL#00  Stop Initialization   000:24:25&lt;br /&gt;
2012-11-29 22:15:38  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 22:15:11  000:000001215B00 Restart Init LBA Point&lt;br /&gt;
2026-08-05 22:15:10  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 21:56:38  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 21:56:12  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 21:56:04  IDE Channel #03  Device Inserted&lt;br /&gt;
2012-11-29 21:55:13  IDE Channel #04  Device Inserted&lt;br /&gt;
2012-11-29 21:55:03  IDE Channel #02  Device Inserted&lt;br /&gt;
2026-08-05 21:53:09  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:51:46  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:49:56  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:48:29  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:46:29  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:44:49  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:43:01  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:36:25  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:31:18  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:30:08  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:08:40  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:06:11  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:05:14  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:03:58  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:00:56  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:57:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:56:15  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:55:05  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 17:24:36  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 17:22:43  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:50:42  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:47:33  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:43:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:18:52  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:17:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:13:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:10:26  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:09:23  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:08:09  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:07:12  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:05:51  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:04:27  H/W MONITOR      Raid Powered On&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for failed drives and errors. &lt;br /&gt;
&lt;br /&gt;
When it proceeds to verifying, you can confirm with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@newbackup3 ~]# cli64 vsf info&lt;br /&gt;
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State&lt;br /&gt;
===============================================================================&lt;br /&gt;
  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(22.5%)&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
[root@newbackup3 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
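If you want just the percentage rather than the whole table, the &lt;tt&gt;Checking(NN.N%)&lt;/tt&gt; figure can be pulled out of the &lt;tt&gt;cli64 vsf info&lt;/tt&gt; output with sed. A small sketch, using the sample line above in place of the live command:&lt;br /&gt;

```shell
# progress: extract the numeric percentage from a Checking(NN.N%) state field.
progress() {
  sed -n 's/.*Checking(\([0-9.]*\)%).*/\1/p'
}

# Sample volume line copied from the cli64 output above.
vsf_line='  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(22.5%)'
echo "$vsf_line" | progress   # prints: 22.5
```

On a live system this would be &lt;tt&gt;cli64 vsf info | progress&lt;/tt&gt;, e.g. in a watch loop while the check runs.&lt;br /&gt;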
&lt;br /&gt;
== Update OS list ==&lt;br /&gt;
&lt;br /&gt;
# check for any new VZ templates we want to offer: &amp;lt;tt&amp;gt;vzup2date -z&amp;lt;/tt&amp;gt;&lt;br /&gt;
# see if there are any OSes we want to include in our colo install list. Update 2 places: &amp;lt;tt&amp;gt;signup/html/colo_quote.html&amp;lt;/tt&amp;gt; &amp;amp; &amp;lt;tt&amp;gt;signup/html/step1.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
# update the mgmt database (ref_templates table, ref_systems table).&lt;br /&gt;
&lt;br /&gt;
= Infrequent tasks =&lt;br /&gt;
&lt;br /&gt;
== Free up space on gateway ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;newgateway /var/spool# cd clientmqueue/&lt;br /&gt;
newgateway /var/spool/clientmqueue# sh&lt;br /&gt;
# for f in `ls`; do rm $f; done&lt;br /&gt;
exit&amp;lt;/pre&amp;gt;&lt;br /&gt;
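The per-file loop above exists because a plain &lt;tt&gt;rm *&lt;/tt&gt; can fail with "argument list too long" when clientmqueue has piled up. An alternative sketch using &lt;tt&gt;find&lt;/tt&gt;, which handles any number of files; demonstrated here against a throwaway directory rather than the real &lt;tt&gt;/var/spool/clientmqueue&lt;/tt&gt;:&lt;br /&gt;

```shell
# Scratch directory standing in for /var/spool/clientmqueue.
dir=$(mktemp -d)
touch "$dir/dfq1" "$dir/qfq1" "$dir/dfq2"

# -type f -delete removes every regular file without building an argv,
# so it never hits ARG_MAX no matter how large the queue is.
find "$dir" -type f -delete

ls -A "$dir"   # prints nothing: the queue files are gone
rmdir "$dir"
```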
&lt;br /&gt;
== Free up space on mail ==&lt;br /&gt;
&lt;br /&gt;
You can clear out root mail:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /var/log# ll -h /var/mail/root&lt;br /&gt;
-rw-------  1 root  mail    543K Dec 19 13:05 /var/mail/root&lt;br /&gt;
mail /var/log# rm /var/mail/root&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
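A possible variation on the &lt;tt&gt;rm&lt;/tt&gt; above: truncating the mailbox in place keeps the file and its permissions rather than removing it until it is recreated. Shown here on a scratch file, not the real &lt;tt&gt;/var/mail/root&lt;/tt&gt;:&lt;br /&gt;

```shell
# Scratch file standing in for /var/mail/root.
mbox=$(mktemp)
echo "old mail" > "$mbox"

# ': > file' truncates to zero length in place, preserving owner and mode.
: > "$mbox"

wc -c < "$mbox"   # the mailbox is now zero bytes
rm -f "$mbox"
```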
&lt;br /&gt;
Or you can archive mail logs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /var/log# ls -l htt*&lt;br /&gt;
-rw-r--r--  1 root  wheel  297436931 Dec 19 13:26 httpd-access.log&lt;br /&gt;
-rw-r--r--  1 root  wheel    9824324 Jul  4 11:34 httpd-access.log.old.0.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    6884137 Mar 17  2012 httpd-access.log.old.1.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   18557444 Dec  3  2009 httpd-access.log.old.10.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14740263 Jan  9  2007 httpd-access.log.old.11.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14209465 Nov 28  2007 httpd-access.log.old.12.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   16874396 Feb 19  2012 httpd-access.log.old.3.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14554859 Jul 22  2011 httpd-access.log.old.4.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10513227 Feb 18  2011 httpd-access.log.old.5.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    7201946 Oct 29  2010 httpd-access.log.old.6.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10062537 May  6  2010 httpd-access.log.old.7.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10157042 Aug 12  2010 httpd-access.log.old.8.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   11909534 Mar  4  2010 httpd-access.log.old.9.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   59030930 Dec 19 13:01 httpd-error.log&lt;br /&gt;
-rw-r--r--  1 root  wheel    3413134 Mar  4  2010 httpd-error.log.0.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel     795515 May  1  2007 httpd-error.log.1.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    1142153 Nov 30  2007 httpd-error.log.2.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    2325801 Feb 18  2011 httpd-error.log.gz&lt;br /&gt;
&lt;br /&gt;
mail /var/log# sh&lt;br /&gt;
# for f in 12 11 10 9 8 7 6 5 4 3 2 1 0; do g=`echo $f+1|bc`; mv httpd-access.log.old.$f.gz httpd-access.log.old.$g.gz; done&lt;br /&gt;
# mv httpd-access.log httpd-access.log.old.0&lt;br /&gt;
# touch httpd-access.log&lt;br /&gt;
# apachectl restart&lt;br /&gt;
# gzip httpd-access.log.old.0&lt;br /&gt;
&lt;br /&gt;
# for f in 2 1 0; do g=`echo $f+1|bc`; mv httpd-error.log.$f.gz httpd-error.log.$g.gz; done&lt;br /&gt;
# mv httpd-error.log httpd-error.log.0&lt;br /&gt;
# touch httpd-error.log&lt;br /&gt;
# apachectl restart&lt;br /&gt;
# gzip httpd-error.log.0&lt;br /&gt;
# exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
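The rotation above can be sketched as one loop. This version runs against a scratch directory and uses shell arithmetic in place of &lt;tt&gt;bc&lt;/tt&gt;; the &lt;tt&gt;apachectl restart&lt;/tt&gt; step is left as a comment since there is no Apache here (on mail you must restart so Apache reopens the new log file):&lt;br /&gt;

```shell
# Scratch directory standing in for /var/log on mail.
logdir=$(mktemp -d); cd "$logdir"
touch httpd-error.log.0.gz httpd-error.log.1.gz httpd-error.log.2.gz
echo "errors" > httpd-error.log

# Shift the archived logs up one slot, highest index first.
for f in 2 1 0; do mv httpd-error.log.$f.gz httpd-error.log.$((f+1)).gz; done

# Move the live log aside, recreate it, (restart Apache here), then compress.
mv httpd-error.log httpd-error.log.0
touch httpd-error.log
# apachectl restart
gzip httpd-error.log.0

ls   # .0.gz now holds the old log; the previous archives are .1.gz-.3.gz
```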
&lt;br /&gt;
== Free up space on bwdb2 ==&lt;br /&gt;
&lt;br /&gt;
You can either remove items from &amp;lt;tt&amp;gt;/usr/home/archive&amp;lt;/tt&amp;gt; or scp them to &amp;lt;tt&amp;gt;backup3:/data/bwdb2/archive&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Free up space on backup1 ==&lt;br /&gt;
&lt;br /&gt;
backup1 is our primary customer backup system. As usage grows, it needs to be purged of old files regularly. The easiest way to do this is by removing deprecated files, which mostly consist of cancelled customers&#039; backups or temporary dump/storage files created during dumps and restores. Our standard policy is to keep cancelled customers&#039; files for 6 months and then remove them (as far as customers know their data is purged immediately, but we hang onto it just in case; sometimes we cancel a server for non-payment, and this makes it easy to restore their system). To find files to remove:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 ~]# cd /data/deprecated/&lt;br /&gt;
[root@backup1 /data/deprecated]# ls&lt;br /&gt;
2101-migrated-20120317.tgz                old-683-cxld-20121021.tgz&lt;br /&gt;
69.55.230.2-wwwbackup                     old-744-cxld-20120708.tgz&lt;br /&gt;
991-DONTDELETE.tgz                        old-809-cxld-20120609.tgz&lt;br /&gt;
archive-col02050-mdfile-cxld-20120409.gz  old-854-cxld-20120621.tgz&lt;br /&gt;
col01371.tgz                              old-931-cxld-20060513.tgz&lt;br /&gt;
deleteme_ubuntu-10.10-x86_20111205        old-col00123-mdfile-noarchive-20120417.gz&lt;br /&gt;
jail10_old                                old-col00147-vnfile-cxld-20120828.gz&lt;br /&gt;
jail14_rsync_old                          old-col00419-dump-cxld-20120224.gz&lt;br /&gt;
jail15_old                                old-col01098-vnfile-cxld-20120827.gz&lt;br /&gt;
jail3_old                                 old-col01278-dump-cxld-20120822&lt;br /&gt;
jail4_old                                 old-col01517-dump-cxld-20120828&lt;br /&gt;
jail5_old                                 old-col01669-dump-cxld-20120203.gz&lt;br /&gt;
old-1009-cxld-20120608.tgz                old-col01687-dump-cxld-20120909&lt;br /&gt;
old-1012-cxld-20120411.tgz                old-col01790-dump-cxld-20120828&lt;br /&gt;
old-1052-cxld-20120721.tgz                old-col01812-dump-cxld-20120820&lt;br /&gt;
old-10631-cxld-20120622.tgz               old-col01938-mdfile-cxld-20120619.gz&lt;br /&gt;
old-10632-cxld-20120622.tgz               old-col02095-mdfile-noarchive-20120523.gz&lt;br /&gt;
old-10633-cxld-20120622.tgz               olddebian-3.0-v15-20110610.tgz&lt;br /&gt;
old-1236-cxld-20120621.tgz                oldmod_frontpage-deb30-v15-20110610.tgz&lt;br /&gt;
old-1381-cxld-20120404.tgz                oldmod_perl-deb30-v15-20110610.tgz&lt;br /&gt;
old-1422-cxld-20120721.tgz                oldmod_ssl-deb30-v15-20110610.tgz&lt;br /&gt;
old-14681-cxld-20120619.tgz               oldmysql-deb30-v15-20110610.tgz&lt;br /&gt;
old-1544-cxld-20120626.tgz                oldproftpd-deb30-v15-20110610.tgz&lt;br /&gt;
old-18351-cxld-20120605.tgz               old_virt14&lt;br /&gt;
old-1853-cxld-20120910.tgz                old_virt18&lt;br /&gt;
old-1963-cxld-20120206.tgz                oldwebmin-deb30-v15-20110610.tgz&lt;br /&gt;
old-1967-cxld-20120605.tgz                suse.virt11.20120421.tgz&lt;br /&gt;
old-1981-noarchive-20120729.tgz           virt11&lt;br /&gt;
old-2030-migrated-noarchive-20120727.tgz  virt12_old&lt;br /&gt;
old-2037-cxld-20120716.tgz                virt13_old&lt;br /&gt;
old-2065-cxld-20120727.tgz                virt16_old&lt;br /&gt;
old-2068-cxld-20120424.tgz                virt4_old&lt;br /&gt;
old-2085-cxld-20120531.tgz                virt5_old&lt;br /&gt;
old-364-cxld-20120904.tgz                 virt6_old&lt;br /&gt;
old-446-cxld-20120512.tgz                 virt7_old&lt;br /&gt;
old-613-cxld-20120601.tgz                 virt8_old&lt;br /&gt;
[root@backup1 /data/deprecated]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
virtX_old and jailX_old are permanently archived, so ignore those, along with anything marked not to delete or anything otherwise suspicious. Likewise, it&#039;s a good idea to hang onto the oldTEMPLATE.gz files for as long as we can.&lt;br /&gt;
Most of the files we want to delete are dated by when they were deprecated, which makes this easy. For example, to remove files from 6 months ago (running this in October):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 /data/deprecated]# ls old*201204*&lt;br /&gt;
old-1012-cxld-20120411.tgz  old-2068-cxld-20120424.tgz&lt;br /&gt;
old-1381-cxld-20120404.tgz  old-col00123-mdfile-noarchive-20120417.gz&lt;br /&gt;
[root@backup1 /data/deprecated]# rm old*201204*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
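Rather than working out the cutoff month by hand, the glob can be built with &lt;tt&gt;date&lt;/tt&gt;. A sketch assuming GNU date (&lt;tt&gt;-d&lt;/tt&gt;); on FreeBSD the equivalent would be &lt;tt&gt;date -v-6m +%Y%m&lt;/tt&gt;:&lt;br /&gt;

```shell
# Build the YYYYMM pattern for 6 months ago; the deprecated filenames above
# end in YYYYMMDD, so matching the YYYYMM prefix selects the whole month.
cutoff=$(date -d '6 months ago' +%Y%m)
echo "old*${cutoff}*"    # e.g. old*201204* when run in October 2012
# ls old*${cutoff}*      # review the match list before rm'ing it
```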
&lt;br /&gt;
Every few months you will also want to remove some of the snapshot archives for mail. We typically save the 1st, 10th, and 20th of each month. To do this, set aside the dates you want to save, remove a month at a time, then restore the set-aside dates. Here&#039;s how that works:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 /data/www/daily]# ls&lt;br /&gt;
05                     08-10-11  10-04-10  11-10-10  12-07-29  12-09-21  12-11-14&lt;br /&gt;
06                     08-10-21  10-04-20  11-10-20  12-07-30  12-09-22  12-11-15&lt;br /&gt;
06-06-01-usr-home.tgz  08-11-01  10-05-01  11-11-01  12-07-31  12-09-23  12-11-16&lt;br /&gt;
06-07-01-usr-home.tgz  08-11-10  10-05-11  11-11-10  12-08-01  12-09-24  12-11-17&lt;br /&gt;
06-08-01-usr-home.tgz  08-11-20  10-05-20  11-11-20  12-08-02  12-09-25  12-11-18&lt;br /&gt;
06-09-01-usr-home.tgz  08-12-01  10-06-01  11-12-01  12-08-03  12-09-26  12-11-19&lt;br /&gt;
06-11-10               08-12-10  10-06-10  11-12-10  12-08-04  12-09-27  12-11-20&lt;br /&gt;
06-12-21               08-12-20  10-06-20  11-12-20  12-08-05  12-09-28  12-11-21&lt;br /&gt;
07-01-10               09-01-01  10-07-01  12-01-01  12-08-06  12-09-29  12-11-22&lt;br /&gt;
07-01-20               09-01-10  10-07-10  12-01-10  12-08-07  12-09-30  12-11-23&lt;br /&gt;
07-02-10               09-01-20  10-07-20  12-01-20  12-08-08  12-10-01  12-11-24&lt;br /&gt;
07-02-20               09-02-01  10-08-01  12-02-01  12-08-09  12-10-02  12-11-25&lt;br /&gt;
07-03-01               09-02-10  10-08-10  12-02-10  12-08-10  12-10-03  12-11-26&lt;br /&gt;
07-03-20               09-02-20  10-08-20  12-02-20  12-08-11  12-10-04  12-11-27&lt;br /&gt;
07-04-01               09-03-01  10-09-01  12-03-01  12-08-12  12-10-05  12-11-28&lt;br /&gt;
07-04-10               09-03-10  10-09-10  12-03-10  12-08-13  12-10-06  12-11-29&lt;br /&gt;
07-04-20               09-03-20  10-09-20  12-03-20  12-08-14  12-10-07  12-11-30&lt;br /&gt;
07-05-01               09-04-01  10-10-01  12-04-01  12-08-15  12-10-08  12-12-01&lt;br /&gt;
07-05-10               09-04-10  10-10-10  12-04-10  12-08-16  12-10-09  12-12-02&lt;br /&gt;
07-05-20               09-04-20  10-10-20  12-04-20  12-08-17  12-10-10  12-12-03&lt;br /&gt;
07-06-01               09-05-01  10-11-01  12-05-01  12-08-18  12-10-11  12-12-04&lt;br /&gt;
07-06-10               09-05-10  10-11-10  12-05-10  12-08-19  12-10-12  12-12-05&lt;br /&gt;
07-06-20               09-05-20  10-11-20  12-05-20  12-08-20  12-10-13  12-12-06&lt;br /&gt;
07-07-20               09-06-01  10-12-01  12-06-01  12-08-21  12-10-14  12-12-07&lt;br /&gt;
07-08-10               09-06-10  10-12-10  12-06-10  12-08-22  12-10-15  12-12-08&lt;br /&gt;
07-08-20               09-06-20  10-12-20  12-06-20  12-08-23  12-10-16  12-12-09&lt;br /&gt;
07-09-01               09-07-01  11-01-01  12-07-01  12-08-24  12-10-17  12-12-10&lt;br /&gt;
07-10-01               09-07-10  11-01-10  12-07-02  12-08-25  12-10-18  12-12-11&lt;br /&gt;
07-10-10               09-07-20  11-01-21  12-07-03  12-08-26  12-10-19  12-12-12&lt;br /&gt;
07-10-20               09-08-01  11-02-01  12-07-04  12-08-27  12-10-20  12-12-13&lt;br /&gt;
07-12-01               09-08-10  11-02-10  12-07-05  12-08-28  12-10-21  12-12-14&lt;br /&gt;
07-12-10               09-08-20  11-02-20  12-07-06  12-08-29  12-10-22  12-12-15&lt;br /&gt;
08-01-01               09-09-01  11-03-01  12-07-07  12-08-30  12-10-23  12-12-16&lt;br /&gt;
08-01-20               09-09-10  11-03-10  12-07-08  12-08-31  12-10-24  12-12-17&lt;br /&gt;
08-02-20               09-09-20  11-03-20  12-07-09  12-09-01  12-10-25  12-12-18&lt;br /&gt;
08-03-01               09-10-01  11-04-01  12-07-10  12-09-02  12-10-26  12-12-19&lt;br /&gt;
08-03-10               09-10-10  11-04-10  12-07-11  12-09-03  12-10-27  12-12-20&lt;br /&gt;
08-03-20               09-10-20  11-04-20  12-07-12  12-09-04  12-10-28  12-12-21&lt;br /&gt;
08-04-01               09-11-01  11-05-01  12-07-13  12-09-05  12-10-29  12-12-22&lt;br /&gt;
08-04-20               09-11-10  11-05-10  12-07-14  12-09-06  12-10-30  12-12-23&lt;br /&gt;
08-05-01               09-11-20  11-05-20  12-07-15  12-09-07  12-10-31  12-12-24&lt;br /&gt;
08-05-10               09-12-01  11-06-01  12-07-16  12-09-08  12-11-01  12-12-25&lt;br /&gt;
08-06-10               09-12-10  11-06-10  12-07-17  12-09-09  12-11-02  12-12-26&lt;br /&gt;
08-06-20               09-12-20  11-06-20  12-07-18  12-09-10  12-11-03  12-12-27&lt;br /&gt;
08-07-02               10-01-01  11-07-01  12-07-19  12-09-11  12-11-04  12-12-28&lt;br /&gt;
08-07-10               10-01-10  11-07-10  12-07-20  12-09-12  12-11-05  2008-10-23&lt;br /&gt;
08-07-20               10-01-20  11-07-20  12-07-21  12-09-13  12-11-06  bb.tgz&lt;br /&gt;
08-08-01               10-02-01  11-08-01  12-07-22  12-09-14  12-11-07  boot&lt;br /&gt;
08-08-10               10-02-10  11-08-10  12-07-23  12-09-15  12-11-08  current&lt;br /&gt;
08-08-21               10-02-20  11-08-20  12-07-24  12-09-16  12-11-09  hold&lt;br /&gt;
08-09-01               10-03-01  11-09-01  12-07-25  12-09-17  12-11-10&lt;br /&gt;
08-09-10               10-03-10  11-09-10  12-07-26  12-09-18  12-11-11&lt;br /&gt;
08-09-21               10-03-20  11-09-20  12-07-27  12-09-19  12-11-12&lt;br /&gt;
08-10-01               10-04-01  11-10-01  12-07-28  12-09-20  12-11-13&lt;br /&gt;
[root@backup1 /data/www/daily]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So we see that everything before July 2012 has already been pruned down to three snapshots per month. To prune July 2012 we do the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mv 12-07-01 hold&lt;br /&gt;
mv 12-07-10 hold&lt;br /&gt;
mv 12-07-20 hold&lt;br /&gt;
rm -fr 12-07*&lt;br /&gt;
mv hold/* .&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=Routine_Maintenance&amp;diff=2206</id>
		<title>Routine Maintenance</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=Routine_Maintenance&amp;diff=2206"/>
		<updated>2020-02-25T18:36:11Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Outgoing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Daily Tasks =&lt;br /&gt;
&lt;br /&gt;
== check load graphs == &lt;br /&gt;
&lt;br /&gt;
Click on the Load link in mgmt &lt;br /&gt;
&lt;br /&gt;
This screen shows you load levels on our servers and network traffic for critical machines (firewalls, backup servers).&lt;br /&gt;
&lt;br /&gt;
If you see load high or increasing:&lt;br /&gt;
&lt;br /&gt;
FreeBSD: &lt;br /&gt;
run [[VPS_Management#jtop|jtop]] (or [[VPS_Management#jt|jt]] on FreeBSD &amp;gt; 7.x) and see if there are any runaway processes. Here are some examples of top entries that are&lt;br /&gt;
definitely runaway processes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;79481 root      64   0  2256K  1056K CPU1   1  58:16 87.40% 87.40% nano&lt;br /&gt;
50650   1000    64   0  1852K  1112K RUN    0 207.9H 84.08% 84.08% screen&lt;br /&gt;
14829 www        2   0 39100K 31736K accept 0  104:24  46.54%  6.54% httpd&lt;br /&gt;
42065 root      61   0  1300K   844K RUN    1  47.8H 91.36% 91.36% ee&lt;br /&gt;
1328 www       56   0 18440K 10796K CPU1   0  64.4H 97.71% 97.71% httpd&lt;br /&gt;
26251 user      57   0  6124K  1160K CPU1   1  82.9H 98.44% 98.44% screen&lt;br /&gt;
89874 root      60   0  1352K   892K RUN    1  33.8H 65.82% 65.82% dialog&lt;br /&gt;
38656   1000    64   0  3088K  2136K CPU0   0 806:13 97.95% 97.95% StutBot&lt;br /&gt;
27630 root      64   0  1396K   972K RUN    1  76.8H 86.47% 86.47% ee&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Linux:&lt;br /&gt;
run [[VPS_Management#vwe|vwe]] to see which VPSs have high loads. From there run &amp;lt;tt&amp;gt;[[VPS_Management#vp|vp]] &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt; and/or &amp;lt;tt&amp;gt;[[VPS_Management#vt|vt]] &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt; to see what&#039;s going on in that system.&lt;br /&gt;
[[VPS_Management#vzstat|vzstat]] will also give you a nice picture of what&#039;s going on; systems with high numbers in the mlat column are likely culprits.&lt;br /&gt;
&lt;br /&gt;
examples of out of control procs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;12183 nobody    16   0  4916 1348  1340 R    45.5  0.0  4249m httpd&lt;br /&gt;
29266 #502      16   0  1852  796   792 R    22.5  0.0  1104m vim&lt;br /&gt;
23860 #41       16   0  5472 5472  2076 R    98.9  0.2  31:41 python&lt;br /&gt;
19227 bin       19   0  1688  716   652 R    99.9  0.0 321:08 wtrs_ui&lt;br /&gt;
 7762 apache    16   0   268  236   224 R    85.7  0.0  1010m ptrace&lt;br /&gt;
 4624 #501      20   0  4304 2400  2044 R    53.6  0.1 284:32 YoSucker&lt;br /&gt;
20451 #506      20   0  1876  820   816 R    17.2  0.0 169:35 vim&lt;br /&gt;
 8834 #514      20   0   900  724   672 R    77.6  0.0 382:30 neostats&lt;br /&gt;
31815 apache    14   0  3176 3176  1696 R    74.4  0.1   6:15 counter&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Just kill -9 them and be done with it.&lt;br /&gt;
&lt;br /&gt;
Also, anytime you see `kmod` or `ptrace` - kill those immediately, no matter how much CPU they are using - they are attempts to exploit the Linux ptrace bug.  They won&#039;t work, but they suck a lot of CPU...&lt;br /&gt;
&lt;br /&gt;
Also, any other process that is at 90-100% CPU usage and has been running for a long period of time should be killed, except for mysqld processes on FreeBSD.  See below.&lt;br /&gt;
&lt;br /&gt;
However, there is an exception:&lt;br /&gt;
&lt;br /&gt;
if it is a mysqld, we don&#039;t want to kill their database.  What you want to do is &amp;lt;tt&amp;gt;[[VPS_Management#jpid|jpid]] &amp;lt;pid&amp;gt;&amp;lt;/tt&amp;gt; to see who owns it, and then email them the paste containing the instructions for the nanny. Or you can simply do a &amp;lt;tt&amp;gt;kill -1 PID&amp;lt;/tt&amp;gt; on the process to restart it.&lt;br /&gt;
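As a triage aid, candidates like the ones above can be pulled out of a process listing mechanically. This is a hedged sketch, not one of our tools: the ps-style sample lines and the 90% threshold are illustrative, and mysqld is skipped per the exception just described.

```shell
# Sketch: flag likely-runaway processes from ps-style output.
# Columns assumed: PID USER %CPU TIME COMMAND (sample data, not live).
sample='79481 root 87.4 58:16 nano
1328 www 97.7 64.4H httpd
4412 mysql 95.0 12.3H mysqld
26251 user 98.4 82.9H screen'
# Keep processes over 90% CPU, excluding mysqld (never kill databases).
suspects=$(printf '%s\n' "$sample" | awk '$3 + 0 > 90 { if ($5 != "mysqld") print $1 }')
echo "$suspects"
```

Each surviving PID would then be a candidate for the kill -9 treatment described above.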
&lt;br /&gt;
=== Load averages jump at night ===&lt;br /&gt;
&lt;br /&gt;
The load averages on the FreeBSD systems may jump up at night between 1 and 4 am - this is because the backups are running - if this is what is causing a jump in load, you will see processes like `rsync` in top eating a lot of CPU time.&lt;br /&gt;
&lt;br /&gt;
== check backups ==&lt;br /&gt;
&lt;br /&gt;
mgmt -&amp;gt; Monitoring -&amp;gt; Backups and make sure every machine was backed up the previous night.&lt;br /&gt;
Also look at df on backup1 and backup2 to make sure no disk is approaching full, though bb should warn us in advance. Please note - errors encountered when a backup script runs on any particular system will generate an email to support@johncompanies.com, so you will know the next day if a directory to be backed up has been moved or no longer exists. A paste exists for notifying the customer of a non-existent file/dir.&lt;br /&gt;
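The df check can be mechanized. A hedged sketch, operating on sample `df -P`-style output (the filesystems and the 90% threshold here are illustrative, not our actual layout):

```shell
# Sketch: flag filesystems over 90% full in `df -P`-style output.
# The sample block stands in for real output from backup1/backup2.
sample='Filesystem 1024-blocks Used Available Capacity Mounted-on
/dev/da0s1a 1000000 950000 50000 95% /data
/dev/da0s1d 1000000 100000 900000 10% /usr'
full=$(printf '%s\n' "$sample" | awk 'NR > 1 { sub(/%/, "", $5); if ($5 + 0 > 90) print $6 }')
echo "$full"
```

On the real hosts you would pipe `df -P` straight into the same awk filter.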
&lt;br /&gt;
== check bb for warnings  ==&lt;br /&gt;
mgmt -&amp;gt; BigBrother&lt;br /&gt;
&lt;br /&gt;
Some events don&#039;t generate pages (on purpose). You will only see them by going to the bb main page.&lt;br /&gt;
&lt;br /&gt;
== check jail5 for crashed VPSs  ==&lt;br /&gt;
&lt;br /&gt;
On jail5&lt;br /&gt;
  &lt;br /&gt;
  notrunning&lt;br /&gt;
&lt;br /&gt;
To restart a VPS &lt;br /&gt;
&lt;br /&gt;
  vm restart col0XXXX&lt;br /&gt;
&lt;br /&gt;
== Check NetHere  ==&lt;br /&gt;
&lt;br /&gt;
Check the NetHere servers&lt;br /&gt;
&lt;br /&gt;
=== Mail systems ===&lt;br /&gt;
Check for possible SPAMMERs.&lt;br /&gt;
&lt;br /&gt;
==== Incoming ====&lt;br /&gt;
Check mta-1 and mta-2 count of customer logins for possible&lt;br /&gt;
customer SPAM compromises.&lt;br /&gt;
&lt;br /&gt;
  login_count /logs/maillog | tail -30&lt;br /&gt;
  &lt;br /&gt;
==== Outgoing ====&lt;br /&gt;
&lt;br /&gt;
Check outgoing queues on relay-1 and relay-2&lt;br /&gt;
  &lt;br /&gt;
  mail_count | tail -30&lt;br /&gt;
&lt;br /&gt;
To clean up outgoing queues of unwanted SPAM on relay-1 and relay-2.&lt;br /&gt;
&lt;br /&gt;
  mail_cleanup [ &amp;lt;sender&#039;s domain/username/message id&amp;gt; ... ]&lt;br /&gt;
&lt;br /&gt;
To just remove emails from some senders.&lt;br /&gt;
&lt;br /&gt;
  rmmails &amp;lt;sender&#039;s domain/username/message id&amp;gt; ...&lt;br /&gt;
&lt;br /&gt;
=== Nagios ===&lt;br /&gt;
&lt;br /&gt;
Check for other problems on NetHere servers&lt;br /&gt;
&lt;br /&gt;
  https://nagios.nethere.net&lt;br /&gt;
&lt;br /&gt;
=== Cacti ===&lt;br /&gt;
&lt;br /&gt;
Check bandwidth usage on servers&lt;br /&gt;
&lt;br /&gt;
  https://cacti.nethere.net&lt;br /&gt;
&lt;br /&gt;
= Monthly Tasks =&lt;br /&gt;
== rotate pine sent mail (1st of month) ==&lt;br /&gt;
On the 1st of the month, before any emails are sent out, quit out of pine, then log back in. Sent mail from last month will be archived.&lt;br /&gt;
If you mess up and do it on the 3rd (for example), you can go into the previous month&#039;s saved email and save emails from the current month into the &amp;lt;tt&amp;gt;sent-mail&amp;lt;/tt&amp;gt; (current month) mailbox.&lt;br /&gt;
&lt;br /&gt;
== b/w caps ==&lt;br /&gt;
On the 1st: remove any bwcaps put into the firewall (this only really applies if a bwcap was added because someone went over on b/w):&lt;br /&gt;
&lt;br /&gt;
 ipfw list|grep pipe&lt;br /&gt;
 ipfw del [each rule listed]&lt;br /&gt;
&lt;br /&gt;
NOTE: this cronjob on newgateway will do some of that for you, provided you used one of the following pipe #s:&lt;br /&gt;
 0 0 1 * * /sbin/ipfw del 3  4 5 17331&lt;br /&gt;
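For caps added under other pipe numbers, the rule numbers can be extracted and deleted in a loop. A hedged sketch: the sample `ipfw list` output below is made up, and the actual delete command is left commented out since it is destructive.

```shell
# Sketch: pull rule numbers for pipe rules out of `ipfw list` output.
# Sample output is illustrative, not from a real firewall.
sample='00003 pipe 1 ip from 69.55.230.5 to any
00100 allow ip from any to any
00005 pipe 2 ip from 69.55.230.9 to any'
rules=$(printf '%s\n' "$sample" | awk '/ pipe /{ print $1 }')
echo "$rules"
# On the firewall itself you would then run:
# for r in $rules; do ipfw delete "$r"; done
```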
&lt;br /&gt;
-------------------&lt;br /&gt;
&lt;br /&gt;
We really don&#039;t do this anymore since we have centralized traffic accounting with netflow, but for posterity:&lt;br /&gt;
&lt;br /&gt;
Make sure all machines reset counters to 0 after midnight on the 1st&lt;br /&gt;
Make sure they dumped a counter&lt;br /&gt;
&lt;br /&gt;
On each jail run:&lt;br /&gt;
 trafficgather.pl&lt;br /&gt;
&lt;br /&gt;
And on each virt:&lt;br /&gt;
 linuxtrafficgather.pl&lt;br /&gt;
&lt;br /&gt;
== Monthly RAID checks ==&lt;br /&gt;
&lt;br /&gt;
Every month we check the health of and verify the parity on all our RAID-based systems.&lt;br /&gt;
To facilitate this, we&#039;ve created a simple script to start the process:&lt;br /&gt;
&lt;br /&gt;
 sh /root/verify.sh&lt;br /&gt;
&lt;br /&gt;
=== Adaptec controllers ===&lt;br /&gt;
Here&#039;s some sample output:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /usr/local/www/scripts# sh /root/verify.sh&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; container list /f&lt;br /&gt;
Executing: container list /full=TRUE&lt;br /&gt;
Num          Total  Oth Chunk          Scsi   Partition&lt;br /&gt;
Creation        System&lt;br /&gt;
Label Type   Size   Ctr Size   Usage   B:ID:L Offset:Size   State   RO Lk Task    Done%  Ent&lt;br /&gt;
Date   Time      Files&lt;br /&gt;
----- ------ ------ --- ------ ------- ------ ------------- ------- -- -- ------- ------ ---&lt;br /&gt;
------ -------- ------&lt;br /&gt;
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
 /dev/aacd0           mirror0          0:00:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
&lt;br /&gt;
 1    Mirror 33.9GB            Open    0:02:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
 /dev/aacd1           mirror1          0:03:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; disk list /f&lt;br /&gt;
Executing: disk list /full=TRUE&lt;br /&gt;
&lt;br /&gt;
B:ID:L  Device Type     Removable media  Vendor-ID Product-ID        Rev   Blocks    Bytes/Bl&lt;br /&gt;
ock Usage            Shared Rate&lt;br /&gt;
------  --------------  ---------------  --------- ----------------  ----- --------- --------&lt;br /&gt;
--- ---------------- ------ ----&lt;br /&gt;
0:00:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:01:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:02:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:03:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; disk show smart&lt;br /&gt;
Executing: disk show smart&lt;br /&gt;
&lt;br /&gt;
        Smart    Method of         Enable&lt;br /&gt;
        Capable  Informational     Exception  Performance  Error&lt;br /&gt;
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count&lt;br /&gt;
------  -------  ----------------  ---------  -----------  ------&lt;br /&gt;
0:00:0     Y            6             Y           N             0&lt;br /&gt;
0:01:0     Y            6             Y           N             0&lt;br /&gt;
0:02:0     Y            6             Y           N             0&lt;br /&gt;
0:03:0     Y            6             Y           N             0&lt;br /&gt;
0:06:0     N&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; task list&lt;br /&gt;
Executing: task list&lt;br /&gt;
&lt;br /&gt;
Controller Tasks&lt;br /&gt;
&lt;br /&gt;
TaskId Function  Done%  Container State Specific1 Specific2&lt;br /&gt;
------ -------- ------- --------- ----- --------- ---------&lt;br /&gt;
&lt;br /&gt;
No tasks currently running on controller&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; dia sh hi&lt;br /&gt;
Executing: diagnostic show history&lt;br /&gt;
No switches specified, defaulting to &amp;quot;/current&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 *** HISTORY BUFFER FROM CURRENT CONTROLLER RUN ***&lt;br /&gt;
&lt;br /&gt;
[00]: GetDiskLogEntry: container - 1, entry return 0&lt;br /&gt;
[01]: Container 1 started SCRUB task&lt;br /&gt;
[02]: Starting Mirror:1 scrub&lt;br /&gt;
[03]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[04]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[05]: UpdateDiskLogIndex - Set   - container 0, index 1&lt;br /&gt;
[06]: GetDiskLogEntry: container - 0, entry return 1&lt;br /&gt;
[07]: Container 0 started SCRUB task&lt;br /&gt;
[08]: Starting Mirror:0 scrub&lt;br /&gt;
[09]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[10]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[11]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[12]: Clear disk log: sector - 80, driveno 2&lt;br /&gt;
[13]: Clear disk log: sector - 80, driveno 3&lt;br /&gt;
[14]: Container 1 completed SCRUB task:&lt;br /&gt;
[15]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[16]: Clear disk log: sector - 81, driveno 1&lt;br /&gt;
[17]: Clear disk log: sector - 81, driveno 0&lt;br /&gt;
[18]: Container 0 completed SCRUB task:&lt;br /&gt;
[19]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[20]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[21]: Container 0 started SCRUB task&lt;br /&gt;
[22]: Starting Mirror:0 scrub&lt;br /&gt;
[23]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[24]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[25]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[26]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[27]: Container 1 started SCRUB task&lt;br /&gt;
[28]: Starting Mirror:1 scrub&lt;br /&gt;
[29]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[30]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[31]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[32]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[33]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[34]: Container 1 completed SCRUB task:&lt;br /&gt;
[35]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[36]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[37]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[38]: Container 0 completed SCRUB task:&lt;br /&gt;
[39]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[40]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[41]: Container 0 started SCRUB task&lt;br /&gt;
[42]: Starting Mirror:0 scrub&lt;br /&gt;
[43]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[44]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[45]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[46]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[47]: Container 1 started SCRUB task&lt;br /&gt;
[48]: Starting Mirror:1 scrub&lt;br /&gt;
[49]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[50]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[51]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[52]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[53]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[54]: Container 1 completed SCRUB task:&lt;br /&gt;
[55]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[56]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[57]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[58]: Container 0 completed SCRUB task:&lt;br /&gt;
[59]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[60]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[61]: Container 0 started SCRUB task&lt;br /&gt;
[62]: Starting Mirror:0 scrub&lt;br /&gt;
[63]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[64]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[65]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[66]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[67]: Container 1 started SCRUB task&lt;br /&gt;
[68]: Starting Mirror:1 scrub&lt;br /&gt;
[69]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[70]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[71]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[72]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[73]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[74]: Container 1 completed SCRUB task:&lt;br /&gt;
[75]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[76]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[77]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[78]: Container 0 completed SCRUB task:&lt;br /&gt;
[79]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[80]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[81]: Container 0 started SCRUB task&lt;br /&gt;
[82]: Starting Mirror:0 scrub&lt;br /&gt;
[83]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[84]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[85]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[86]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[87]: Container 1 started SCRUB task&lt;br /&gt;
[88]: Starting Mirror:1 scrub&lt;br /&gt;
[89]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[90]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[91]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[92]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[93]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[94]: Container 1 completed SCRUB task:&lt;br /&gt;
[95]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[96]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[97]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[98]: Container 0 completed SCRUB task:&lt;br /&gt;
[99]:&lt;br /&gt;
&lt;br /&gt;
========================&lt;br /&gt;
History Output Complete.&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt;&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
press enter when ready to run verify                                                 &amp;lt;INS&amp;gt;&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 0&lt;br /&gt;
Executing: container scrub 0&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 1&lt;br /&gt;
Executing: container scrub 1&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
when done run:                                                                       &lt;br /&gt;
&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nov  1 10:32:46 mail /kernel: aac0: **Monitor** Container 0 started SCRUB task&lt;br /&gt;
Nov  1 10:32:47 mail /kernel: aac0: **Monitor** Container 1 started SCRUB task&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here&#039;s an analysis of what we&#039;re seeing and what we&#039;re looking for:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; container list /f&lt;br /&gt;
Executing: container list /full=TRUE&lt;br /&gt;
Num          Total  Oth Chunk          Scsi   Partition&lt;br /&gt;
Creation        System&lt;br /&gt;
Label Type   Size   Ctr Size   Usage   B:ID:L Offset:Size   State   RO Lk Task    Done%  Ent&lt;br /&gt;
Date   Time      Files&lt;br /&gt;
----- ------ ------ --- ------ ------- ------ ------------- ------- -- -- ------- ------ ---&lt;br /&gt;
------ -------- ------&lt;br /&gt;
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
 /dev/aacd0           mirror0          0:00:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
&lt;br /&gt;
 1    Mirror 33.9GB            Open    0:02:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
 /dev/aacd1           mirror1          0:03:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is showing you the health of the arrays. You&#039;re looking for &#039;&#039;Normal&#039;&#039; under the State column, and the absence of a ! in the Offset:Size column - sometimes, you&#039;ll see this:&lt;br /&gt;
 64.0KB!33.9GB &lt;br /&gt;
&lt;br /&gt;
That indicates a problem.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; disk show smart&lt;br /&gt;
Executing: disk show smart&lt;br /&gt;
&lt;br /&gt;
        Smart    Method of         Enable&lt;br /&gt;
        Capable  Informational     Exception  Performance  Error&lt;br /&gt;
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count&lt;br /&gt;
------  -------  ----------------  ---------  -----------  ------&lt;br /&gt;
0:00:0     Y            6             Y           N             0&lt;br /&gt;
0:01:0     Y            6             Y           N             0&lt;br /&gt;
0:02:0     Y            6             Y           N             0&lt;br /&gt;
0:03:0     Y            6             Y           N             0&lt;br /&gt;
0:06:0     N&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows you the SMART report. You&#039;re looking for nonzero values in the Error Count column.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; task list&lt;br /&gt;
Executing: task list&lt;br /&gt;
&lt;br /&gt;
Controller Tasks&lt;br /&gt;
&lt;br /&gt;
TaskId Function  Done%  Container State Specific1 Specific2&lt;br /&gt;
------ -------- ------- --------- ----- --------- ---------&lt;br /&gt;
&lt;br /&gt;
No tasks currently running on controller&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for the absence of running tasks - a bad thing would be to see a rebuild or verify running when you didn&#039;t initiate it.&lt;br /&gt;
&lt;br /&gt;
With the history output, you&#039;re looking for any anomalies or events since the last time a verify was run. If you see a drive with lots of problems, you may want to take backups before allowing the verify to run since it could replicate errors onto the good drive.&lt;br /&gt;
&lt;br /&gt;
After you see the history output, it will prompt you to press enter to run the verify. If you&#039;re happy with all the output you&#039;re seeing (mirror is healthy, history looks good), it&#039;s safe to proceed. Otherwise ^C to exit. After hitting enter it will start the verify and begin tailing the messages log file (so you can easily see when the verify is complete). Here&#039;s what that&#039;ll look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Nov  1 14:38:08 mail /kernel: aac0: **Monitor** Container 1 completed SCRUB task:&lt;br /&gt;
Nov  1 14:46:45 mail /kernel: aac0: **Monitor** Container 0 completed SCRUB task:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, putting it all together, after hitting enter to start the verify, you&#039;ll see:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 0&lt;br /&gt;
Executing: container scrub 0&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 1&lt;br /&gt;
Executing: container scrub 1&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
when done run:                                                                       &lt;br /&gt;
&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nov  1 10:32:46 mail /kernel: aac0: **Monitor** Container 0 started SCRUB task&lt;br /&gt;
Nov  1 10:32:47 mail /kernel: aac0: **Monitor** Container 1 started SCRUB task&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the scrub(s) (verify) are complete - if the server has multiple logical drives, it will run them in parallel - you should exit the tail of the log file (^C) and run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will show you the diagnostic history; you&#039;re looking for the results of the most recent scrub:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[100]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[101]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[102]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[103]: Container 1 completed SCRUB task:&lt;br /&gt;
[104]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[105]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[106]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[107]: Container 0 completed SCRUB task:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
^C to exit the RAID CLI.&lt;br /&gt;
&lt;br /&gt;
If you see:&lt;br /&gt;
&lt;br /&gt;
 [104]: Mirror Scrub Container:0   ErrorsFound:5&lt;br /&gt;
&lt;br /&gt;
You&#039;ll want to rerun the verify on that drive until it shows 0, or perhaps replace the drive - you should be able to see from the output which drive had the problem.&lt;br /&gt;
&lt;br /&gt;
Depending on the size and how busy the drive is, the verify can take anywhere from an hour to the better part of a day.&lt;br /&gt;
&lt;br /&gt;
You will notice that the diagnostic history is not shown on our modern Adaptec cards (i.e. any Adaptec card not in a Dell 2450). The history is never cleared, so there&#039;s simply too much data to show and it crashes the CLI - don&#039;t bother trying to view it. That does make it hard to see whether there are problems, so you just need to watch the scrub until it reaches 100%. You will also notice that on some servers there&#039;s no tail of messages; again, this is because no data is logged there about the completion of the scrub. The thing to do here is to go into the CLI and periodically run &amp;lt;tt&amp;gt;show tasks&amp;lt;/tt&amp;gt; to monitor scrub progress.&lt;br /&gt;
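That monitoring can be scripted loosely. A hedged sketch: it only parses a `task list`-style dump (column layout copied from the samples earlier on this page, with a hypothetical running task) to decide whether a scrub is still in progress; the aaccli invocation itself is left out.

```shell
# Sketch: check a `task list`-style dump for a running scrub task.
# The sample mirrors the column layout shown earlier; the task row is made up.
tasks='TaskId Function  Done%  Container State Specific1 Specific2
------ -------- ------- --------- ----- --------- ---------
101    Scrub     42.0%  0         RUN'
running=$(printf '%s\n' "$tasks" | awk '$2 == "Scrub" { print $1, $3 }')
if [ -n "$running" ]; then
    echo "scrub in progress: $running"
else
    echo "no scrub running"
fi
```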
&lt;br /&gt;
See [[RAIC_CLI#Adaptec|Adaptec RAID CLI Reference]] for more details on how to use the CLI.&lt;br /&gt;
&lt;br /&gt;
=== DELL (LSI-based) SAS controllers ===&lt;br /&gt;
&lt;br /&gt;
Here&#039;s what the output looks like when running verify.sh on a LSI-based card:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jail2 /mnt/data2# sh /root/verify.sh&lt;br /&gt;
&lt;br /&gt;
Adapter #0&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 0&lt;br /&gt;
Device Id: 0&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018396142&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 0(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DESN&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 1&lt;br /&gt;
Device Id: 1&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018395db2&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 1(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DERV&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 2&lt;br /&gt;
Device Id: 2&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50006eece89&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 2(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300555SS     T2113LM4BFBZ&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 3&lt;br /&gt;
Device Id: 3&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50006eee035&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 3(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300555SS     T2113LM4BGF7&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 4&lt;br /&gt;
Device Id: 4&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50004bd7ea5&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 4(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300656SS     HS093QP0G8SW&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 5&lt;br /&gt;
Device Id: 5&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e01f1c4112&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 5(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MBA3300RC       D306BJ15P9201W06&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 0 (Target Id: 0)&lt;br /&gt;
Name:&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:139392MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 1 (Target Id: 1)&lt;br /&gt;
Name:MIRROR1&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 2 (Target Id: 2)&lt;br /&gt;
Name:MIRROR2&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
Battery FRU     : N/A&lt;br /&gt;
Battery Warning                  : Enabled&lt;br /&gt;
Memory Correctable Errors   : 0&lt;br /&gt;
Memory Uncorrectable Errors : 0&lt;br /&gt;
BBU             : Present&lt;br /&gt;
BBU                             : Yes&lt;br /&gt;
Cache When BBU Bad               : Disabled&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before pressing enter, here&#039;s what we&#039;re looking for:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 0&lt;br /&gt;
Device Id: 0&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018396142&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 0(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DESN&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This output is shown for each physical drive in the system. We&#039;re looking to confirm that its Firmware state is &#039;&#039;Online&#039;&#039; and that Media Error Count, Other Error Count, and Predictive Failure Count are all zero (or near zero).&lt;br /&gt;
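These per-drive checks can be scripted rather than eyeballed. A minimal sketch, assuming the input is MegaCli-style PDList output (e.g. from &amp;lt;tt&amp;gt;MegaCli -PDList -aALL&amp;lt;/tt&amp;gt;, which verify.sh appears to wrap); the sample input here is abbreviated from the output above:&lt;br /&gt;

```shell
# Sketch: scan MegaCli-style physical-drive output and flag any drive
# with a non-zero error count or a firmware state other than Online.
# In practice you would pipe in the real output instead of the heredoc.
awk '
  /^Slot Number:/               { slot = $3 }
  /^Media Error Count:/         { if ($NF + 0 > 0) bad[slot] = 1 }
  /^Other Error Count:/         { if ($NF + 0 > 0) bad[slot] = 1 }
  /^Predictive Failure Count:/  { if ($NF + 0 > 0) bad[slot] = 1 }
  /^Firmware state:/            { if ($NF != "Online") bad[slot] = 1 }
  END {
    for (s in bad) { print "CHECK slot " s; found = 1 }
    if (!found) print "all drives OK"
  }
' <<'EOF'
Slot Number: 0
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Firmware state: Online
Slot Number: 1
Media Error Count: 3
Other Error Count: 0
Predictive Failure Count: 0
Firmware state: Online
EOF
```

With the sample input this prints &amp;lt;tt&amp;gt;CHECK slot 1&amp;lt;/tt&amp;gt;, since slot 1 has a non-zero Media Error Count.&lt;br /&gt;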
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 1 (Target Id: 1)&lt;br /&gt;
Name:MIRROR1&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the output for each logical drive. We&#039;re looking for a State of &#039;&#039;Optimal&#039;&#039;. Also confirm &amp;lt;tt&amp;gt;Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&amp;lt;/tt&amp;gt;.&lt;br /&gt;
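The logical-drive check can be automated the same way. A sketch, with sample input abbreviated from the output above (in practice the input would come from something like &amp;lt;tt&amp;gt;MegaCli -LDInfo -Lall -aALL&amp;lt;/tt&amp;gt;):&lt;br /&gt;

```shell
# Sketch: report any virtual disk that is not Optimal or whose current
# cache policy has fallen out of WriteBack (e.g. due to a bad BBU).
awk '
  /^Virtual Disk:/         { vd = $3 }
  /^State:/                { if ($2 != "Optimal") print "VD " vd " state: " $2 }
  /^Current Cache Policy:/ { if ($0 !~ /WriteBack/) print "VD " vd " lost WriteBack" }
' <<'EOF'
Virtual Disk: 0 (Target Id: 0)
State: Optimal
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Virtual Disk: 1 (Target Id: 1)
State: Optimal
Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU
EOF
```

With the sample input this prints &amp;lt;tt&amp;gt;VD 1 lost WriteBack&amp;lt;/tt&amp;gt;; a fully healthy system prints nothing.&lt;br /&gt;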
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
Battery FRU     : N/A&lt;br /&gt;
Battery Warning                  : Enabled&lt;br /&gt;
Memory Correctable Errors   : 0&lt;br /&gt;
Memory Uncorrectable Errors : 0&lt;br /&gt;
BBU             : Present&lt;br /&gt;
BBU                             : Yes&lt;br /&gt;
Cache When BBU Bad               : Disabled&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the battery is present and error-free.&lt;br /&gt;
&lt;br /&gt;
If all that checks out, you&#039;re ready to proceed with the verify. After pressing enter, the verify is started and here&#039;s what you see:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Start Check Consistency on Virtual Drive 0 (target id: 0) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
Start Check Consistency on Virtual Drive 1 (target id: 1) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
Start Check Consistency on Virtual Drive 2 (target id: 2) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
  Check Consistency&lt;br /&gt;
&lt;br /&gt;
 Progress of Virtual Drives...&lt;br /&gt;
&lt;br /&gt;
  Virtual Drive #              Percent Complete                       Time Elps&lt;br /&gt;
          0         ░░░░░░░░░░░░░░░░░░░░░░░00 %░░░░░░░░░░░░░░░░░░░░░░░ 00:00:03&lt;br /&gt;
          1         ░░░░░░░░░░░░░░░░░░░░░░░00 %░░░░░░░░░░░░░░░░░░░░░░░ 00:00:02&lt;br /&gt;
          2         ░░░░░░░░░░░░░░░░░░░░░░░00 %░░░░░░░░░░░░░░░░░░░░░░░ 00:00:01&lt;br /&gt;
&lt;br /&gt;
    Press &amp;lt;ESC&amp;gt; key to quit...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The progress for each drive is displayed until all drives have completed the verify. We just want to make sure that each drive runs to completion. No follow-up is needed, though there is probably a log or history where more detail can be found.&lt;br /&gt;
&lt;br /&gt;
You will notice that jail7 does not run a verify; that&#039;s intentional. The last time we tried, it crashed the system, so on jail7 the verify must be run from the BIOS (which means taking the system offline for a couple of hours).&lt;br /&gt;
&lt;br /&gt;
See [[RAIC_CLI#LSI|LSI RAID CLI Reference]] for more details on how to use the CLI.&lt;br /&gt;
&lt;br /&gt;
=== LSI-based controllers (megaraid) ===&lt;br /&gt;
&lt;br /&gt;
There is a CLI for this; however, it&#039;s easier to use the curses GUI app: megamgr&lt;br /&gt;
&lt;br /&gt;
Currently only on these servers: virt15, virt16, and firewall2&lt;br /&gt;
&lt;br /&gt;
To run:&lt;br /&gt;
&lt;br /&gt;
 # cd /usr/local/sbin/; megamgr&lt;br /&gt;
&lt;br /&gt;
Main menu:&lt;br /&gt;
&amp;lt;pre&amp;gt;▓┌──Management Menu──┐▓&lt;br /&gt;
▓│ Configure         │▓&lt;br /&gt;
▓│ Initialize        │▓&lt;br /&gt;
▓│ Objects           │▓&lt;br /&gt;
▓│ Rebuild           │▓&lt;br /&gt;
▓│ Check Consistency │▓&lt;br /&gt;
▓│ Advanced Menu     │▓&lt;br /&gt;
▓└───────────────────┘▓&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before you check consistency, make sure the arrays are healthy.&lt;br /&gt;
&lt;br /&gt;
Objects -&amp;gt; Physical Drive&lt;br /&gt;
&lt;br /&gt;
Then look to make sure they&#039;re all &amp;lt;tt&amp;gt;ONLIN&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓┌────────────Objects - PHYSICAL DRIVE SELECTION MENU─────────────┐▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                                                                │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                         Channel-1                              │▓▓▓▓▓▓&lt;br /&gt;
▓┌──Mana│                     ID ╔══════════════╗x                       │▓▓▓▓▓▓&lt;br /&gt;
▓│ Confi│                       0║* ONLIN A01-01║                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Initi│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Objec│                       1║* ONLIN A01-02║                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Rebui│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Check│                       2║* ONLIN A02-01║                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Advan│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓└──────│                       3║* ONLIN A02-02║                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                       4║* ONLIN A03-01║                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                       5║* ONLIN A03-02║■                       │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                       6║*             ║                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╚══════════════╝x                       │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓└────────────────────────────────────────────────────────────────┘▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓┌──────────────────────────────────────────────────────────────┐▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│Ch-1 ID-5  DISK      140013MB  SEAGATE  ST3146707LC      0003 │▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓└──────────────────────────────────────────────────────────────┘▓▓▓▓▓▓▓▓&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once that&#039;s done, hit escape once, then the back arrow, to move back to the Objects menu.&lt;br /&gt;
Then select Objects -&amp;gt; Logical Drive -&amp;gt; Logical Drive 1 -&amp;gt; Check Consistency -&amp;gt; YES&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┌─Logical Drives(02)─┐▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│ Logical Drive 1    │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓┌────Objects───│ Logical Drive 2    │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓┌──Managemen│ Adapter      └────────────────────┘▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Configure │ Logical Drive  │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Initialize│ Physical Drive │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Objects   │ Channel        │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Rebuild   └────────────────┘▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Check Consistency │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Advanced Menu     │▓▓▓▓▓▓▓┌────Logical Drive 1─────┐▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓└───────────────────┘▓▓▓▓▓▓▓│ Initialize    ┌─Check Consistency-1  ?─┐▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│ Check Consiste│   YES                  │▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│ View/Update Pa│   NO                   │▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓└───────────────└────────────────────────┘▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┌────────────────┐▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│Select YES Or NO│▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓└────────────────┘▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then watch the progress. When done, escape back to Logical Drive and repeat for Logical Drive 2. If you ^C or accidentally escape out, you can get back in by running the same commands and watch the progress again (the check will not restart from the beginning).&lt;br /&gt;
&lt;br /&gt;
You can exit megamgr by escaping out or pressing ^C.&lt;br /&gt;
&lt;br /&gt;
=== 3ware ===&lt;br /&gt;
&lt;br /&gt;
We are using 3ware controllers on backup1 &amp;amp; backup2. Running the verify script will give you different output based on the type of controller:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
backup2 /d2# sh /root/verify.sh&lt;br /&gt;
Controller: c0&lt;br /&gt;
-------------&lt;br /&gt;
Driver:   1.50.01.002&lt;br /&gt;
Model:    7500-8&lt;br /&gt;
FW:       FE7X 1.05.00.068&lt;br /&gt;
BIOS:     BE7X 1.08.00.048&lt;br /&gt;
Monitor:  ME7X 1.01.00.040&lt;br /&gt;
Serial #: F11605A3180172&lt;br /&gt;
PCB:      Rev3&lt;br /&gt;
PCHIP:    1.30-33&lt;br /&gt;
ACHIP:    3.20&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# of units: 3&lt;br /&gt;
        Unit 0: JBOD 186.31 GB ( 390721968 blocks): OK&lt;br /&gt;
        Unit 1: RAID 5 465.77 GB ( 976790016 blocks): DEGRADED&lt;br /&gt;
        Unit 5: RAID 5 698.65 GB ( 1465185024 blocks): DEGRADED&lt;br /&gt;
&lt;br /&gt;
# of ports: 8&lt;br /&gt;
        Port 0: WDC WD2000JB-00KFA0 WD-WCAMT1451690 186.31 GB (390721968 blocks): OK(unit 0)&lt;br /&gt;
        Port 1: WDC WD2500JB-00GVC0 WD-WCAL78219488 232.88 GB (488397168 blocks): OK(unit 1)&lt;br /&gt;
        Port 2: WDC WD2000  0.00 MB (0 blocks): OK(NO UNIT)&lt;br /&gt;
        Port 3: WDC WD2500JB-00GVC0 WD-WMAL73882417 232.88 GB (488397168 blocks): OK(unit 1)&lt;br /&gt;
        Port 4: WDC WD2000  0.00 MB (0 blocks): OK(NO UNIT)&lt;br /&gt;
        Port 5: WDC WD2500JB-00GVA0 WD-WMAL71338097 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
        Port 6: WDC WD2500JB-32EVA0 WD-WMAEH1301595 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
        Port 7: WDC WD2500JB-00GVC0 WD-WCAL78165566 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
Controller: c1&lt;br /&gt;
-------------&lt;br /&gt;
Driver:   1.50.01.002&lt;br /&gt;
Model:    7500-8&lt;br /&gt;
FW:       FE7X 1.05.00.068&lt;br /&gt;
BIOS:     BE7X 1.08.00.048&lt;br /&gt;
Monitor:  ME7X 1.01.00.040&lt;br /&gt;
Serial #: F11605A3180167&lt;br /&gt;
PCB:      Rev3&lt;br /&gt;
PCHIP:    1.30-33&lt;br /&gt;
ACHIP:    3.20&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# of units: 2&lt;br /&gt;
        Unit 0: RAID 5 698.65 GB ( 1465185024 blocks): OK&lt;br /&gt;
        Unit 4: RAID 5 698.65 GB ( 1465185024 blocks): OK&lt;br /&gt;
&lt;br /&gt;
# of ports: 8&lt;br /&gt;
        Port 0: WDC WD2500JB-00GVA0 WD-WMAL71301258 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 1: WDC WD2500JB-00GVA0 WD-WMAL71322705 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 2: WDC WD2500JB-00GVA0 WD-WMAL71945050 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 3: WDC WD2500JB-00GVA0 WD-WMAL71316201 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 4: WDC WD2500JB-00GVC0 WD-WCAL78323749 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
        Port 5: WDC WD3200AAJB-00J3A0 WD-WCAV2V689068 298.09 GB (625142448 blocks): OK(unit 4)&lt;br /&gt;
        Port 6: WDC WD2500JB-00GVC0 WD-WCAL78234420 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
        Port 7: WDC WD2500JB-00GVC0 WD-WCAL78592213 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
backup2 /d2#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On backup2, simply look for all units and ports to report OK; no verify is run on this controller.&lt;br /&gt;
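One quick way to scan that status output for trouble (sample lines copied from the backup2 output above, which in fact shows two degraded units):&lt;br /&gt;

```shell
# Show any 3ware unit that does not report OK (7500-series output format).
# Replace the heredoc with the real output of verify.sh in practice.
grep '^ *Unit' <<'EOF' | grep -v ': OK$'
        Unit 0: JBOD 186.31 GB ( 390721968 blocks): OK
        Unit 1: RAID 5 465.77 GB ( 976790016 blocks): DEGRADED
        Unit 5: RAID 5 698.65 GB ( 1465185024 blocks): DEGRADED
EOF
```

With the sample input this prints only the two DEGRADED unit lines; a healthy controller produces no output.&lt;br /&gt;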
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@backup3 ~]# sh /root/verify.sh&lt;br /&gt;
/c2 Driver Version = 1.26.02.002&lt;br /&gt;
/c2 Model = 8006-2LP&lt;br /&gt;
/c2 Available Memory = 512KB&lt;br /&gt;
/c2 Firmware Version = FE8S 1.05.00.068&lt;br /&gt;
/c2 Bios Version = BE7X 1.08.00.048&lt;br /&gt;
/c2 Boot Loader Version = ME7X 1.01.00.040&lt;br /&gt;
/c2 Serial Number = L018501C6481395&lt;br /&gt;
/c2 PCB Version = Rev5&lt;br /&gt;
/c2 PCHIP Version = 1.30-66&lt;br /&gt;
/c2 ACHIP Version = 3.20&lt;br /&gt;
/c2 Total Optimal Units = 1&lt;br /&gt;
/c2 Not Optimal Units = 0&lt;br /&gt;
&lt;br /&gt;
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u1    RAID-1    OK             -       -       -       931.512   ON     -&lt;br /&gt;
&lt;br /&gt;
Port   Status           Unit   Size        Blocks        Serial&lt;br /&gt;
---------------------------------------------------------------&lt;br /&gt;
p0     OK               u1     931.51 GB   1953525168    WD-WMAW31148820&lt;br /&gt;
p1     OK               u1     931.51 GB   1953525168    WD-WCATR0277515&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ctl  Date                        Severity  Alarm Message&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Sending start verify message to /c2/u1 ... Done.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when done run:&lt;br /&gt;
tw_cli /c2 show alarms&lt;br /&gt;
&lt;br /&gt;
[root@backup3 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script automatically starts the verify; when it&#039;s done, just run &amp;lt;tt&amp;gt;tw_cli /c2 show alarms&amp;lt;/tt&amp;gt; as instructed to see the results of the verify.&lt;br /&gt;
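If you want a quick summary instead of reading the whole alarm log, something like this works (sample lines abbreviated from the alarm log above; in practice pipe in &amp;lt;tt&amp;gt;tw_cli /c2 show alarms&amp;lt;/tt&amp;gt;):&lt;br /&gt;

```shell
# Summarize a tw_cli-style alarm log: count WARNING entries and report
# the most recent verify start/completion event for unit 0.
awk '
  $NF ~ /unit=0/ && /Verify (started|completed)/ { last = $0 }
  /WARNING/ { warn++ }
  END {
    print "warnings: " warn+0
    print "last verify event: " (last ? last : "none")
  }
' <<'EOF'
c0   [Sat May 12 21:40:56 2012]  INFO      Verify completed: unit=0
c0   [Mon May 14 00:53:53 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFA7
c0   [Sat May 19 00:01:44 2012]  INFO      Verify started: unit=0
c0   [Sat May 19 21:47:45 2012]  INFO      Verify completed: unit=0
EOF
```

The last line reported should be a &#039;&#039;Verify completed&#039;&#039; event; a trailing &#039;&#039;Verify started&#039;&#039; with no completion means the verify is still running (or died).&lt;br /&gt;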
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@backup1 /data/deprecated]# sh /root/verify.sh&lt;br /&gt;
/c0 Driver Version = 2.26.02.010&lt;br /&gt;
/c0 Model = 9650SE-8LPML&lt;br /&gt;
/c0 Available Memory = 224MB&lt;br /&gt;
/c0 Firmware Version = FE9X 4.06.00.004&lt;br /&gt;
/c0 Bios Version = BE9X 4.05.00.015&lt;br /&gt;
/c0 Boot Loader Version = BL9X 3.08.00.001&lt;br /&gt;
/c0 Serial Number = L326025A8270177&lt;br /&gt;
/c0 PCB Version = Rev 032&lt;br /&gt;
/c0 PCHIP Version = 2.00&lt;br /&gt;
/c0 ACHIP Version = 1.90&lt;br /&gt;
/c0 Number of Ports = 8&lt;br /&gt;
/c0 Number of Drives = 6&lt;br /&gt;
/c0 Number of Units = 1&lt;br /&gt;
/c0 Total Optimal Units = 1&lt;br /&gt;
/c0 Not Optimal Units = 0&lt;br /&gt;
/c0 JBOD Export Policy = off&lt;br /&gt;
/c0 Disk Spinup Policy = 1&lt;br /&gt;
/c0 Spinup Stagger Time Policy (sec) = 1&lt;br /&gt;
/c0 Auto-Carving Policy = off&lt;br /&gt;
/c0 Auto-Carving Size = 2048 GB&lt;br /&gt;
/c0 Auto-Rebuild Policy = on&lt;br /&gt;
/c0 Controller Bus Type = PCIe&lt;br /&gt;
/c0 Controller Bus Width = 1 lane&lt;br /&gt;
/c0 Controller Bus Speed = 2.5 Gbps/lane&lt;br /&gt;
&lt;br /&gt;
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u0    RAID-5    OK             -       -       64K     4656.56   ON     ON&lt;br /&gt;
&lt;br /&gt;
Port   Status           Unit   Size        Blocks        Serial&lt;br /&gt;
---------------------------------------------------------------&lt;br /&gt;
p0     OK               u0     931.51 GB   1953525168    9QJ1Y017&lt;br /&gt;
p1     DEVICE-ERROR     u0     931.51 GB   1953525168    9QJ1ZN07&lt;br /&gt;
p2     OK               u0     931.51 GB   1953525168    9QJ2XK1R&lt;br /&gt;
p3     OK               u0     931.51 GB   1953525168    9QJ2010B&lt;br /&gt;
p4     OK               u0     1.36 TB     2930277168    6XW0L36T&lt;br /&gt;
p5     OK               u0     931.51 GB   1953525168    WD-WMATV2444836&lt;br /&gt;
p6     NOT-PRESENT      -      -           -             -&lt;br /&gt;
p7     NOT-PRESENT      -      -           -             -&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ctl  Date                        Severity  Alarm Message&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
c0   [Sat May 12 11:27:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x6AE571C&lt;br /&gt;
c0   [Sat May 12 19:16:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x40E62A23&lt;br /&gt;
c0   [Sat May 12 21:40:56 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon May 14 00:53:53 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFA7&lt;br /&gt;
c0   [Mon May 14 00:58:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAA&lt;br /&gt;
c0   [Mon May 14 04:35:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x8FEF2CF&lt;br /&gt;
c0   [Mon May 14 04:38:22 2012]  WARNING   Sector repair completed: port=0, LBA=0x8FEF2D1&lt;br /&gt;
c0   [Tue May 15 22:53:46 2012]  WARNING   Sector repair completed: port=0, LBA=0x13C2622&lt;br /&gt;
c0   [Wed May 16 00:39:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A67F&lt;br /&gt;
c0   [Wed May 16 00:39:37 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A685&lt;br /&gt;
c0   [Wed May 16 00:47:18 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A687&lt;br /&gt;
c0   [Sat May 19 00:01:44 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat May 19 04:46:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A68E&lt;br /&gt;
c0   [Sat May 19 13:37:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAC&lt;br /&gt;
c0   [Sat May 19 13:37:28 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAE&lt;br /&gt;
c0   [Sat May 19 13:37:47 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFB1&lt;br /&gt;
c0   [Sat May 19 13:38:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFB3&lt;br /&gt;
c0   [Sat May 19 21:47:45 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed May 23 12:21:41 2012]  INFO      Cache synchronization completed: unit=0&lt;br /&gt;
c0   [Fri May 25 00:08:19 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA76C&lt;br /&gt;
c0   [Fri May 25 00:08:34 2012]  WARNING   Sector repair completed: port=0, LBA=0x12E4901&lt;br /&gt;
c0   [Fri May 25 00:09:33 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA773&lt;br /&gt;
c0   [Fri May 25 00:39:12 2012]  WARNING   Sector repair completed: port=0, LBA=0x42C597B&lt;br /&gt;
c0   [Sat May 26 00:01:45 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat May 26 00:42:05 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1AC&lt;br /&gt;
c0   [Sat May 26 00:51:43 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1AE&lt;br /&gt;
c0   [Sat May 26 01:54:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x2F0D302&lt;br /&gt;
c0   [Sat May 26 02:06:38 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA777&lt;br /&gt;
c0   [Sat May 26 02:07:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x12E48FE&lt;br /&gt;
c0   [Sat May 26 04:20:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x2F0D306&lt;br /&gt;
c0   [Sat May 26 04:32:58 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1B1&lt;br /&gt;
c0   [Sat May 26 04:33:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1B3&lt;br /&gt;
c0   [Sat May 26 04:33:44 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1BA&lt;br /&gt;
c0   [Sat May 26 05:24:07 2012]  WARNING   Sector repair completed: port=1, LBA=0x3F83862&lt;br /&gt;
c0   [Sat May 26 05:25:09 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 06:08:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x4CDC6A2&lt;br /&gt;
c0   [Sat May 26 09:49:35 2012]  WARNING   Sector repair completed: port=1, LBA=0x6CACD4A&lt;br /&gt;
c0   [Sat May 26 18:10:44 2012]  WARNING   Sector repair completed: port=1, LBA=0x18F425EA&lt;br /&gt;
c0   [Sat May 26 19:45:40 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:22:52 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:15 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:22 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:35 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:41 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:49 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:57 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:24:02 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:54:41 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 22:00:30 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Jun  2 00:01:43 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun  2 00:30:17 2012]  WARNING   Sector repair completed: port=0, LBA=0x2B911E4&lt;br /&gt;
c0   [Sat Jun  2 00:50:57 2012]  WARNING   Sector repair completed: port=0, LBA=0x5A807CA6&lt;br /&gt;
c0   [Sat Jun  2 04:13:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x2D18291&lt;br /&gt;
c0   [Sat Jun  2 04:13:35 2012]  WARNING   Sector repair completed: port=0, LBA=0x2D1829F&lt;br /&gt;
c0   [Sat Jun  2 21:48:02 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Jun  4 04:40:34 2012]  WARNING   Sector repair completed: port=1, LBA=0x4AF8098F&lt;br /&gt;
c0   [Tue Jun  5 00:28:19 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5CD&lt;br /&gt;
c0   [Tue Jun  5 00:33:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5CF&lt;br /&gt;
c0   [Thu Jun  7 00:34:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B5F&lt;br /&gt;
c0   [Thu Jun  7 00:38:50 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B61&lt;br /&gt;
c0   [Fri Jun  8 00:07:13 2012]  WARNING   Sector repair completed: port=0, LBA=0xC131F6B&lt;br /&gt;
c0   [Sat Jun  9 00:01:41 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun  9 00:29:11 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C7614D&lt;br /&gt;
c0   [Sat Jun  9 00:38:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C76152&lt;br /&gt;
c0   [Sat Jun  9 04:02:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5D1&lt;br /&gt;
c0   [Sat Jun  9 04:02:52 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5D3&lt;br /&gt;
c0   [Sat Jun  9 04:07:32 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E12&lt;br /&gt;
c0   [Sat Jun  9 04:07:57 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E15&lt;br /&gt;
c0   [Sat Jun  9 04:08:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E17&lt;br /&gt;
c0   [Sat Jun  9 04:08:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E19&lt;br /&gt;
c0   [Sat Jun  9 04:15:04 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B64&lt;br /&gt;
c0   [Sat Jun  9 04:15:26 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B66&lt;br /&gt;
c0   [Sat Jun  9 04:15:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B68&lt;br /&gt;
c0   [Sat Jun  9 04:15:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B6C&lt;br /&gt;
c0   [Sat Jun  9 04:16:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B6E&lt;br /&gt;
c0   [Sat Jun  9 21:48:52 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Jun 14 00:40:10 2012]  WARNING   Sector repair completed: port=0, LBA=0x334F14B&lt;br /&gt;
c0   [Sat Jun 16 00:01:38 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 16 21:16:19 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Jun 19 02:03:43 2012]  WARNING   Sector repair completed: port=1, LBA=0xFE41EAD&lt;br /&gt;
c0   [Wed Jun 20 02:30:02 2012]  WARNING   Sector repair completed: port=1, LBA=0xD99145C&lt;br /&gt;
c0   [Sat Jun 23 00:01:36 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 23 04:27:04 2012]  WARNING   Sector repair completed: port=1, LBA=0x2FAD311&lt;br /&gt;
c0   [Sat Jun 23 06:52:38 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC8D&lt;br /&gt;
c0   [Sat Jun 23 06:53:03 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC91&lt;br /&gt;
c0   [Sat Jun 23 06:53:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC94&lt;br /&gt;
c0   [Sat Jun 23 17:00:22 2012]  WARNING   Sector repair completed: port=1, LBA=0xF9AC7C9&lt;br /&gt;
c0   [Sat Jun 23 21:15:19 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Jun 30 00:01:34 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 30 05:24:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FAA9E7&lt;br /&gt;
c0   [Sat Jun 30 14:49:39 2012]  WARNING   Sector repair completed: port=1, LBA=0x869931C&lt;br /&gt;
c0   [Sat Jun 30 21:31:05 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Jul  3 03:40:25 2012]  WARNING   Sector repair completed: port=1, LBA=0xD36C7F7&lt;br /&gt;
c0   [Fri Jul  6 02:50:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x3562470&lt;br /&gt;
c0   [Fri Jul  6 22:18:26 2012]  WARNING   Sector repair completed: port=1, LBA=0x3563173&lt;br /&gt;
c0   [Sat Jul  7 00:01:31 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul  7 00:50:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x76EE88&lt;br /&gt;
c0   [Sat Jul  7 00:50:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x76EE8F&lt;br /&gt;
c0   [Sat Jul  7 21:39:36 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sun Jul  8 02:51:05 2012]  WARNING   Sector repair completed: port=0, LBA=0x67759D&lt;br /&gt;
c0   [Sun Jul  8 02:53:55 2012]  WARNING   Sector repair completed: port=0, LBA=0x67759B&lt;br /&gt;
c0   [Tue Jul 10 16:17:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x15C8C695&lt;br /&gt;
c0   [Wed Jul 11 22:51:22 2012]  WARNING   Sector repair completed: port=1, LBA=0x355BBD0&lt;br /&gt;
c0   [Sat Jul 14 00:01:28 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 14 01:33:40 2012]  WARNING   Sector repair completed: port=1, LBA=0x1333BCF4&lt;br /&gt;
c0   [Sat Jul 14 03:36:23 2012]  WARNING   Sector repair completed: port=1, LBA=0x2174773&lt;br /&gt;
c0   [Sat Jul 14 11:26:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x7429AB7&lt;br /&gt;
c0   [Sat Jul 14 16:53:50 2012]  WARNING   Sector repair completed: port=1, LBA=0xA17EB3F&lt;br /&gt;
c0   [Sat Jul 14 21:19:25 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Jul 18 05:08:47 2012]  WARNING   Sector repair completed: port=1, LBA=0x17D62EDC&lt;br /&gt;
c0   [Wed Jul 18 05:14:15 2012]  WARNING   Sector repair completed: port=1, LBA=0x17D62EE1&lt;br /&gt;
c0   [Thu Jul 19 03:24:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x7733C3D&lt;br /&gt;
c0   [Thu Jul 19 03:25:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x773CEA5&lt;br /&gt;
c0   [Thu Jul 19 03:28:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x7733C42&lt;br /&gt;
c0   [Thu Jul 19 03:28:41 2012]  WARNING   Sector repair completed: port=0, LBA=0x773CEAF&lt;br /&gt;
c0   [Sat Jul 21 00:01:26 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 21 03:07:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC6936&lt;br /&gt;
c0   [Sat Jul 21 03:07:52 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC6938&lt;br /&gt;
c0   [Sat Jul 21 03:08:11 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC693A&lt;br /&gt;
c0   [Sat Jul 21 16:43:56 2012]  WARNING   Sector repair completed: port=0, LBA=0xD04C914&lt;br /&gt;
c0   [Sat Jul 21 16:45:31 2012]  WARNING   Sector repair completed: port=1, LBA=0xD456973&lt;br /&gt;
c0   [Sat Jul 21 21:14:29 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Jul 25 03:37:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x1F8E6C43&lt;br /&gt;
c0   [Sat Jul 28 00:01:24 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 28 01:45:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x11584AD&lt;br /&gt;
c0   [Sat Jul 28 18:54:25 2012]  WARNING   Sector repair completed: port=1, LBA=0x447C3E6C&lt;br /&gt;
c0   [Sat Jul 28 21:13:46 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug  1 03:20:11 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FEF&lt;br /&gt;
c0   [Fri Aug  3 00:50:03 2012]  WARNING   Sector repair completed: port=0, LBA=0xCED0ACA&lt;br /&gt;
c0   [Sat Aug  4 00:01:22 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug  4 00:52:51 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF3&lt;br /&gt;
c0   [Sat Aug  4 00:53:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF5&lt;br /&gt;
c0   [Sat Aug  4 00:53:33 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF7&lt;br /&gt;
c0   [Sat Aug  4 00:53:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF9&lt;br /&gt;
c0   [Sat Aug  4 00:54:00 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFB&lt;br /&gt;
c0   [Sat Aug  4 00:54:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFD&lt;br /&gt;
c0   [Sat Aug  4 00:54:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFF&lt;br /&gt;
c0   [Sat Aug  4 04:43:12 2012]  WARNING   Sector repair completed: port=1, LBA=0x16974289&lt;br /&gt;
c0   [Sat Aug  4 04:58:17 2012]  WARNING   Sector repair completed: port=1, LBA=0x1697428E&lt;br /&gt;
c0   [Sat Aug  4 20:54:53 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug  8 03:21:55 2012]  ERROR     Drive timeout detected: port=1&lt;br /&gt;
c0   [Wed Aug  8 15:31:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x1A366CD3&lt;br /&gt;
c0   [Sat Aug 11 00:01:21 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 11 20:40:51 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Aug 16 05:10:55 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C22593&lt;br /&gt;
c0   [Sat Aug 18 00:01:18 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 18 03:00:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C225A5&lt;br /&gt;
c0   [Sat Aug 18 03:43:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x23EE91E&lt;br /&gt;
c0   [Sat Aug 18 03:43:23 2012]  WARNING   Sector repair completed: port=1, LBA=0x23EE920&lt;br /&gt;
c0   [Sat Aug 18 17:00:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x137D066A&lt;br /&gt;
c0   [Sat Aug 18 17:00:29 2012]  WARNING   Sector repair completed: port=1, LBA=0x137D066D&lt;br /&gt;
c0   [Sat Aug 18 21:13:01 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug 22 01:36:08 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A0F&lt;br /&gt;
c0   [Wed Aug 22 01:37:42 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A13&lt;br /&gt;
c0   [Fri Aug 24 04:01:36 2012]  WARNING   Sector repair completed: port=1, LBA=0x55C1A5DF&lt;br /&gt;
c0   [Fri Aug 24 05:02:06 2012]  WARNING   Sector repair completed: port=1, LBA=0xCE3378A&lt;br /&gt;
c0   [Sat Aug 25 00:01:17 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 25 00:31:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x50F65D&lt;br /&gt;
c0   [Sat Aug 25 00:39:52 2012]  WARNING   Sector repair completed: port=0, LBA=0x678FF4&lt;br /&gt;
c0   [Sat Aug 25 03:43:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A15&lt;br /&gt;
c0   [Sat Aug 25 03:43:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A19&lt;br /&gt;
c0   [Sat Aug 25 03:43:58 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A1B&lt;br /&gt;
c0   [Sat Aug 25 03:44:30 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A21&lt;br /&gt;
c0   [Sat Aug 25 20:58:14 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug 29 04:57:15 2012]  WARNING   Sector repair completed: port=1, LBA=0xF3957EB&lt;br /&gt;
c0   [Sat Sep  1 00:01:15 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep  1 03:21:52 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC86&lt;br /&gt;
c0   [Sat Sep  1 03:22:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC88&lt;br /&gt;
c0   [Sat Sep  1 03:22:34 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC8A&lt;br /&gt;
c0   [Sat Sep  1 03:22:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC8C&lt;br /&gt;
c0   [Sat Sep  1 17:17:22 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD1&lt;br /&gt;
c0   [Sat Sep  1 17:17:45 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD3&lt;br /&gt;
c0   [Sat Sep  1 17:18:04 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD5&lt;br /&gt;
c0   [Sat Sep  1 21:36:56 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Sep  6 00:07:30 2012]  WARNING   Sector repair completed: port=0, LBA=0xDA3C64B&lt;br /&gt;
c0   [Thu Sep  6 00:32:56 2012]  WARNING   Sector repair completed: port=1, LBA=0x6BBA816&lt;br /&gt;
c0   [Sat Sep  8 00:01:13 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep  8 00:09:56 2012]  WARNING   Sector repair completed: port=0, LBA=0xDEBC958&lt;br /&gt;
c0   [Sat Sep  8 04:38:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x38D254F&lt;br /&gt;
c0   [Sat Sep  8 20:44:50 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Sep 10 01:26:34 2012]  WARNING   Sector repair completed: port=1, LBA=0xFFD8D5E&lt;br /&gt;
c0   [Wed Sep 12 00:33:48 2012]  WARNING   Sector repair completed: port=1, LBA=0xE8DB928&lt;br /&gt;
c0   [Wed Sep 12 00:36:33 2012]  WARNING   Sector repair completed: port=1, LBA=0x6D49411&lt;br /&gt;
c0   [Fri Sep 14 01:59:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x1467F1C&lt;br /&gt;
c0   [Fri Sep 14 02:08:27 2012]  WARNING   Sector repair completed: port=1, LBA=0x14C8ABD&lt;br /&gt;
c0   [Fri Sep 14 03:54:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x1580C915&lt;br /&gt;
c0   [Sat Sep 15 00:01:11 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 15 02:38:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178973&lt;br /&gt;
c0   [Sat Sep 15 02:59:02 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178975&lt;br /&gt;
c0   [Sat Sep 15 04:47:08 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA0356&lt;br /&gt;
c0   [Sat Sep 15 04:47:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA0359&lt;br /&gt;
c0   [Sat Sep 15 10:41:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x6DFD1EC&lt;br /&gt;
c0   [Sat Sep 15 13:25:23 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD100&lt;br /&gt;
c0   [Sat Sep 15 13:25:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD104&lt;br /&gt;
c0   [Sat Sep 15 13:25:54 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD106&lt;br /&gt;
c0   [Sat Sep 15 17:10:50 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178977&lt;br /&gt;
c0   [Sat Sep 15 20:59:57 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Sep 18 01:17:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x803B05B&lt;br /&gt;
c0   [Sat Sep 22 00:01:10 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 22 20:54:31 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Sep 25 01:56:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x26E3909&lt;br /&gt;
c0   [Sat Sep 29 00:01:08 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 29 02:04:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x146AC03&lt;br /&gt;
c0   [Sat Sep 29 10:58:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x6D4EB0E&lt;br /&gt;
c0   [Sat Sep 29 10:59:02 2012]  WARNING   Sector repair completed: port=0, LBA=0x6D4EB14&lt;br /&gt;
c0   [Sat Sep 29 11:22:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x6F79623&lt;br /&gt;
c0   [Sat Sep 29 13:50:48 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D65E&lt;br /&gt;
c0   [Sat Sep 29 13:51:11 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D661&lt;br /&gt;
c0   [Sat Sep 29 13:51:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D663&lt;br /&gt;
c0   [Sat Sep 29 20:57:34 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Oct  1 04:47:24 2012]  WARNING   Sector repair completed: port=0, LBA=0xC5BC6F2&lt;br /&gt;
c0   [Tue Oct  2 02:00:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x1547667&lt;br /&gt;
c0   [Tue Oct  2 02:01:56 2012]  WARNING   Sector repair completed: port=0, LBA=0x154766F&lt;br /&gt;
c0   [Tue Oct  2 05:02:31 2012]  WARNING   Sector repair completed: port=1, LBA=0xD67D054&lt;br /&gt;
c0   [Tue Oct  2 05:04:14 2012]  WARNING   Sector repair completed: port=1, LBA=0xD67D056&lt;br /&gt;
c0   [Wed Oct  3 01:22:12 2012]  WARNING   Sector repair completed: port=1, LBA=0x12AAF8CA&lt;br /&gt;
c0   [Thu Oct  4 04:29:22 2012]  WARNING   Sector repair completed: port=0, LBA=0x13E6F992&lt;br /&gt;
c0   [Thu Oct  4 05:10:51 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C252A4&lt;br /&gt;
c0   [Sat Oct  6 00:01:07 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct  6 19:41:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x5A5C3AE8&lt;br /&gt;
c0   [Sat Oct  6 21:01:05 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Oct  8 00:32:06 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C60D3E&lt;br /&gt;
c0   [Tue Oct  9 03:51:03 2012]  WARNING   Sector repair completed: port=1, LBA=0x89B5EC9&lt;br /&gt;
c0   [Thu Oct 11 04:21:17 2012]  WARNING   Sector repair completed: port=1, LBA=0x13F85833&lt;br /&gt;
c0   [Sat Oct 13 00:01:05 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 13 05:12:40 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA5134&lt;br /&gt;
c0   [Sat Oct 13 21:08:35 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Oct 16 03:53:50 2012]  WARNING   Sector repair completed: port=1, LBA=0x148AA1BD&lt;br /&gt;
c0   [Thu Oct 18 03:20:30 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C8DABCB&lt;br /&gt;
c0   [Thu Oct 18 04:52:50 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879057&lt;br /&gt;
c0   [Sat Oct 20 00:01:04 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 20 02:19:25 2012]  WARNING   Sector repair completed: port=1, LBA=0x174B012&lt;br /&gt;
c0   [Sat Oct 20 03:41:38 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D93B&lt;br /&gt;
c0   [Sat Oct 20 03:42:01 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D93D&lt;br /&gt;
c0   [Sat Oct 20 03:42:40 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D940&lt;br /&gt;
c0   [Sat Oct 20 03:42:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D942&lt;br /&gt;
c0   [Sat Oct 20 03:43:12 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D944&lt;br /&gt;
c0   [Sat Oct 20 03:43:26 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D948&lt;br /&gt;
c0   [Sat Oct 20 16:37:52 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879060&lt;br /&gt;
c0   [Sat Oct 20 16:38:15 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879062&lt;br /&gt;
c0   [Sat Oct 20 21:00:18 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Oct 20 23:49:01 2012]  WARNING   Sector repair completed: port=1, LBA=0x4473E908&lt;br /&gt;
c0   [Sun Oct 21 03:42:26 2012]  WARNING   Sector repair completed: port=1, LBA=0x175BADD5&lt;br /&gt;
c0   [Tue Oct 23 01:09:04 2012]  WARNING   Sector repair completed: port=1, LBA=0x6E524860&lt;br /&gt;
c0   [Fri Oct 26 03:21:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x802C61&lt;br /&gt;
c0   [Fri Oct 26 04:22:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x176353CD&lt;br /&gt;
c0   [Sat Oct 27 00:01:03 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 27 00:49:35 2012]  WARNING   Sector repair completed: port=0, LBA=0x802C65&lt;br /&gt;
c0   [Sat Oct 27 17:02:24 2012]  WARNING   Sector repair completed: port=1, LBA=0xC1FF26D&lt;br /&gt;
c0   [Sat Oct 27 17:09:06 2012]  WARNING   Sector repair completed: port=0, LBA=0xDF621AD&lt;br /&gt;
c0   [Sat Oct 27 21:30:57 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Oct 30 00:20:46 2012]  WARNING   Sector repair completed: port=0, LBA=0xE9FE2AB&lt;br /&gt;
c0   [Wed Oct 31 02:02:03 2012]  WARNING   Sector repair completed: port=0, LBA=0x1460C25&lt;br /&gt;
c0   [Wed Oct 31 02:04:05 2012]  WARNING   Sector repair completed: port=0, LBA=0x1460C28&lt;br /&gt;
c0   [Thu Nov  1 00:48:34 2012]  WARNING   Sector repair completed: port=1, LBA=0xA7C92BE&lt;br /&gt;
c0   [Thu Nov  1 05:04:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C252C2&lt;br /&gt;
&lt;br /&gt;
[root@backup1 /data/deprecated]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for failed drives and errors. From the output above, drives 0 and 1 are both racking up sector repairs, and port 1 has even logged a drive timeout, yet amazingly the RAID array still reports healthy; we should probably replace both drives. You can also see the automatic weekly verifies.&lt;br /&gt;
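&lt;br /&gt;
As a rough sketch, the repair entries can be tallied per port to see which drive is degrading fastest. The helper below is made up for illustration; feed it a saved copy of the controller alarm log:&lt;br /&gt;

```shell
# Sketch: tally "Sector repair completed" events per port from a saved
# alarm dump. repairs_by_port is a hypothetical helper, not a tw_cli command.
# Usage: repairs_by_port < c0_alarms.txt
repairs_by_port() {
    grep 'Sector repair completed' \
      | sed 's/.*port=\([0-9]*\),.*/\1/' \
      | sort | uniq -c | sort -rn
}
```

A port with hundreds of repairs is a replacement candidate even while the unit still shows OK.&lt;br /&gt;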
&lt;br /&gt;
Note: when rebuilding a degraded mirror, the CLI will show no rebuild progress:&lt;br /&gt;
&amp;lt;pre&amp;gt;Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u0    RAID-5    REBUILDING     0       -       64K     4656.56   OFF    ON&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== areca ===&lt;br /&gt;
&lt;br /&gt;
We are using an Areca controller on backup3.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@newbackup3 ~]# sh /root/verify.sh&lt;br /&gt;
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State&lt;br /&gt;
===============================================================================&lt;br /&gt;
  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(19.7%)&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
 #  Name             Disks TotalCap  FreeCap DiskChannels       State&lt;br /&gt;
===============================================================================&lt;br /&gt;
 1  Raid Set # 00        6 6000.0GB    0.0GB 123456             Checking&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
Date-Time            Device           Event Type            Elapsed Time Errors&lt;br /&gt;
===============================================================================&lt;br /&gt;
2012-12-05 20:40:58  ARC-1160-VOL#00  Start Checking&lt;br /&gt;
2012-12-01 05:06:04  ARC-1160-VOL#00  Complete Init         027:30:45&lt;br /&gt;
2012-11-30 01:35:19  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-06 01:34:52  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-30 01:33:36  ARC-1160-VOL#00  Stop Initialization   000:31:48&lt;br /&gt;
2012-11-30 01:01:47  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-06 00:58:13  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-30 00:57:26  ARC-1160-VOL#00  Stop Initialization   000:57:07&lt;br /&gt;
2012-11-30 00:00:19  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 23:56:48  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:52:58  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:50:14  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:43:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 23:10:07  ARC-1160-VOL#00  Stop Initialization   000:00:56&lt;br /&gt;
2012-11-29 23:09:11  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 23:08:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 23:08:10  ARC-1160-VOL#00  Stop Initialization   000:20:41&lt;br /&gt;
2012-11-29 22:47:29  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 22:46:59  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:45:55  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:44:53  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:42:06  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:40:50  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 22:40:04  ARC-1160-VOL#00  Stop Initialization   000:24:25&lt;br /&gt;
2012-11-29 22:15:38  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 22:15:11  000:000001215B00 Restart Init LBA Point&lt;br /&gt;
2026-08-05 22:15:10  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 21:56:38  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 21:56:12  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 21:56:04  IDE Channel #03  Device Inserted&lt;br /&gt;
2012-11-29 21:55:13  IDE Channel #04  Device Inserted&lt;br /&gt;
2012-11-29 21:55:03  IDE Channel #02  Device Inserted&lt;br /&gt;
2026-08-05 21:53:09  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:51:46  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:49:56  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:48:29  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:46:29  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:44:49  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:43:01  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:36:25  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:31:18  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:30:08  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:08:40  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:06:11  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:05:14  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:03:58  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:00:56  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:57:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:56:15  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:55:05  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 17:24:36  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 17:22:43  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:50:42  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:47:33  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:43:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:18:52  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:17:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:13:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:10:26  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:09:23  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:08:09  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:07:12  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:05:51  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:04:27  H/W MONITOR      Raid Powered On&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for failed drives and errors. &lt;br /&gt;
&lt;br /&gt;
When it proceeds to verifying, you can confirm with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@newbackup3 ~]# cli64 vsf info&lt;br /&gt;
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State&lt;br /&gt;
===============================================================================&lt;br /&gt;
  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(22.5%)&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
[root@newbackup3 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
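&lt;br /&gt;
If you want the percent-complete as a bare number (for a watch loop or a cron mail), it can be scraped out of the &amp;lt;tt&amp;gt;vsf info&amp;lt;/tt&amp;gt; output. A sketch; the helper name is made up and it assumes the State column reads Checking(NN.N%) as in the capture above:&lt;br /&gt;

```shell
# Sketch: extract percent-complete from `cli64 vsf info` output.
# check_pct is a hypothetical helper; pipe the cli64 output into it:
#   cli64 vsf info | check_pct
check_pct() {
    sed -n 's/.*Checking(\([0-9.]*\)%).*/\1/p'
}
```

An empty result means no volume is currently in the Checking state.&lt;br /&gt;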
&lt;br /&gt;
== Update OS list ==&lt;br /&gt;
&lt;br /&gt;
# check for any new VZ templates we want to offer: &amp;lt;tt&amp;gt;vzup2date -z&amp;lt;/tt&amp;gt;&lt;br /&gt;
# see if there are any OSes we want to include in our colo install list. Update 2 places: &amp;lt;tt&amp;gt;signup/html/colo_quote.html&amp;lt;/tt&amp;gt; &amp;amp; &amp;lt;tt&amp;gt;signup/html/step1.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
# update the mgmt database (ref_templates table, ref_systems table).&lt;br /&gt;
&lt;br /&gt;
= Infrequent tasks =&lt;br /&gt;
&lt;br /&gt;
== Free up space on gateway ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;newgateway /var/spool# cd clientmqueue/&lt;br /&gt;
newgateway /var/spool/clientmqueue# sh&lt;br /&gt;
# for f in `ls`; do rm $f; done&lt;br /&gt;
exit&amp;lt;/pre&amp;gt;&lt;br /&gt;
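&lt;br /&gt;
The same cleanup can be done in one pass with find, which avoids pushing thousands of queue-file names through the shell. A sketch; the helper name is made up:&lt;br /&gt;

```shell
# Sketch: remove all regular files under a queue directory, leaving the
# directory itself in place. purge_queue is a hypothetical helper.
purge_queue() {
    find "$1" -type f -exec rm -f {} +
}
# e.g. purge_queue /var/spool/clientmqueue
```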
&lt;br /&gt;
== Free up space on mail ==&lt;br /&gt;
&lt;br /&gt;
You can clear out root mail:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /var/log# ll -h /var/mail/root&lt;br /&gt;
-rw-------  1 root  mail    543K Dec 19 13:05 /var/mail/root&lt;br /&gt;
mail /var/log# rm /var/mail/root&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
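&lt;br /&gt;
A slightly gentler sketch: truncating the spool file instead of removing it keeps its ownership and mode, so nothing has to recreate it. The helper name below is made up:&lt;br /&gt;

```shell
# Sketch: empty a mail spool in place rather than rm'ing it, preserving
# ownership and permissions. truncate_spool is a hypothetical helper.
truncate_spool() {
    : > "$1"
}
# e.g. truncate_spool /var/mail/root
```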
&lt;br /&gt;
Or you can archive mail logs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /var/log# ls -l htt*&lt;br /&gt;
-rw-r--r--  1 root  wheel  297436931 Dec 19 13:26 httpd-access.log&lt;br /&gt;
-rw-r--r--  1 root  wheel    9824324 Jul  4 11:34 httpd-access.log.old.0.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    6884137 Mar 17  2012 httpd-access.log.old.1.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   18557444 Dec  3  2009 httpd-access.log.old.10.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14740263 Jan  9  2007 httpd-access.log.old.11.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14209465 Nov 28  2007 httpd-access.log.old.12.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   16874396 Feb 19  2012 httpd-access.log.old.3.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14554859 Jul 22  2011 httpd-access.log.old.4.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10513227 Feb 18  2011 httpd-access.log.old.5.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    7201946 Oct 29  2010 httpd-access.log.old.6.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10062537 May  6  2010 httpd-access.log.old.7.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10157042 Aug 12  2010 httpd-access.log.old.8.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   11909534 Mar  4  2010 httpd-access.log.old.9.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   59030930 Dec 19 13:01 httpd-error.log&lt;br /&gt;
-rw-r--r--  1 root  wheel    3413134 Mar  4  2010 httpd-error.log.0.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel     795515 May  1  2007 httpd-error.log.1.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    1142153 Nov 30  2007 httpd-error.log.2.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    2325801 Feb 18  2011 httpd-error.log.gz&lt;br /&gt;
&lt;br /&gt;
mail /var/log# sh&lt;br /&gt;
# for f in 12 11 10 9 8 7 6 5 4 3 2 1 0; do g=`echo $f+1|bc`; mv httpd-access.log.old.$f.gz httpd-access.log.old.$g.gz; done&lt;br /&gt;
# mv httpd-access.log httpd-access.log.old.0&lt;br /&gt;
# touch httpd-access.log&lt;br /&gt;
# apachectl restart&lt;br /&gt;
# gzip httpd-access.log.old.0&lt;br /&gt;
&lt;br /&gt;
# for f in 2 1 0; do g=`echo $f+1|bc`; mv httpd-error.log.$f.gz httpd-error.log.$g.gz; done&lt;br /&gt;
# mv httpd-error.log httpd-error.log.0&lt;br /&gt;
# touch httpd-error.log&lt;br /&gt;
# apachectl restart&lt;br /&gt;
# gzip httpd-error.log.0&lt;br /&gt;
# exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
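&lt;br /&gt;
The renumbering loops above can be generalized. As a sketch (the function name is made up), this shifts an entire .N.gz chain up by one, highest number first so nothing gets overwritten:&lt;br /&gt;

```shell
# Sketch: shift base.N.gz -> base.(N+1).gz for N = max..0, mirroring the
# bc loops above. rotate_gz is a hypothetical helper, not an existing tool.
rotate_gz() {
    base=$1 n=$2
    while [ "$n" -ge 0 ]; do
        if [ -f "$base.$n.gz" ]; then
            mv "$base.$n.gz" "$base.$((n+1)).gz"
        fi
        n=$((n-1))
    done
}
# e.g. rotate_gz httpd-access.log.old 12
```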
&lt;br /&gt;
== Free up space on bwdb2 ==&lt;br /&gt;
&lt;br /&gt;
You can either remove items from &amp;lt;tt&amp;gt;/usr/home/archive&amp;lt;/tt&amp;gt; or scp them to backup3:/data/bwdb2/archive.&lt;br /&gt;
&lt;br /&gt;
== Free up space on backup1 ==&lt;br /&gt;
&lt;br /&gt;
backup1 is our primary customer backup system. As usage grows over time, it needs to be purged of old files regularly, and the easiest way to free space is to remove deprecated files. These mostly consist of cancelled customers and temporary dump/storage files (created during dumps and restores). Our standard policy is to keep cancelled customers&#039; files for 6 months and then remove them. (As far as customers know their data is purged immediately, but we hang onto it just in case; in some cases we cancel a server for non-payment, and keeping the files makes it easy to restore the system.) To find files to remove:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 ~]# cd /data/deprecated/&lt;br /&gt;
[root@backup1 /data/deprecated]# ls&lt;br /&gt;
2101-migrated-20120317.tgz                old-683-cxld-20121021.tgz&lt;br /&gt;
69.55.230.2-wwwbackup                     old-744-cxld-20120708.tgz&lt;br /&gt;
991-DONTDELETE.tgz                        old-809-cxld-20120609.tgz&lt;br /&gt;
archive-col02050-mdfile-cxld-20120409.gz  old-854-cxld-20120621.tgz&lt;br /&gt;
col01371.tgz                              old-931-cxld-20060513.tgz&lt;br /&gt;
deleteme_ubuntu-10.10-x86_20111205        old-col00123-mdfile-noarchive-20120417.gz&lt;br /&gt;
jail10_old                                old-col00147-vnfile-cxld-20120828.gz&lt;br /&gt;
jail14_rsync_old                          old-col00419-dump-cxld-20120224.gz&lt;br /&gt;
jail15_old                                old-col01098-vnfile-cxld-20120827.gz&lt;br /&gt;
jail3_old                                 old-col01278-dump-cxld-20120822&lt;br /&gt;
jail4_old                                 old-col01517-dump-cxld-20120828&lt;br /&gt;
jail5_old                                 old-col01669-dump-cxld-20120203.gz&lt;br /&gt;
old-1009-cxld-20120608.tgz                old-col01687-dump-cxld-20120909&lt;br /&gt;
old-1012-cxld-20120411.tgz                old-col01790-dump-cxld-20120828&lt;br /&gt;
old-1052-cxld-20120721.tgz                old-col01812-dump-cxld-20120820&lt;br /&gt;
old-10631-cxld-20120622.tgz               old-col01938-mdfile-cxld-20120619.gz&lt;br /&gt;
old-10632-cxld-20120622.tgz               old-col02095-mdfile-noarchive-20120523.gz&lt;br /&gt;
old-10633-cxld-20120622.tgz               olddebian-3.0-v15-20110610.tgz&lt;br /&gt;
old-1236-cxld-20120621.tgz                oldmod_frontpage-deb30-v15-20110610.tgz&lt;br /&gt;
old-1381-cxld-20120404.tgz                oldmod_perl-deb30-v15-20110610.tgz&lt;br /&gt;
old-1422-cxld-20120721.tgz                oldmod_ssl-deb30-v15-20110610.tgz&lt;br /&gt;
old-14681-cxld-20120619.tgz               oldmysql-deb30-v15-20110610.tgz&lt;br /&gt;
old-1544-cxld-20120626.tgz                oldproftpd-deb30-v15-20110610.tgz&lt;br /&gt;
old-18351-cxld-20120605.tgz               old_virt14&lt;br /&gt;
old-1853-cxld-20120910.tgz                old_virt18&lt;br /&gt;
old-1963-cxld-20120206.tgz                oldwebmin-deb30-v15-20110610.tgz&lt;br /&gt;
old-1967-cxld-20120605.tgz                suse.virt11.20120421.tgz&lt;br /&gt;
old-1981-noarchive-20120729.tgz           virt11&lt;br /&gt;
old-2030-migrated-noarchive-20120727.tgz  virt12_old&lt;br /&gt;
old-2037-cxld-20120716.tgz                virt13_old&lt;br /&gt;
old-2065-cxld-20120727.tgz                virt16_old&lt;br /&gt;
old-2068-cxld-20120424.tgz                virt4_old&lt;br /&gt;
old-2085-cxld-20120531.tgz                virt5_old&lt;br /&gt;
old-364-cxld-20120904.tgz                 virt6_old&lt;br /&gt;
old-446-cxld-20120512.tgz                 virt7_old&lt;br /&gt;
old-613-cxld-20120601.tgz                 virt8_old&lt;br /&gt;
[root@backup1 /data/deprecated]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
virtX_old and jailX_old are permanently archived, so ignore those as well as anything else marked not to delete or otherwise suspicious. Likewise, probably a good idea to try to hang onto oldTEMPLATE.gz as long as we can as well.&lt;br /&gt;
Most of the files we want to delete are named with the date they were deprecated, which makes this easy. For example, to remove files deprecated six months ago (running this in October):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 /data/deprecated]# ls old*201204*&lt;br /&gt;
old-1012-cxld-20120411.tgz  old-2068-cxld-20120424.tgz&lt;br /&gt;
old-1381-cxld-20120404.tgz  old-col00123-mdfile-noarchive-20120417.gz&lt;br /&gt;
[root@backup1 /data/deprecated]# rm old*201204*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
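Since the deprecation date is embedded in each file name, the preview step can be sketched as a small helper that lists everything stamped at or before a cutoff. This is a hypothetical sketch (no such helper exists on backup1); the cutoff and file names below are illustrative:

```shell
# Hypothetical helper: list archives whose embedded YYYYMM stamp is at or
# before a cutoff, so you can eyeball them before running rm.
cutoff=201204

candidates() {
  for f in "$@"; do
    # pull the first YYYYMM-looking run of digits out of the name
    stamp=$(printf '%s\n' "$f" | grep -o '20[0-9][0-9][01][0-9]' | head -n1)
    if [ -n "$stamp" ] && [ "$stamp" -le "$cutoff" ]; then
      printf '%s\n' "$f"
    fi
  done
}

# e.g. run against old-* in /data/deprecated; shown here with sample names
candidates old-1012-cxld-20120411.tgz old-1853-cxld-20120910.tgz virt11
```

Files with no date stamp (virtX_old, oldTEMPLATE.gz, etc.) never match, which matches the rule above about leaving undated archives alone.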
&lt;br /&gt;
Every few months you will also want to remove some of the snapshot archives for mail. We typically save the 1st, 10th, and 20th of each month. To do this, set aside the dates you want to keep, remove a month at a time, then restore the set-aside dates. Here&#039;s how that works:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 /data/www/daily]# ls&lt;br /&gt;
05                     08-10-11  10-04-10  11-10-10  12-07-29  12-09-21  12-11-14&lt;br /&gt;
06                     08-10-21  10-04-20  11-10-20  12-07-30  12-09-22  12-11-15&lt;br /&gt;
06-06-01-usr-home.tgz  08-11-01  10-05-01  11-11-01  12-07-31  12-09-23  12-11-16&lt;br /&gt;
06-07-01-usr-home.tgz  08-11-10  10-05-11  11-11-10  12-08-01  12-09-24  12-11-17&lt;br /&gt;
06-08-01-usr-home.tgz  08-11-20  10-05-20  11-11-20  12-08-02  12-09-25  12-11-18&lt;br /&gt;
06-09-01-usr-home.tgz  08-12-01  10-06-01  11-12-01  12-08-03  12-09-26  12-11-19&lt;br /&gt;
06-11-10               08-12-10  10-06-10  11-12-10  12-08-04  12-09-27  12-11-20&lt;br /&gt;
06-12-21               08-12-20  10-06-20  11-12-20  12-08-05  12-09-28  12-11-21&lt;br /&gt;
07-01-10               09-01-01  10-07-01  12-01-01  12-08-06  12-09-29  12-11-22&lt;br /&gt;
07-01-20               09-01-10  10-07-10  12-01-10  12-08-07  12-09-30  12-11-23&lt;br /&gt;
07-02-10               09-01-20  10-07-20  12-01-20  12-08-08  12-10-01  12-11-24&lt;br /&gt;
07-02-20               09-02-01  10-08-01  12-02-01  12-08-09  12-10-02  12-11-25&lt;br /&gt;
07-03-01               09-02-10  10-08-10  12-02-10  12-08-10  12-10-03  12-11-26&lt;br /&gt;
07-03-20               09-02-20  10-08-20  12-02-20  12-08-11  12-10-04  12-11-27&lt;br /&gt;
07-04-01               09-03-01  10-09-01  12-03-01  12-08-12  12-10-05  12-11-28&lt;br /&gt;
07-04-10               09-03-10  10-09-10  12-03-10  12-08-13  12-10-06  12-11-29&lt;br /&gt;
07-04-20               09-03-20  10-09-20  12-03-20  12-08-14  12-10-07  12-11-30&lt;br /&gt;
07-05-01               09-04-01  10-10-01  12-04-01  12-08-15  12-10-08  12-12-01&lt;br /&gt;
07-05-10               09-04-10  10-10-10  12-04-10  12-08-16  12-10-09  12-12-02&lt;br /&gt;
07-05-20               09-04-20  10-10-20  12-04-20  12-08-17  12-10-10  12-12-03&lt;br /&gt;
07-06-01               09-05-01  10-11-01  12-05-01  12-08-18  12-10-11  12-12-04&lt;br /&gt;
07-06-10               09-05-10  10-11-10  12-05-10  12-08-19  12-10-12  12-12-05&lt;br /&gt;
07-06-20               09-05-20  10-11-20  12-05-20  12-08-20  12-10-13  12-12-06&lt;br /&gt;
07-07-20               09-06-01  10-12-01  12-06-01  12-08-21  12-10-14  12-12-07&lt;br /&gt;
07-08-10               09-06-10  10-12-10  12-06-10  12-08-22  12-10-15  12-12-08&lt;br /&gt;
07-08-20               09-06-20  10-12-20  12-06-20  12-08-23  12-10-16  12-12-09&lt;br /&gt;
07-09-01               09-07-01  11-01-01  12-07-01  12-08-24  12-10-17  12-12-10&lt;br /&gt;
07-10-01               09-07-10  11-01-10  12-07-02  12-08-25  12-10-18  12-12-11&lt;br /&gt;
07-10-10               09-07-20  11-01-21  12-07-03  12-08-26  12-10-19  12-12-12&lt;br /&gt;
07-10-20               09-08-01  11-02-01  12-07-04  12-08-27  12-10-20  12-12-13&lt;br /&gt;
07-12-01               09-08-10  11-02-10  12-07-05  12-08-28  12-10-21  12-12-14&lt;br /&gt;
07-12-10               09-08-20  11-02-20  12-07-06  12-08-29  12-10-22  12-12-15&lt;br /&gt;
08-01-01               09-09-01  11-03-01  12-07-07  12-08-30  12-10-23  12-12-16&lt;br /&gt;
08-01-20               09-09-10  11-03-10  12-07-08  12-08-31  12-10-24  12-12-17&lt;br /&gt;
08-02-20               09-09-20  11-03-20  12-07-09  12-09-01  12-10-25  12-12-18&lt;br /&gt;
08-03-01               09-10-01  11-04-01  12-07-10  12-09-02  12-10-26  12-12-19&lt;br /&gt;
08-03-10               09-10-10  11-04-10  12-07-11  12-09-03  12-10-27  12-12-20&lt;br /&gt;
08-03-20               09-10-20  11-04-20  12-07-12  12-09-04  12-10-28  12-12-21&lt;br /&gt;
08-04-01               09-11-01  11-05-01  12-07-13  12-09-05  12-10-29  12-12-22&lt;br /&gt;
08-04-20               09-11-10  11-05-10  12-07-14  12-09-06  12-10-30  12-12-23&lt;br /&gt;
08-05-01               09-11-20  11-05-20  12-07-15  12-09-07  12-10-31  12-12-24&lt;br /&gt;
08-05-10               09-12-01  11-06-01  12-07-16  12-09-08  12-11-01  12-12-25&lt;br /&gt;
08-06-10               09-12-10  11-06-10  12-07-17  12-09-09  12-11-02  12-12-26&lt;br /&gt;
08-06-20               09-12-20  11-06-20  12-07-18  12-09-10  12-11-03  12-12-27&lt;br /&gt;
08-07-02               10-01-01  11-07-01  12-07-19  12-09-11  12-11-04  12-12-28&lt;br /&gt;
08-07-10               10-01-10  11-07-10  12-07-20  12-09-12  12-11-05  2008-10-23&lt;br /&gt;
08-07-20               10-01-20  11-07-20  12-07-21  12-09-13  12-11-06  bb.tgz&lt;br /&gt;
08-08-01               10-02-01  11-08-01  12-07-22  12-09-14  12-11-07  boot&lt;br /&gt;
08-08-10               10-02-10  11-08-10  12-07-23  12-09-15  12-11-08  current&lt;br /&gt;
08-08-21               10-02-20  11-08-20  12-07-24  12-09-16  12-11-09  hold&lt;br /&gt;
08-09-01               10-03-01  11-09-01  12-07-25  12-09-17  12-11-10&lt;br /&gt;
08-09-10               10-03-10  11-09-10  12-07-26  12-09-18  12-11-11&lt;br /&gt;
08-09-21               10-03-20  11-09-20  12-07-27  12-09-19  12-11-12&lt;br /&gt;
08-10-01               10-04-01  11-10-01  12-07-28  12-09-20  12-11-13&lt;br /&gt;
[root@backup1 /data/www/daily]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So we see that everything up to July 2012 has already been pruned down to the saved dates. To prune July 2012 we do the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mv 12-07-01 hold&lt;br /&gt;
mv 12-07-10 hold&lt;br /&gt;
mv 12-07-20 hold&lt;br /&gt;
rm -fr 12-07*&lt;br /&gt;
mv hold/* .&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=Routine_Maintenance&amp;diff=2205</id>
		<title>Routine Maintenance</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=Routine_Maintenance&amp;diff=2205"/>
		<updated>2020-02-25T18:02:40Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* check NetHere */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Daily Tasks =&lt;br /&gt;
&lt;br /&gt;
== check load graphs == &lt;br /&gt;
&lt;br /&gt;
Click on the Load link in mgmt &lt;br /&gt;
&lt;br /&gt;
This screen shows you load levels on our servers and network traffic for critical machines (firewalls, backup servers).&lt;br /&gt;
&lt;br /&gt;
If you see load high or increasing &lt;br /&gt;
&lt;br /&gt;
FreeBSD: &lt;br /&gt;
run [[VPS_Management#jtop|jtop]] (or [[VPS_Management#jt|jt]] on &amp;gt; 7.x) and see if there are any runaway processes. Here are some example top entries that are&lt;br /&gt;
definitely runaway processes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;79481 root      64   0  2256K  1056K CPU1   1  58:16 87.40% 87.40% nano&lt;br /&gt;
50650   1000    64   0  1852K  1112K RUN    0 207.9H 84.08% 84.08% screen&lt;br /&gt;
14829 www        2   0 39100K 31736K accept 0  104:24  46.54%  6.54% httpd&lt;br /&gt;
42065 root      61   0  1300K   844K RUN    1  47.8H 91.36% 91.36% ee&lt;br /&gt;
1328 www       56   0 18440K 10796K CPU1   0  64.4H 97.71% 97.71% httpd&lt;br /&gt;
26251 user      57   0  6124K  1160K CPU1   1  82.9H 98.44% 98.44% screen&lt;br /&gt;
89874 root      60   0  1352K   892K RUN    1  33.8H 65.82% 65.82% dialog&lt;br /&gt;
38656   1000    64   0  3088K  2136K CPU0   0 806:13 97.95% 97.95% StutBot&lt;br /&gt;
27630 root      64   0  1396K   972K RUN    1  76.8H 86.47% 86.47% ee&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Linux:&lt;br /&gt;
run [[VPS_Management#vwe|vwe]] to see which VPSes have high loads. From there, run &amp;lt;tt&amp;gt;[[VPS_Management#vp|vp]] &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt; and/or &amp;lt;tt&amp;gt;[[VPS_Management#vt|vt]] &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt; to see what&#039;s going on in that system.&lt;br /&gt;
[[VPS_Management#vzstat|vzstat]] will also give you a nice picture of what&#039;s going on; systems with high numbers in the mlat column are likely culprits.&lt;br /&gt;
&lt;br /&gt;
Examples of out-of-control processes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;12183 nobody    16   0  4916 1348  1340 R    45.5  0.0  4249m httpd&lt;br /&gt;
29266 #502      16   0  1852  796   792 R    22.5  0.0  1104m vim&lt;br /&gt;
23860 #41       16   0  5472 5472  2076 R    98.9  0.2  31:41 python&lt;br /&gt;
19227 bin       19   0  1688  716   652 R    99.9  0.0 321:08 wtrs_ui&lt;br /&gt;
 7762 apache    16   0   268  236   224 R    85.7  0.0  1010m ptrace&lt;br /&gt;
 4624 #501      20   0  4304 2400  2044 R    53.6  0.1 284:32 YoSucker&lt;br /&gt;
20451 #506      20   0  1876  820   816 R    17.2  0.0 169:35 vim&lt;br /&gt;
 8834 #514      20   0   900  724   672 R    77.6  0.0 382:30 neostats&lt;br /&gt;
31815 apache    14   0  3176 3176  1696 R    74.4  0.1   6:15 counter&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Just kill -9 them and be done with it.&lt;br /&gt;
&lt;br /&gt;
Also, anytime you see `kmod` or `ptrace`, kill those immediately no matter how much CPU they are using; they are attempts to exploit the Linux ptrace bug. They won&#039;t work, but they suck a lot of CPU...&lt;br /&gt;
&lt;br /&gt;
Also, any other process that has been running at 90-100% CPU usage for a long period of time should be killed, except for mysqld processes on FreeBSD.&lt;br /&gt;
&lt;br /&gt;
However, there is an exception:&lt;br /&gt;
&lt;br /&gt;
if it is a mysqld, we don&#039;t want to kill the customer&#039;s database. What you want to do is run &amp;lt;tt&amp;gt;[[VPS_Management#jpid|jpid]] &amp;lt;pid&amp;gt;&amp;lt;/tt&amp;gt; to see who owns it, and then email them the paste containing the instructions for the nanny. Or you can simply do a &amp;lt;tt&amp;gt;kill -1 PID&amp;lt;/tt&amp;gt; on the process to restart it.&lt;br /&gt;
&lt;br /&gt;
=== Load averages jump at night ===&lt;br /&gt;
&lt;br /&gt;
The load averages on the FreeBSD systems may jump up at night between 1 and 4 am because the backups are running. If this is what is causing the jump in load, you will see processes like `rsync` in top eating a lot of CPU time.&lt;br /&gt;
&lt;br /&gt;
== check backups ==&lt;br /&gt;
&lt;br /&gt;
mgmt -&amp;gt; Monitoring -&amp;gt; Backups and make sure every machine was backed up the previous night.&lt;br /&gt;
Also look at df on backup1 and backup2 to make sure no disk is approaching full, though bb should warn us in advance. Please note: when a backup script on any particular system hits an error, it will generate an email to support@johncompanies.com, so you will know immediately the next day if the directory to be backed up has been moved or no longer exists. A paste exists for notifying the customer of a non-existent file/dir.&lt;br /&gt;
&lt;br /&gt;
== check bb for warnings  ==&lt;br /&gt;
mgmt -&amp;gt; BigBrother&lt;br /&gt;
&lt;br /&gt;
Some events don&#039;t generate pages (on purpose). You will only see them by going to the bb main page.&lt;br /&gt;
&lt;br /&gt;
== check jail5 for crashed VPSs  ==&lt;br /&gt;
&lt;br /&gt;
On jail5&lt;br /&gt;
  &lt;br /&gt;
  notrunning&lt;br /&gt;
&lt;br /&gt;
To restart a VPS &lt;br /&gt;
&lt;br /&gt;
  vm restart col0XXXX&lt;br /&gt;
&lt;br /&gt;
== Check NetHere  ==&lt;br /&gt;
&lt;br /&gt;
Check the NetHere servers&lt;br /&gt;
&lt;br /&gt;
=== Mail systems ===&lt;br /&gt;
Check for possible SPAMMERs.&lt;br /&gt;
&lt;br /&gt;
==== Incoming ====&lt;br /&gt;
Check the count of customer logins on mta-1 and mta-2 for possible&lt;br /&gt;
customer SPAM compromises:&lt;br /&gt;
&lt;br /&gt;
  login_count /logs/maillog | tail -30&lt;br /&gt;
  &lt;br /&gt;
==== Outgoing ====&lt;br /&gt;
&lt;br /&gt;
Check outgoing queues on relay-1 and relay-2&lt;br /&gt;
  &lt;br /&gt;
  mail_count | tail -30&lt;br /&gt;
&lt;br /&gt;
To clean up the outgoing queues of unwanted SPAM on relay-1 and relay-2:&lt;br /&gt;
&lt;br /&gt;
  mail_cleanup [&amp;lt;sender&#039;s domain&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
=== Nagios ===&lt;br /&gt;
&lt;br /&gt;
Check for other problems on NetHere servers&lt;br /&gt;
&lt;br /&gt;
  https://nagios.nethere.net&lt;br /&gt;
&lt;br /&gt;
=== Cacti ===&lt;br /&gt;
&lt;br /&gt;
Check bandwidth usage on servers&lt;br /&gt;
&lt;br /&gt;
  https://cacti.nethere.net&lt;br /&gt;
&lt;br /&gt;
= Monthly Tasks =&lt;br /&gt;
== rotate pine sent mail (1st of month) ==&lt;br /&gt;
On the 1st of the month, before any emails are sent out, quit out of pine, then log back in. Sent mail from last month will be archived.&lt;br /&gt;
If you mess up and do it on the 3rd (for example), you can go into the previous month&#039;s saved email and save emails from the current month into the &amp;lt;tt&amp;gt;sent-mail&amp;lt;/tt&amp;gt; (current month) mailbox.&lt;br /&gt;
&lt;br /&gt;
== b/w caps ==&lt;br /&gt;
On the 1st: remove any bwcaps put into the firewall (this only really applies if a bwcap was added because someone went over on b/w):&lt;br /&gt;
&lt;br /&gt;
 ipfw list|grep pipe&lt;br /&gt;
 ipfw del [each rule listed]&lt;br /&gt;
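The two steps above can be sketched as a loop: parse the rule numbers out of `ipfw list|grep pipe` and delete each one. This is a hedged sketch, not a tested firewall script; the sample output is made up for illustration, and in real ipfw output the rule number is the first field. Always eyeball what the grep matches before deleting:

```shell
# Hypothetical sketch: collect ipfw rule numbers for rules that reference a
# pipe. In ipfw list output, the rule number is the first field.
pipe_rules() {
  grep pipe | awk '{print $1}'
}

# Illustrative output only; not captured from a real firewall.
sample='00100 allow ip from any to any
00300 pipe 3 ip from 69.55.230.1 to any
00400 pipe 17331 ip from any to 69.55.231.5'

printf '%s\n' "$sample" | pipe_rules

# On the firewall itself, something like:
#   for r in $(ipfw list | grep pipe | awk '{print $1}'); do ipfw del "$r"; done
```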
&lt;br /&gt;
NOTE: this cronjob on newgateway will do some of that for you, provided you used one of the following pipe #s:&lt;br /&gt;
 0 0 1 * * /sbin/ipfw del 3  4 5 17331&lt;br /&gt;
&lt;br /&gt;
-------------------&lt;br /&gt;
&lt;br /&gt;
We really don’t do this anymore since we have centralized traffic accounting with netflow, but for posterity:&lt;br /&gt;
&lt;br /&gt;
Make sure all machines reset counters to 0 after midnight on the 1st&lt;br /&gt;
Make sure they dumped a counter&lt;br /&gt;
&lt;br /&gt;
On each jail run:&lt;br /&gt;
 trafficgather.pl&lt;br /&gt;
&lt;br /&gt;
And on each virt:&lt;br /&gt;
 linuxtrafficgather.pl&lt;br /&gt;
&lt;br /&gt;
== Monthly RAID checks ==&lt;br /&gt;
&lt;br /&gt;
Every month we check the health of, and verify the parity on, all our RAID-based systems.&lt;br /&gt;
To facilitate this, we&#039;ve created a simple script to start the process:&lt;br /&gt;
&lt;br /&gt;
 sh /root/verify.sh&lt;br /&gt;
&lt;br /&gt;
=== Adaptec controllers ===&lt;br /&gt;
Here&#039;s some sample output:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /usr/local/www/scripts# sh /root/verify.sh&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; container list /f&lt;br /&gt;
Executing: container list /full=TRUE&lt;br /&gt;
Num          Total  Oth Chunk          Scsi   Partition&lt;br /&gt;
Creation        System&lt;br /&gt;
Label Type   Size   Ctr Size   Usage   B:ID:L Offset:Size   State   RO Lk Task    Done%  Ent&lt;br /&gt;
Date   Time      Files&lt;br /&gt;
----- ------ ------ --- ------ ------- ------ ------------- ------- -- -- ------- ------ ---&lt;br /&gt;
------ -------- ------&lt;br /&gt;
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
 /dev/aacd0           mirror0          0:00:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
&lt;br /&gt;
 1    Mirror 33.9GB            Open    0:02:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
 /dev/aacd1           mirror1          0:03:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; disk list /f&lt;br /&gt;
Executing: disk list /full=TRUE&lt;br /&gt;
&lt;br /&gt;
B:ID:L  Device Type     Removable media  Vendor-ID Product-ID        Rev   Blocks    Bytes/Bl&lt;br /&gt;
ock Usage            Shared Rate&lt;br /&gt;
------  --------------  ---------------  --------- ----------------  ----- --------- --------&lt;br /&gt;
--- ---------------- ------ ----&lt;br /&gt;
0:00:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:01:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:02:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:03:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; disk show smart&lt;br /&gt;
Executing: disk show smart&lt;br /&gt;
&lt;br /&gt;
        Smart    Method of         Enable&lt;br /&gt;
        Capable  Informational     Exception  Performance  Error&lt;br /&gt;
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count&lt;br /&gt;
------  -------  ----------------  ---------  -----------  ------&lt;br /&gt;
0:00:0     Y            6             Y           N             0&lt;br /&gt;
0:01:0     Y            6             Y           N             0&lt;br /&gt;
0:02:0     Y            6             Y           N             0&lt;br /&gt;
0:03:0     Y            6             Y           N             0&lt;br /&gt;
0:06:0     N&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; task list&lt;br /&gt;
Executing: task list&lt;br /&gt;
&lt;br /&gt;
Controller Tasks&lt;br /&gt;
&lt;br /&gt;
TaskId Function  Done%  Container State Specific1 Specific2&lt;br /&gt;
------ -------- ------- --------- ----- --------- ---------&lt;br /&gt;
&lt;br /&gt;
No tasks currently running on controller&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; dia sh hi&lt;br /&gt;
Executing: diagnostic show history&lt;br /&gt;
No switches specified, defaulting to &amp;quot;/current&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 *** HISTORY BUFFER FROM CURRENT CONTROLLER RUN ***&lt;br /&gt;
&lt;br /&gt;
[00]: GetDiskLogEntry: container - 1, entry return 0&lt;br /&gt;
[01]: Container 1 started SCRUB task&lt;br /&gt;
[02]: Starting Mirror:1 scrub&lt;br /&gt;
[03]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[04]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[05]: UpdateDiskLogIndex - Set   - container 0, index 1&lt;br /&gt;
[06]: GetDiskLogEntry: container - 0, entry return 1&lt;br /&gt;
[07]: Container 0 started SCRUB task&lt;br /&gt;
[08]: Starting Mirror:0 scrub&lt;br /&gt;
[09]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[10]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[11]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[12]: Clear disk log: sector - 80, driveno 2&lt;br /&gt;
[13]: Clear disk log: sector - 80, driveno 3&lt;br /&gt;
[14]: Container 1 completed SCRUB task:&lt;br /&gt;
[15]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[16]: Clear disk log: sector - 81, driveno 1&lt;br /&gt;
[17]: Clear disk log: sector - 81, driveno 0&lt;br /&gt;
[18]: Container 0 completed SCRUB task:&lt;br /&gt;
[19]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[20]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[21]: Container 0 started SCRUB task&lt;br /&gt;
[22]: Starting Mirror:0 scrub&lt;br /&gt;
[23]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[24]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[25]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[26]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[27]: Container 1 started SCRUB task&lt;br /&gt;
[28]: Starting Mirror:1 scrub&lt;br /&gt;
[29]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[30]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[31]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[32]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[33]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[34]: Container 1 completed SCRUB task:&lt;br /&gt;
[35]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[36]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[37]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[38]: Container 0 completed SCRUB task:&lt;br /&gt;
[39]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[40]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[41]: Container 0 started SCRUB task&lt;br /&gt;
[42]: Starting Mirror:0 scrub&lt;br /&gt;
[43]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[44]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[45]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[46]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[47]: Container 1 started SCRUB task&lt;br /&gt;
[48]: Starting Mirror:1 scrub&lt;br /&gt;
[49]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[50]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[51]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[52]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[53]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[54]: Container 1 completed SCRUB task:&lt;br /&gt;
[55]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[56]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[57]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[58]: Container 0 completed SCRUB task:&lt;br /&gt;
[59]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[60]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[61]: Container 0 started SCRUB task&lt;br /&gt;
[62]: Starting Mirror:0 scrub&lt;br /&gt;
[63]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[64]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[65]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[66]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[67]: Container 1 started SCRUB task&lt;br /&gt;
[68]: Starting Mirror:1 scrub&lt;br /&gt;
[69]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[70]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[71]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[72]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[73]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[74]: Container 1 completed SCRUB task:&lt;br /&gt;
[75]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[76]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[77]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[78]: Container 0 completed SCRUB task:&lt;br /&gt;
[79]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[80]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[81]: Container 0 started SCRUB task&lt;br /&gt;
[82]: Starting Mirror:0 scrub&lt;br /&gt;
[83]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[84]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[85]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[86]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[87]: Container 1 started SCRUB task&lt;br /&gt;
[88]: Starting Mirror:1 scrub&lt;br /&gt;
[89]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[90]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[91]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[92]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[93]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[94]: Container 1 completed SCRUB task:&lt;br /&gt;
[95]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[96]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[97]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[98]: Container 0 completed SCRUB task:&lt;br /&gt;
[99]:&lt;br /&gt;
&lt;br /&gt;
========================&lt;br /&gt;
History Output Complete.&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt;&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
press enter when ready to run verify                                                 &amp;lt;INS&amp;gt;&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 0&lt;br /&gt;
Executing: container scrub 0&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 1&lt;br /&gt;
Executing: container scrub 1&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
when done run:                                                                       &lt;br /&gt;
&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nov  1 10:32:46 mail /kernel: aac0: **Monitor** Container 0 started SCRUB task&lt;br /&gt;
Nov  1 10:32:47 mail /kernel: aac0: **Monitor** Container 1 started SCRUB task&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here&#039;s an analysis of what we&#039;re seeing and what we&#039;re looking for:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; container list /f&lt;br /&gt;
Executing: container list /full=TRUE&lt;br /&gt;
Num          Total  Oth Chunk          Scsi   Partition&lt;br /&gt;
Creation        System&lt;br /&gt;
Label Type   Size   Ctr Size   Usage   B:ID:L Offset:Size   State   RO Lk Task    Done%  Ent&lt;br /&gt;
Date   Time      Files&lt;br /&gt;
----- ------ ------ --- ------ ------- ------ ------------- ------- -- -- ------- ------ ---&lt;br /&gt;
------ -------- ------&lt;br /&gt;
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
 /dev/aacd0           mirror0          0:00:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
&lt;br /&gt;
 1    Mirror 33.9GB            Open    0:02:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
 /dev/aacd1           mirror1          0:03:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is showing you the health of the arrays. You&#039;re looking for &#039;&#039;Normal&#039;&#039; under the State column, and the absence of a ! in the Offset:Size column. Sometimes you&#039;ll see this:&lt;br /&gt;
 64.0KB!33.9GB &lt;br /&gt;
&lt;br /&gt;
That indicates a problem.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; disk show smart&lt;br /&gt;
Executing: disk show smart&lt;br /&gt;
&lt;br /&gt;
        Smart    Method of         Enable&lt;br /&gt;
        Capable  Informational     Exception  Performance  Error&lt;br /&gt;
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count&lt;br /&gt;
------  -------  ----------------  ---------  -----------  ------&lt;br /&gt;
0:00:0     Y            6             Y           N             0&lt;br /&gt;
0:01:0     Y            6             Y           N             0&lt;br /&gt;
0:02:0     Y            6             Y           N             0&lt;br /&gt;
0:03:0     Y            6             Y           N             0&lt;br /&gt;
0:06:0     N&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows you the SMART report output. You&#039;re looking for nonzero values in the Error Count column.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; task list&lt;br /&gt;
Executing: task list&lt;br /&gt;
&lt;br /&gt;
Controller Tasks&lt;br /&gt;
&lt;br /&gt;
TaskId Function  Done%  Container State Specific1 Specific2&lt;br /&gt;
------ -------- ------- --------- ----- --------- ---------&lt;br /&gt;
&lt;br /&gt;
No tasks currently running on controller&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for the absence of running tasks; a bad thing would be to see a rebuild or verify running that you didn&#039;t initiate.&lt;br /&gt;
&lt;br /&gt;
With the history output, you&#039;re looking for any anomalies or events since the last time a verify was run. If you see a drive with lots of problems, you may want to take backups before allowing the verify to run since it could replicate errors onto the good drive.&lt;br /&gt;
&lt;br /&gt;
After you see the history output, it will prompt you to press enter to run the verify. If you&#039;re happy with all the output you&#039;re seeing (the mirror is healthy and the history looks good), it&#039;s safe to proceed; otherwise, ^C to exit. After you hit enter, it starts the verify and begins tailing the messages log file (so you can easily see when the verify is complete). Here&#039;s what that&#039;ll look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Nov  1 14:38:08 mail /kernel: aac0: **Monitor** Container 1 completed SCRUB task:&lt;br /&gt;
Nov  1 14:46:45 mail /kernel: aac0: **Monitor** Container 0 completed SCRUB task:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, putting it all together, after hitting enter to start the verify, you&#039;ll see:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 0&lt;br /&gt;
Executing: container scrub 0&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 1&lt;br /&gt;
Executing: container scrub 1&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
when done run:                                                                       &lt;br /&gt;
&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nov  1 10:32:46 mail /kernel: aac0: **Monitor** Container 0 started SCRUB task&lt;br /&gt;
Nov  1 10:32:47 mail /kernel: aac0: **Monitor** Container 1 started SCRUB task&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the scrubs (verifies) are complete (if the server has multiple logical drives, it runs them in parallel), exit the tail of the log file (^C) and run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows you the diagnostic history; you&#039;re looking for the results of the most recent scrub:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[100]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[101]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[102]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[103]: Container 1 completed SCRUB task:&lt;br /&gt;
[104]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[105]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[106]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[107]: Container 0 completed SCRUB task:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
^C to exit the RAID CLI.&lt;br /&gt;
&lt;br /&gt;
If you see:&lt;br /&gt;
&lt;br /&gt;
 [104]: Mirror Scrub Container:0   ErrorsFound:5&lt;br /&gt;
&lt;br /&gt;
You&#039;ll want to rerun the verify on that drive until it shows 0, or perhaps replace the drive; you should be able to see from the output which drive had the problem.&lt;br /&gt;
&lt;br /&gt;
Depending on the size and how busy the drive is, the verify can take anywhere from an hour to the better part of a day.&lt;br /&gt;
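&lt;br /&gt;
If you capture the &amp;lt;tt&amp;gt;dia sh hi&amp;lt;/tt&amp;gt; output to a file, a quick scan can flag any scrub with a nonzero error count instead of eyeballing the whole history. This is just a sketch, not part of the stock tooling; the here-doc stands in for a capture you&#039;d make yourself:&lt;br /&gt;
&lt;br /&gt;
```shell
# Sketch: flag scrub results with a nonzero ErrorsFound in captured
# "dia sh hi" output. The here-doc is sample data, not a real capture.
scrub_hist=$(cat <<'EOF'
[100]: Mirror Scrub Container:1   ErrorsFound:0
[104]: Mirror Scrub Container:0   ErrorsFound:5
EOF
)
# Print only the scrub result lines whose ErrorsFound count is above zero.
echo "$scrub_hist" | awk -F'ErrorsFound:' '/Mirror Scrub/ && $2 + 0 > 0'
```
&lt;br /&gt;
Any line it prints is a container that needs a rerun (or a drive that needs a closer look).&lt;br /&gt;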
&lt;br /&gt;
You will notice that the diagnostic history is not shown on our modern Adaptec cards (i.e. any Adaptec card not in a Dell 2450). The history on those cards is never cleared, so there&#039;s simply too much data to show and trying to display it just crashes the CLI, so don&#039;t bother. That does make it harder to see whether there are problems, so instead watch the scrub and confirm it reaches 100%. You will also notice that on some servers there&#039;s no tail of messages; again, this is because no data is logged there about the completion of the scrub. In that case, go into the CLI and run &amp;lt;tt&amp;gt;show tasks&amp;lt;/tt&amp;gt; periodically to monitor scrub progress.&lt;br /&gt;
&lt;br /&gt;
See [[RAIC_CLI#Adaptec|Adaptec RAID CLI Reference]] for more details on how to use the CLI.&lt;br /&gt;
&lt;br /&gt;
=== DELL (LSI-based) SAS controllers ===&lt;br /&gt;
&lt;br /&gt;
Here&#039;s what the output looks like when running verify.sh on an LSI-based card:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jail2 /mnt/data2# sh /root/verify.sh&lt;br /&gt;
&lt;br /&gt;
Adapter #0&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 0&lt;br /&gt;
Device Id: 0&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018396142&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 0(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DESN&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 1&lt;br /&gt;
Device Id: 1&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018395db2&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 1(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DERV&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 2&lt;br /&gt;
Device Id: 2&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50006eece89&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 2(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300555SS     T2113LM4BFBZ&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 3&lt;br /&gt;
Device Id: 3&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50006eee035&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 3(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300555SS     T2113LM4BGF7&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 4&lt;br /&gt;
Device Id: 4&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50004bd7ea5&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 4(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300656SS     HS093QP0G8SW&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 5&lt;br /&gt;
Device Id: 5&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e01f1c4112&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 5(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MBA3300RC       D306BJ15P9201W06&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 0 (Target Id: 0)&lt;br /&gt;
Name:&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:139392MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 1 (Target Id: 1)&lt;br /&gt;
Name:MIRROR1&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 2 (Target Id: 2)&lt;br /&gt;
Name:MIRROR2&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
Battery FRU     : N/A&lt;br /&gt;
Battery Warning                  : Enabled&lt;br /&gt;
Memory Correctable Errors   : 0&lt;br /&gt;
Memory Uncorrectable Errors : 0&lt;br /&gt;
BBU             : Present&lt;br /&gt;
BBU                             : Yes&lt;br /&gt;
Cache When BBU Bad               : Disabled&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before pressing enter, here&#039;s what we&#039;re looking for:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 0&lt;br /&gt;
Device Id: 0&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018396142&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 0(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DESN&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the output shown for each physical drive in the system. We&#039;re looking to confirm that its Firmware state is &#039;&#039;Online&#039;&#039; and that Media Error Count, Other Error Count, and Predictive Failure Count are all zero (or near zero).&lt;br /&gt;
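&lt;br /&gt;
With a dozen drives it&#039;s easy to miss a bad counter while scrolling. One option is to run the saved physical-drive output through awk and print only the suspicious lines. This is a sketch keyed to the output format shown above, not part of verify.sh:&lt;br /&gt;
&lt;br /&gt;
```shell
# Sketch: flag drives with nonzero error counters, or a firmware state
# other than Online. The here-doc is sample data in the format shown above.
pdlist=$(cat <<'EOF'
Slot Number: 0
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Firmware state: Online
Slot Number: 1
Media Error Count: 3
Other Error Count: 0
Predictive Failure Count: 0
Firmware state: Online
EOF
)
echo "$pdlist" | awk '
  /^Slot Number:/ { slot = $3 }                 # remember which drive we are in
  /Error Count:|Failure Count:/ && $NF + 0 > 0 { print "slot " slot ": " $0 }
  /^Firmware state:/ && $NF != "Online"        { print "slot " slot ": " $0 }'
```
&lt;br /&gt;
Silence means every counter was zero and every drive was Online. Cards that report multi-word states (e.g. a spun-up variant) would need the last pattern loosened.&lt;br /&gt;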
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 1 (Target Id: 1)&lt;br /&gt;
Name:MIRROR1&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the output for each logical drive. We&#039;re looking for State &#039;&#039;Optimal&#039;&#039;. Also confirm &amp;lt;tt&amp;gt;Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&amp;lt;/tt&amp;gt;.&lt;br /&gt;
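&lt;br /&gt;
The same idea works for the logical-drive output: refuse to proceed unless every State line reads Optimal. Again a sketch over saved output, with sample lines mirroring the format above:&lt;br /&gt;
&lt;br /&gt;
```shell
# Sketch: check that every logical drive in saved output is Optimal
# before starting the verify. The here-doc stands in for a real capture.
ldinfo=$(cat <<'EOF'
Virtual Disk: 0 (Target Id: 0)
State: Optimal
Virtual Disk: 1 (Target Id: 1)
State: Optimal
EOF
)
if echo "$ldinfo" | grep '^State:' | grep -qv 'Optimal'; then
    echo "a logical drive is NOT Optimal - do not start the verify"
else
    echo "all logical drives Optimal"
fi
```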
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
Battery FRU     : N/A&lt;br /&gt;
Battery Warning                  : Enabled&lt;br /&gt;
Memory Correctable Errors   : 0&lt;br /&gt;
Memory Uncorrectable Errors : 0&lt;br /&gt;
BBU             : Present&lt;br /&gt;
BBU                             : Yes&lt;br /&gt;
Cache When BBU Bad               : Disabled&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the battery is present and error-free.&lt;br /&gt;
&lt;br /&gt;
If all that checks out, you&#039;re ready to proceed with the verify. After pressing enter, the verify starts and here&#039;s what you&#039;ll see:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Start Check Consistency on Virtual Drive 0 (target id: 0) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
Start Check Consistency on Virtual Drive 1 (target id: 1) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
Start Check Consistency on Virtual Drive 2 (target id: 2) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
  Check Consistency&lt;br /&gt;
&lt;br /&gt;
 Progress of Virtual Drives...&lt;br /&gt;
&lt;br /&gt;
  Virtual Drive #              Percent Complete                       Time Elps&lt;br /&gt;
          0         °°°°°°°°°°°°°°°°°°°°°°°00 %°°°°°°°°°°°°°°°°°°°°°°° 00:00:03&lt;br /&gt;
          1         °°°°°°°°°°°°°°°°°°°°°°°00 %°°°°°°°°°°°°°°°°°°°°°°° 00:00:02&lt;br /&gt;
          2         °°°°°°°°°°°°°°°°°°°°°°°00 %°°°°°°°°°°°°°°°°°°°°°°° 00:00:01&lt;br /&gt;
&lt;br /&gt;
    Press &amp;lt;ESC&amp;gt; key to quit...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The progress for each drive is displayed until all drives have completed the verify. We just want to make sure that each drive runs to completion; no followup is needed, though there is probably a log or history with more detail.&lt;br /&gt;
&lt;br /&gt;
You will notice that jail7 does not run a verify; that&#039;s on purpose. The last time we tried, it crashed the system, so on jail7 the verify must be run from the BIOS (taking the system offline for a couple of hours).&lt;br /&gt;
&lt;br /&gt;
See [[RAIC_CLI#LSI|LSI RAID CLI Reference]] for more details on how to use the CLI.&lt;br /&gt;
&lt;br /&gt;
=== LSI-based controllers (megaraid) ===&lt;br /&gt;
&lt;br /&gt;
There is a CLI for these controllers, but it&#039;s easier to use the curses GUI app, &amp;lt;tt&amp;gt;megamgr&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Currently only on these servers: virt15, virt16, and firewall2&lt;br /&gt;
&lt;br /&gt;
To run:&lt;br /&gt;
&lt;br /&gt;
 # cd /usr/local/sbin/; megamgr&lt;br /&gt;
&lt;br /&gt;
Main menu:&lt;br /&gt;
&amp;lt;pre&amp;gt;▓┌──Management Menu──┐▓&lt;br /&gt;
▓│ Configure         │▓&lt;br /&gt;
▓│ Initialize        │▓&lt;br /&gt;
▓│ Objects           │▓&lt;br /&gt;
▓│ Rebuild           │▓&lt;br /&gt;
▓│ Check Consistency │▓&lt;br /&gt;
▓│ Advanced Menu     │▓&lt;br /&gt;
▓└───────────────────┘▓&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before you check consistency, make sure the arrays are healthy.&lt;br /&gt;
&lt;br /&gt;
Objects -&amp;gt; Physical Drive&lt;br /&gt;
&lt;br /&gt;
Then look to make sure they&#039;re all &amp;lt;tt&amp;gt;ONLIN&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓┌────────────Objects - PHYSICAL DRIVE SELECTION MENU─────────────┐▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                                                                │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                         Channel-1                              │▓▓▓▓▓▓&lt;br /&gt;
▓┌──Mana│                     ID ╔══════════════╗x                       │▓▓▓▓▓▓&lt;br /&gt;
▓│ Confi│                       0║* ONLIN A01-01║                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Initi│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Objec│                       1║* ONLIN A01-02║                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Rebui│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Check│                       2║* ONLIN A02-01║                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Advan│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓└──────│                       3║* ONLIN A02-02║                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                       4║* ONLIN A03-01║                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                       5║* ONLIN A03-02║■                       │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                       6║*             ║                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╚══════════════╝x                       │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓└────────────────────────────────────────────────────────────────┘▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓┌──────────────────────────────────────────────────────────────┐▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│Ch-1 ID-5  DISK      140013MB  SEAGATE  ST3146707LC      0003 │▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓└──────────────────────────────────────────────────────────────┘▓▓▓▓▓▓▓▓&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once that&#039;s done, hit escape once, then the back arrow to move back to the Objects menu.&lt;br /&gt;
Then select Objects -&amp;gt; Logical Drive -&amp;gt; Logical Drive 1 -&amp;gt; Check Consistency -&amp;gt; YES&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┌─Logical Drives(02)─┐▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│ Logical Drive 1    │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓┌────Objects───│ Logical Drive 2    │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓┌──Managemen│ Adapter      └────────────────────┘▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Configure │ Logical Drive  │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Initialize│ Physical Drive │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Objects   │ Channel        │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Rebuild   └────────────────┘▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Check Consistency │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Advanced Menu     │▓▓▓▓▓▓▓┌────Logical Drive 1─────┐▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓└───────────────────┘▓▓▓▓▓▓▓│ Initialize    ┌─Check Consistency-1  ?─┐▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│ Check Consiste│   YES                  │▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│ View/Update Pa│   NO                   │▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓└───────────────└────────────────────────┘▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┌────────────────┐▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│Select YES Or NO│▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓└────────────────┘▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then watch the progress. When done, escape back to Logical Drive and repeat for Logical Drive 2. If you ^C or accidentally escape out, you can come back in by running the same commands and watch the progress again (it won&#039;t restart).&lt;br /&gt;
&lt;br /&gt;
You can exit megamgr by escaping out or with ^C.&lt;br /&gt;
&lt;br /&gt;
=== 3ware ===&lt;br /&gt;
&lt;br /&gt;
We are using 3ware controllers on backup1 &amp;amp; backup2. Running the verify script will give you different output based on the type of controller:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
backup2 /d2# sh /root/verify.sh&lt;br /&gt;
Controller: c0&lt;br /&gt;
-------------&lt;br /&gt;
Driver:   1.50.01.002&lt;br /&gt;
Model:    7500-8&lt;br /&gt;
FW:       FE7X 1.05.00.068&lt;br /&gt;
BIOS:     BE7X 1.08.00.048&lt;br /&gt;
Monitor:  ME7X 1.01.00.040&lt;br /&gt;
Serial #: F11605A3180172&lt;br /&gt;
PCB:      Rev3&lt;br /&gt;
PCHIP:    1.30-33&lt;br /&gt;
ACHIP:    3.20&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# of units: 3&lt;br /&gt;
        Unit 0: JBOD 186.31 GB ( 390721968 blocks): OK&lt;br /&gt;
        Unit 1: RAID 5 465.77 GB ( 976790016 blocks): DEGRADED&lt;br /&gt;
        Unit 5: RAID 5 698.65 GB ( 1465185024 blocks): DEGRADED&lt;br /&gt;
&lt;br /&gt;
# of ports: 8&lt;br /&gt;
        Port 0: WDC WD2000JB-00KFA0 WD-WCAMT1451690 186.31 GB (390721968 blocks): OK(unit 0)&lt;br /&gt;
        Port 1: WDC WD2500JB-00GVC0 WD-WCAL78219488 232.88 GB (488397168 blocks): OK(unit 1)&lt;br /&gt;
        Port 2: WDC WD2000  0.00 MB (0 blocks): OK(NO UNIT)&lt;br /&gt;
        Port 3: WDC WD2500JB-00GVC0 WD-WMAL73882417 232.88 GB (488397168 blocks): OK(unit 1)&lt;br /&gt;
        Port 4: WDC WD2000  0.00 MB (0 blocks): OK(NO UNIT)&lt;br /&gt;
        Port 5: WDC WD2500JB-00GVA0 WD-WMAL71338097 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
        Port 6: WDC WD2500JB-32EVA0 WD-WMAEH1301595 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
        Port 7: WDC WD2500JB-00GVC0 WD-WCAL78165566 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
Controller: c1&lt;br /&gt;
-------------&lt;br /&gt;
Driver:   1.50.01.002&lt;br /&gt;
Model:    7500-8&lt;br /&gt;
FW:       FE7X 1.05.00.068&lt;br /&gt;
BIOS:     BE7X 1.08.00.048&lt;br /&gt;
Monitor:  ME7X 1.01.00.040&lt;br /&gt;
Serial #: F11605A3180167&lt;br /&gt;
PCB:      Rev3&lt;br /&gt;
PCHIP:    1.30-33&lt;br /&gt;
ACHIP:    3.20&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# of units: 2&lt;br /&gt;
        Unit 0: RAID 5 698.65 GB ( 1465185024 blocks): OK&lt;br /&gt;
        Unit 4: RAID 5 698.65 GB ( 1465185024 blocks): OK&lt;br /&gt;
&lt;br /&gt;
# of ports: 8&lt;br /&gt;
        Port 0: WDC WD2500JB-00GVA0 WD-WMAL71301258 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 1: WDC WD2500JB-00GVA0 WD-WMAL71322705 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 2: WDC WD2500JB-00GVA0 WD-WMAL71945050 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 3: WDC WD2500JB-00GVA0 WD-WMAL71316201 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 4: WDC WD2500JB-00GVC0 WD-WCAL78323749 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
        Port 5: WDC WD3200AAJB-00J3A0 WD-WCAV2V689068 298.09 GB (625142448 blocks): OK(unit 4)&lt;br /&gt;
        Port 6: WDC WD2500JB-00GVC0 WD-WCAL78234420 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
        Port 7: WDC WD2500JB-00GVC0 WD-WCAL78592213 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
backup2 /d2#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On backup2, just confirm that all units and ports report OK; no verify is run on this controller.&lt;br /&gt;
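&lt;br /&gt;
A unit that needs attention shows a status other than OK (as the DEGRADED units above do). If you save the status output, a one-line filter pulls out just the problem units; the here-doc below reuses lines from the output above as sample data:&lt;br /&gt;
&lt;br /&gt;
```shell
# Sketch: print any 3ware unit line whose status is not OK.
status=$(cat <<'EOF'
        Unit 0: JBOD 186.31 GB ( 390721968 blocks): OK
        Unit 1: RAID 5 465.77 GB ( 976790016 blocks): DEGRADED
        Unit 5: RAID 5 698.65 GB ( 1465185024 blocks): DEGRADED
EOF
)
# Keep only the unit summary lines, then drop the healthy ones.
echo "$status" | grep 'Unit ' | grep -v ': OK$'
```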
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@backup3 ~]# sh /root/verify.sh&lt;br /&gt;
/c2 Driver Version = 1.26.02.002&lt;br /&gt;
/c2 Model = 8006-2LP&lt;br /&gt;
/c2 Available Memory = 512KB&lt;br /&gt;
/c2 Firmware Version = FE8S 1.05.00.068&lt;br /&gt;
/c2 Bios Version = BE7X 1.08.00.048&lt;br /&gt;
/c2 Boot Loader Version = ME7X 1.01.00.040&lt;br /&gt;
/c2 Serial Number = L018501C6481395&lt;br /&gt;
/c2 PCB Version = Rev5&lt;br /&gt;
/c2 PCHIP Version = 1.30-66&lt;br /&gt;
/c2 ACHIP Version = 3.20&lt;br /&gt;
/c2 Total Optimal Units = 1&lt;br /&gt;
/c2 Not Optimal Units = 0&lt;br /&gt;
&lt;br /&gt;
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u1    RAID-1    OK             -       -       -       931.512   ON     -&lt;br /&gt;
&lt;br /&gt;
Port   Status           Unit   Size        Blocks        Serial&lt;br /&gt;
---------------------------------------------------------------&lt;br /&gt;
p0     OK               u1     931.51 GB   1953525168    WD-WMAW31148820&lt;br /&gt;
p1     OK               u1     931.51 GB   1953525168    WD-WCATR0277515&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ctl  Date                        Severity  Alarm Message&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Sending start verify message to /c2/u1 ... Done.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when done run:&lt;br /&gt;
tw_cli /c2 show alarms&lt;br /&gt;
&lt;br /&gt;
[root@backup3 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script automatically starts the verify; just run &amp;lt;tt&amp;gt;tw_cli /c2 show alarms&amp;lt;/tt&amp;gt; as instructed to see the results of the verify.&lt;br /&gt;
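&lt;br /&gt;
Note that the alarm log accumulates across runs, so old sector repairs show up alongside the current verify. One way to focus on the latest run is to keep only the lines since the most recent &amp;quot;Verify started&amp;quot; message. A sketch over saved &amp;lt;tt&amp;gt;show alarms&amp;lt;/tt&amp;gt; output; the here-doc sample lines are taken from a 9650SE alarm log like backup1&#039;s:&lt;br /&gt;
&lt;br /&gt;
```shell
# Sketch: show only the alarms logged since the most recent verify began.
alarms=$(cat <<'EOF'
c0   [Sat May 19 00:01:44 2012]  INFO      Verify started: unit=0
c0   [Sat May 19 04:46:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A68E
c0   [Sat May 26 00:01:45 2012]  INFO      Verify started: unit=0
c0   [Sat May 26 05:25:09 2012]  WARNING   Verify fixed data/parity mismatch: unit=0
c0   [Sat May 26 22:00:30 2012]  INFO      Verify completed: unit=0
EOF
)
# Empty the buffer whenever a new verify starts; print whatever is left at EOF.
echo "$alarms" | awk '/Verify started/ { buf = "" } { buf = buf $0 "\n" } END { printf "%s", buf }'
```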
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@backup1 /data/deprecated]# sh /root/verify.sh&lt;br /&gt;
/c0 Driver Version = 2.26.02.010&lt;br /&gt;
/c0 Model = 9650SE-8LPML&lt;br /&gt;
/c0 Available Memory = 224MB&lt;br /&gt;
/c0 Firmware Version = FE9X 4.06.00.004&lt;br /&gt;
/c0 Bios Version = BE9X 4.05.00.015&lt;br /&gt;
/c0 Boot Loader Version = BL9X 3.08.00.001&lt;br /&gt;
/c0 Serial Number = L326025A8270177&lt;br /&gt;
/c0 PCB Version = Rev 032&lt;br /&gt;
/c0 PCHIP Version = 2.00&lt;br /&gt;
/c0 ACHIP Version = 1.90&lt;br /&gt;
/c0 Number of Ports = 8&lt;br /&gt;
/c0 Number of Drives = 6&lt;br /&gt;
/c0 Number of Units = 1&lt;br /&gt;
/c0 Total Optimal Units = 1&lt;br /&gt;
/c0 Not Optimal Units = 0&lt;br /&gt;
/c0 JBOD Export Policy = off&lt;br /&gt;
/c0 Disk Spinup Policy = 1&lt;br /&gt;
/c0 Spinup Stagger Time Policy (sec) = 1&lt;br /&gt;
/c0 Auto-Carving Policy = off&lt;br /&gt;
/c0 Auto-Carving Size = 2048 GB&lt;br /&gt;
/c0 Auto-Rebuild Policy = on&lt;br /&gt;
/c0 Controller Bus Type = PCIe&lt;br /&gt;
/c0 Controller Bus Width = 1 lane&lt;br /&gt;
/c0 Controller Bus Speed = 2.5 Gbps/lane&lt;br /&gt;
&lt;br /&gt;
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u0    RAID-5    OK             -       -       64K     4656.56   ON     ON&lt;br /&gt;
&lt;br /&gt;
Port   Status           Unit   Size        Blocks        Serial&lt;br /&gt;
---------------------------------------------------------------&lt;br /&gt;
p0     OK               u0     931.51 GB   1953525168    9QJ1Y017&lt;br /&gt;
p1     DEVICE-ERROR     u0     931.51 GB   1953525168    9QJ1ZN07&lt;br /&gt;
p2     OK               u0     931.51 GB   1953525168    9QJ2XK1R&lt;br /&gt;
p3     OK               u0     931.51 GB   1953525168    9QJ2010B&lt;br /&gt;
p4     OK               u0     1.36 TB     2930277168    6XW0L36T&lt;br /&gt;
p5     OK               u0     931.51 GB   1953525168    WD-WMATV2444836&lt;br /&gt;
p6     NOT-PRESENT      -      -           -             -&lt;br /&gt;
p7     NOT-PRESENT      -      -           -             -&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ctl  Date                        Severity  Alarm Message&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
c0   [Sat May 12 11:27:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x6AE571C&lt;br /&gt;
c0   [Sat May 12 19:16:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x40E62A23&lt;br /&gt;
c0   [Sat May 12 21:40:56 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon May 14 00:53:53 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFA7&lt;br /&gt;
c0   [Mon May 14 00:58:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAA&lt;br /&gt;
c0   [Mon May 14 04:35:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x8FEF2CF&lt;br /&gt;
c0   [Mon May 14 04:38:22 2012]  WARNING   Sector repair completed: port=0, LBA=0x8FEF2D1&lt;br /&gt;
c0   [Tue May 15 22:53:46 2012]  WARNING   Sector repair completed: port=0, LBA=0x13C2622&lt;br /&gt;
c0   [Wed May 16 00:39:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A67F&lt;br /&gt;
c0   [Wed May 16 00:39:37 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A685&lt;br /&gt;
c0   [Wed May 16 00:47:18 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A687&lt;br /&gt;
c0   [Sat May 19 00:01:44 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat May 19 04:46:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A68E&lt;br /&gt;
c0   [Sat May 19 13:37:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAC&lt;br /&gt;
c0   [Sat May 19 13:37:28 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAE&lt;br /&gt;
c0   [Sat May 19 13:37:47 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFB1&lt;br /&gt;
c0   [Sat May 19 13:38:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFB3&lt;br /&gt;
c0   [Sat May 19 21:47:45 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed May 23 12:21:41 2012]  INFO      Cache synchronization completed: unit=0&lt;br /&gt;
c0   [Fri May 25 00:08:19 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA76C&lt;br /&gt;
c0   [Fri May 25 00:08:34 2012]  WARNING   Sector repair completed: port=0, LBA=0x12E4901&lt;br /&gt;
c0   [Fri May 25 00:09:33 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA773&lt;br /&gt;
c0   [Fri May 25 00:39:12 2012]  WARNING   Sector repair completed: port=0, LBA=0x42C597B&lt;br /&gt;
c0   [Sat May 26 00:01:45 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat May 26 00:42:05 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1AC&lt;br /&gt;
c0   [Sat May 26 00:51:43 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1AE&lt;br /&gt;
c0   [Sat May 26 01:54:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x2F0D302&lt;br /&gt;
c0   [Sat May 26 02:06:38 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA777&lt;br /&gt;
c0   [Sat May 26 02:07:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x12E48FE&lt;br /&gt;
c0   [Sat May 26 04:20:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x2F0D306&lt;br /&gt;
c0   [Sat May 26 04:32:58 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1B1&lt;br /&gt;
c0   [Sat May 26 04:33:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1B3&lt;br /&gt;
c0   [Sat May 26 04:33:44 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1BA&lt;br /&gt;
c0   [Sat May 26 05:24:07 2012]  WARNING   Sector repair completed: port=1, LBA=0x3F83862&lt;br /&gt;
c0   [Sat May 26 05:25:09 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 06:08:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x4CDC6A2&lt;br /&gt;
c0   [Sat May 26 09:49:35 2012]  WARNING   Sector repair completed: port=1, LBA=0x6CACD4A&lt;br /&gt;
c0   [Sat May 26 18:10:44 2012]  WARNING   Sector repair completed: port=1, LBA=0x18F425EA&lt;br /&gt;
c0   [Sat May 26 19:45:40 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:22:52 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:15 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:22 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:35 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:41 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:49 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:57 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:24:02 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:54:41 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 22:00:30 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Jun  2 00:01:43 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun  2 00:30:17 2012]  WARNING   Sector repair completed: port=0, LBA=0x2B911E4&lt;br /&gt;
c0   [Sat Jun  2 00:50:57 2012]  WARNING   Sector repair completed: port=0, LBA=0x5A807CA6&lt;br /&gt;
c0   [Sat Jun  2 04:13:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x2D18291&lt;br /&gt;
c0   [Sat Jun  2 04:13:35 2012]  WARNING   Sector repair completed: port=0, LBA=0x2D1829F&lt;br /&gt;
c0   [Sat Jun  2 21:48:02 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Jun  4 04:40:34 2012]  WARNING   Sector repair completed: port=1, LBA=0x4AF8098F&lt;br /&gt;
c0   [Tue Jun  5 00:28:19 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5CD&lt;br /&gt;
c0   [Tue Jun  5 00:33:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5CF&lt;br /&gt;
c0   [Thu Jun  7 00:34:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B5F&lt;br /&gt;
c0   [Thu Jun  7 00:38:50 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B61&lt;br /&gt;
c0   [Fri Jun  8 00:07:13 2012]  WARNING   Sector repair completed: port=0, LBA=0xC131F6B&lt;br /&gt;
c0   [Sat Jun  9 00:01:41 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun  9 00:29:11 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C7614D&lt;br /&gt;
c0   [Sat Jun  9 00:38:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C76152&lt;br /&gt;
c0   [Sat Jun  9 04:02:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5D1&lt;br /&gt;
c0   [Sat Jun  9 04:02:52 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5D3&lt;br /&gt;
c0   [Sat Jun  9 04:07:32 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E12&lt;br /&gt;
c0   [Sat Jun  9 04:07:57 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E15&lt;br /&gt;
c0   [Sat Jun  9 04:08:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E17&lt;br /&gt;
c0   [Sat Jun  9 04:08:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E19&lt;br /&gt;
c0   [Sat Jun  9 04:15:04 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B64&lt;br /&gt;
c0   [Sat Jun  9 04:15:26 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B66&lt;br /&gt;
c0   [Sat Jun  9 04:15:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B68&lt;br /&gt;
c0   [Sat Jun  9 04:15:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B6C&lt;br /&gt;
c0   [Sat Jun  9 04:16:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B6E&lt;br /&gt;
c0   [Sat Jun  9 21:48:52 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Jun 14 00:40:10 2012]  WARNING   Sector repair completed: port=0, LBA=0x334F14B&lt;br /&gt;
c0   [Sat Jun 16 00:01:38 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 16 21:16:19 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Jun 19 02:03:43 2012]  WARNING   Sector repair completed: port=1, LBA=0xFE41EAD&lt;br /&gt;
c0   [Wed Jun 20 02:30:02 2012]  WARNING   Sector repair completed: port=1, LBA=0xD99145C&lt;br /&gt;
c0   [Sat Jun 23 00:01:36 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 23 04:27:04 2012]  WARNING   Sector repair completed: port=1, LBA=0x2FAD311&lt;br /&gt;
c0   [Sat Jun 23 06:52:38 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC8D&lt;br /&gt;
c0   [Sat Jun 23 06:53:03 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC91&lt;br /&gt;
c0   [Sat Jun 23 06:53:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC94&lt;br /&gt;
c0   [Sat Jun 23 17:00:22 2012]  WARNING   Sector repair completed: port=1, LBA=0xF9AC7C9&lt;br /&gt;
c0   [Sat Jun 23 21:15:19 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Jun 30 00:01:34 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 30 05:24:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FAA9E7&lt;br /&gt;
c0   [Sat Jun 30 14:49:39 2012]  WARNING   Sector repair completed: port=1, LBA=0x869931C&lt;br /&gt;
c0   [Sat Jun 30 21:31:05 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Jul  3 03:40:25 2012]  WARNING   Sector repair completed: port=1, LBA=0xD36C7F7&lt;br /&gt;
c0   [Fri Jul  6 02:50:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x3562470&lt;br /&gt;
c0   [Fri Jul  6 22:18:26 2012]  WARNING   Sector repair completed: port=1, LBA=0x3563173&lt;br /&gt;
c0   [Sat Jul  7 00:01:31 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul  7 00:50:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x76EE88&lt;br /&gt;
c0   [Sat Jul  7 00:50:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x76EE8F&lt;br /&gt;
c0   [Sat Jul  7 21:39:36 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sun Jul  8 02:51:05 2012]  WARNING   Sector repair completed: port=0, LBA=0x67759D&lt;br /&gt;
c0   [Sun Jul  8 02:53:55 2012]  WARNING   Sector repair completed: port=0, LBA=0x67759B&lt;br /&gt;
c0   [Tue Jul 10 16:17:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x15C8C695&lt;br /&gt;
c0   [Wed Jul 11 22:51:22 2012]  WARNING   Sector repair completed: port=1, LBA=0x355BBD0&lt;br /&gt;
c0   [Sat Jul 14 00:01:28 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 14 01:33:40 2012]  WARNING   Sector repair completed: port=1, LBA=0x1333BCF4&lt;br /&gt;
c0   [Sat Jul 14 03:36:23 2012]  WARNING   Sector repair completed: port=1, LBA=0x2174773&lt;br /&gt;
c0   [Sat Jul 14 11:26:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x7429AB7&lt;br /&gt;
c0   [Sat Jul 14 16:53:50 2012]  WARNING   Sector repair completed: port=1, LBA=0xA17EB3F&lt;br /&gt;
c0   [Sat Jul 14 21:19:25 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Jul 18 05:08:47 2012]  WARNING   Sector repair completed: port=1, LBA=0x17D62EDC&lt;br /&gt;
c0   [Wed Jul 18 05:14:15 2012]  WARNING   Sector repair completed: port=1, LBA=0x17D62EE1&lt;br /&gt;
c0   [Thu Jul 19 03:24:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x7733C3D&lt;br /&gt;
c0   [Thu Jul 19 03:25:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x773CEA5&lt;br /&gt;
c0   [Thu Jul 19 03:28:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x7733C42&lt;br /&gt;
c0   [Thu Jul 19 03:28:41 2012]  WARNING   Sector repair completed: port=0, LBA=0x773CEAF&lt;br /&gt;
c0   [Sat Jul 21 00:01:26 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 21 03:07:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC6936&lt;br /&gt;
c0   [Sat Jul 21 03:07:52 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC6938&lt;br /&gt;
c0   [Sat Jul 21 03:08:11 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC693A&lt;br /&gt;
c0   [Sat Jul 21 16:43:56 2012]  WARNING   Sector repair completed: port=0, LBA=0xD04C914&lt;br /&gt;
c0   [Sat Jul 21 16:45:31 2012]  WARNING   Sector repair completed: port=1, LBA=0xD456973&lt;br /&gt;
c0   [Sat Jul 21 21:14:29 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Jul 25 03:37:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x1F8E6C43&lt;br /&gt;
c0   [Sat Jul 28 00:01:24 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 28 01:45:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x11584AD&lt;br /&gt;
c0   [Sat Jul 28 18:54:25 2012]  WARNING   Sector repair completed: port=1, LBA=0x447C3E6C&lt;br /&gt;
c0   [Sat Jul 28 21:13:46 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug  1 03:20:11 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FEF&lt;br /&gt;
c0   [Fri Aug  3 00:50:03 2012]  WARNING   Sector repair completed: port=0, LBA=0xCED0ACA&lt;br /&gt;
c0   [Sat Aug  4 00:01:22 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug  4 00:52:51 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF3&lt;br /&gt;
c0   [Sat Aug  4 00:53:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF5&lt;br /&gt;
c0   [Sat Aug  4 00:53:33 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF7&lt;br /&gt;
c0   [Sat Aug  4 00:53:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF9&lt;br /&gt;
c0   [Sat Aug  4 00:54:00 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFB&lt;br /&gt;
c0   [Sat Aug  4 00:54:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFD&lt;br /&gt;
c0   [Sat Aug  4 00:54:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFF&lt;br /&gt;
c0   [Sat Aug  4 04:43:12 2012]  WARNING   Sector repair completed: port=1, LBA=0x16974289&lt;br /&gt;
c0   [Sat Aug  4 04:58:17 2012]  WARNING   Sector repair completed: port=1, LBA=0x1697428E&lt;br /&gt;
c0   [Sat Aug  4 20:54:53 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug  8 03:21:55 2012]  ERROR     Drive timeout detected: port=1&lt;br /&gt;
c0   [Wed Aug  8 15:31:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x1A366CD3&lt;br /&gt;
c0   [Sat Aug 11 00:01:21 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 11 20:40:51 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Aug 16 05:10:55 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C22593&lt;br /&gt;
c0   [Sat Aug 18 00:01:18 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 18 03:00:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C225A5&lt;br /&gt;
c0   [Sat Aug 18 03:43:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x23EE91E&lt;br /&gt;
c0   [Sat Aug 18 03:43:23 2012]  WARNING   Sector repair completed: port=1, LBA=0x23EE920&lt;br /&gt;
c0   [Sat Aug 18 17:00:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x137D066A&lt;br /&gt;
c0   [Sat Aug 18 17:00:29 2012]  WARNING   Sector repair completed: port=1, LBA=0x137D066D&lt;br /&gt;
c0   [Sat Aug 18 21:13:01 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug 22 01:36:08 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A0F&lt;br /&gt;
c0   [Wed Aug 22 01:37:42 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A13&lt;br /&gt;
c0   [Fri Aug 24 04:01:36 2012]  WARNING   Sector repair completed: port=1, LBA=0x55C1A5DF&lt;br /&gt;
c0   [Fri Aug 24 05:02:06 2012]  WARNING   Sector repair completed: port=1, LBA=0xCE3378A&lt;br /&gt;
c0   [Sat Aug 25 00:01:17 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 25 00:31:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x50F65D&lt;br /&gt;
c0   [Sat Aug 25 00:39:52 2012]  WARNING   Sector repair completed: port=0, LBA=0x678FF4&lt;br /&gt;
c0   [Sat Aug 25 03:43:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A15&lt;br /&gt;
c0   [Sat Aug 25 03:43:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A19&lt;br /&gt;
c0   [Sat Aug 25 03:43:58 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A1B&lt;br /&gt;
c0   [Sat Aug 25 03:44:30 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A21&lt;br /&gt;
c0   [Sat Aug 25 20:58:14 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug 29 04:57:15 2012]  WARNING   Sector repair completed: port=1, LBA=0xF3957EB&lt;br /&gt;
c0   [Sat Sep  1 00:01:15 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep  1 03:21:52 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC86&lt;br /&gt;
c0   [Sat Sep  1 03:22:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC88&lt;br /&gt;
c0   [Sat Sep  1 03:22:34 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC8A&lt;br /&gt;
c0   [Sat Sep  1 03:22:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC8C&lt;br /&gt;
c0   [Sat Sep  1 17:17:22 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD1&lt;br /&gt;
c0   [Sat Sep  1 17:17:45 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD3&lt;br /&gt;
c0   [Sat Sep  1 17:18:04 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD5&lt;br /&gt;
c0   [Sat Sep  1 21:36:56 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Sep  6 00:07:30 2012]  WARNING   Sector repair completed: port=0, LBA=0xDA3C64B&lt;br /&gt;
c0   [Thu Sep  6 00:32:56 2012]  WARNING   Sector repair completed: port=1, LBA=0x6BBA816&lt;br /&gt;
c0   [Sat Sep  8 00:01:13 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep  8 00:09:56 2012]  WARNING   Sector repair completed: port=0, LBA=0xDEBC958&lt;br /&gt;
c0   [Sat Sep  8 04:38:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x38D254F&lt;br /&gt;
c0   [Sat Sep  8 20:44:50 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Sep 10 01:26:34 2012]  WARNING   Sector repair completed: port=1, LBA=0xFFD8D5E&lt;br /&gt;
c0   [Wed Sep 12 00:33:48 2012]  WARNING   Sector repair completed: port=1, LBA=0xE8DB928&lt;br /&gt;
c0   [Wed Sep 12 00:36:33 2012]  WARNING   Sector repair completed: port=1, LBA=0x6D49411&lt;br /&gt;
c0   [Fri Sep 14 01:59:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x1467F1C&lt;br /&gt;
c0   [Fri Sep 14 02:08:27 2012]  WARNING   Sector repair completed: port=1, LBA=0x14C8ABD&lt;br /&gt;
c0   [Fri Sep 14 03:54:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x1580C915&lt;br /&gt;
c0   [Sat Sep 15 00:01:11 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 15 02:38:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178973&lt;br /&gt;
c0   [Sat Sep 15 02:59:02 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178975&lt;br /&gt;
c0   [Sat Sep 15 04:47:08 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA0356&lt;br /&gt;
c0   [Sat Sep 15 04:47:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA0359&lt;br /&gt;
c0   [Sat Sep 15 10:41:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x6DFD1EC&lt;br /&gt;
c0   [Sat Sep 15 13:25:23 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD100&lt;br /&gt;
c0   [Sat Sep 15 13:25:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD104&lt;br /&gt;
c0   [Sat Sep 15 13:25:54 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD106&lt;br /&gt;
c0   [Sat Sep 15 17:10:50 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178977&lt;br /&gt;
c0   [Sat Sep 15 20:59:57 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Sep 18 01:17:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x803B05B&lt;br /&gt;
c0   [Sat Sep 22 00:01:10 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 22 20:54:31 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Sep 25 01:56:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x26E3909&lt;br /&gt;
c0   [Sat Sep 29 00:01:08 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 29 02:04:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x146AC03&lt;br /&gt;
c0   [Sat Sep 29 10:58:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x6D4EB0E&lt;br /&gt;
c0   [Sat Sep 29 10:59:02 2012]  WARNING   Sector repair completed: port=0, LBA=0x6D4EB14&lt;br /&gt;
c0   [Sat Sep 29 11:22:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x6F79623&lt;br /&gt;
c0   [Sat Sep 29 13:50:48 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D65E&lt;br /&gt;
c0   [Sat Sep 29 13:51:11 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D661&lt;br /&gt;
c0   [Sat Sep 29 13:51:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D663&lt;br /&gt;
c0   [Sat Sep 29 20:57:34 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Oct  1 04:47:24 2012]  WARNING   Sector repair completed: port=0, LBA=0xC5BC6F2&lt;br /&gt;
c0   [Tue Oct  2 02:00:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x1547667&lt;br /&gt;
c0   [Tue Oct  2 02:01:56 2012]  WARNING   Sector repair completed: port=0, LBA=0x154766F&lt;br /&gt;
c0   [Tue Oct  2 05:02:31 2012]  WARNING   Sector repair completed: port=1, LBA=0xD67D054&lt;br /&gt;
c0   [Tue Oct  2 05:04:14 2012]  WARNING   Sector repair completed: port=1, LBA=0xD67D056&lt;br /&gt;
c0   [Wed Oct  3 01:22:12 2012]  WARNING   Sector repair completed: port=1, LBA=0x12AAF8CA&lt;br /&gt;
c0   [Thu Oct  4 04:29:22 2012]  WARNING   Sector repair completed: port=0, LBA=0x13E6F992&lt;br /&gt;
c0   [Thu Oct  4 05:10:51 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C252A4&lt;br /&gt;
c0   [Sat Oct  6 00:01:07 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct  6 19:41:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x5A5C3AE8&lt;br /&gt;
c0   [Sat Oct  6 21:01:05 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Oct  8 00:32:06 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C60D3E&lt;br /&gt;
c0   [Tue Oct  9 03:51:03 2012]  WARNING   Sector repair completed: port=1, LBA=0x89B5EC9&lt;br /&gt;
c0   [Thu Oct 11 04:21:17 2012]  WARNING   Sector repair completed: port=1, LBA=0x13F85833&lt;br /&gt;
c0   [Sat Oct 13 00:01:05 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 13 05:12:40 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA5134&lt;br /&gt;
c0   [Sat Oct 13 21:08:35 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Oct 16 03:53:50 2012]  WARNING   Sector repair completed: port=1, LBA=0x148AA1BD&lt;br /&gt;
c0   [Thu Oct 18 03:20:30 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C8DABCB&lt;br /&gt;
c0   [Thu Oct 18 04:52:50 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879057&lt;br /&gt;
c0   [Sat Oct 20 00:01:04 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 20 02:19:25 2012]  WARNING   Sector repair completed: port=1, LBA=0x174B012&lt;br /&gt;
c0   [Sat Oct 20 03:41:38 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D93B&lt;br /&gt;
c0   [Sat Oct 20 03:42:01 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D93D&lt;br /&gt;
c0   [Sat Oct 20 03:42:40 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D940&lt;br /&gt;
c0   [Sat Oct 20 03:42:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D942&lt;br /&gt;
c0   [Sat Oct 20 03:43:12 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D944&lt;br /&gt;
c0   [Sat Oct 20 03:43:26 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D948&lt;br /&gt;
c0   [Sat Oct 20 16:37:52 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879060&lt;br /&gt;
c0   [Sat Oct 20 16:38:15 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879062&lt;br /&gt;
c0   [Sat Oct 20 21:00:18 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Oct 20 23:49:01 2012]  WARNING   Sector repair completed: port=1, LBA=0x4473E908&lt;br /&gt;
c0   [Sun Oct 21 03:42:26 2012]  WARNING   Sector repair completed: port=1, LBA=0x175BADD5&lt;br /&gt;
c0   [Tue Oct 23 01:09:04 2012]  WARNING   Sector repair completed: port=1, LBA=0x6E524860&lt;br /&gt;
c0   [Fri Oct 26 03:21:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x802C61&lt;br /&gt;
c0   [Fri Oct 26 04:22:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x176353CD&lt;br /&gt;
c0   [Sat Oct 27 00:01:03 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 27 00:49:35 2012]  WARNING   Sector repair completed: port=0, LBA=0x802C65&lt;br /&gt;
c0   [Sat Oct 27 17:02:24 2012]  WARNING   Sector repair completed: port=1, LBA=0xC1FF26D&lt;br /&gt;
c0   [Sat Oct 27 17:09:06 2012]  WARNING   Sector repair completed: port=0, LBA=0xDF621AD&lt;br /&gt;
c0   [Sat Oct 27 21:30:57 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Oct 30 00:20:46 2012]  WARNING   Sector repair completed: port=0, LBA=0xE9FE2AB&lt;br /&gt;
c0   [Wed Oct 31 02:02:03 2012]  WARNING   Sector repair completed: port=0, LBA=0x1460C25&lt;br /&gt;
c0   [Wed Oct 31 02:04:05 2012]  WARNING   Sector repair completed: port=0, LBA=0x1460C28&lt;br /&gt;
c0   [Thu Nov  1 00:48:34 2012]  WARNING   Sector repair completed: port=1, LBA=0xA7C92BE&lt;br /&gt;
c0   [Thu Nov  1 05:04:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C252C2&lt;br /&gt;
&lt;br /&gt;
[root@backup1 /data/deprecated]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for failed drives and errors. From the log above, both drives are steadily accumulating sector repairs and drive 1 (port 1) has even thrown a timeout error, so ports 0 and 1 should probably both be replaced, yet the RAID array, amazingly, still reports healthy. You can also see the weekly automatic verifies.&lt;br /&gt;
&lt;br /&gt;
Note: when rebuilding a degraded array, the CLI will show no rebuild progress (%RCmpl stays at 0):&lt;br /&gt;
&amp;lt;pre&amp;gt;Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u0    RAID-5    REBUILDING     0       -       64K     4656.56   OFF    ON&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
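To make "look for failed drives and errors" concrete, here is a minimal sketch that counts ERROR lines and per-port sector repairs in a saved copy of the controller log. The sample lines are copied from the output above; the file name and grep patterns are illustrative, not part of the 3ware tooling.&lt;br /&gt;

```shell
# Build a small stand-in log file (in practice, save the tw_cli alarm output).
log=$(mktemp)
printf '%s\n' \
  'c0   [Wed Aug  8 03:21:55 2012]  ERROR     Drive timeout detected: port=1' \
  'c0   [Sat May 26 00:42:05 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1AC' \
  'c0   [Sat Jun  2 00:30:17 2012]  WARNING   Sector repair completed: port=0, LBA=0x2B911E4' \
  'c0   [Sat Jun  9 04:02:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5D1' \
  > "$log"

# ERROR lines are the ones that usually mean a drive needs replacing.
errors=$(grep -c 'ERROR' "$log")
# Count sector repairs per port to see which drive is degrading fastest.
repairs_p0=$(grep -c 'Sector repair.*port=0' "$log")
repairs_p1=$(grep -c 'Sector repair.*port=1' "$log")
rm -f "$log"
echo "errors=$errors port0_repairs=$repairs_p0 port1_repairs=$repairs_p1"
```

A drive with a rapidly growing repair count is the first candidate for replacement even while the unit still shows OK.&lt;br /&gt;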
=== areca ===&lt;br /&gt;
&lt;br /&gt;
We are using an Areca controller on backup3.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@newbackup3 ~]# sh /root/verify.sh&lt;br /&gt;
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State&lt;br /&gt;
===============================================================================&lt;br /&gt;
  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(19.7%)&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
 #  Name             Disks TotalCap  FreeCap DiskChannels       State&lt;br /&gt;
===============================================================================&lt;br /&gt;
 1  Raid Set # 00        6 6000.0GB    0.0GB 123456             Checking&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
Date-Time            Device           Event Type            Elapsed Time Errors&lt;br /&gt;
===============================================================================&lt;br /&gt;
2012-12-05 20:40:58  ARC-1160-VOL#00  Start Checking&lt;br /&gt;
2012-12-01 05:06:04  ARC-1160-VOL#00  Complete Init         027:30:45&lt;br /&gt;
2012-11-30 01:35:19  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-06 01:34:52  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-30 01:33:36  ARC-1160-VOL#00  Stop Initialization   000:31:48&lt;br /&gt;
2012-11-30 01:01:47  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-06 00:58:13  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-30 00:57:26  ARC-1160-VOL#00  Stop Initialization   000:57:07&lt;br /&gt;
2012-11-30 00:00:19  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 23:56:48  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:52:58  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:50:14  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:43:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 23:10:07  ARC-1160-VOL#00  Stop Initialization   000:00:56&lt;br /&gt;
2012-11-29 23:09:11  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 23:08:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 23:08:10  ARC-1160-VOL#00  Stop Initialization   000:20:41&lt;br /&gt;
2012-11-29 22:47:29  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 22:46:59  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:45:55  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:44:53  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:42:06  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:40:50  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 22:40:04  ARC-1160-VOL#00  Stop Initialization   000:24:25&lt;br /&gt;
2012-11-29 22:15:38  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 22:15:11  000:000001215B00 Restart Init LBA Point&lt;br /&gt;
2026-08-05 22:15:10  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 21:56:38  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 21:56:12  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 21:56:04  IDE Channel #03  Device Inserted&lt;br /&gt;
2012-11-29 21:55:13  IDE Channel #04  Device Inserted&lt;br /&gt;
2012-11-29 21:55:03  IDE Channel #02  Device Inserted&lt;br /&gt;
2026-08-05 21:53:09  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:51:46  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:49:56  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:48:29  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:46:29  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:44:49  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:43:01  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:36:25  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:31:18  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:30:08  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:08:40  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:06:11  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:05:14  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:03:58  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:00:56  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:57:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:56:15  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:55:05  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 17:24:36  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 17:22:43  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:50:42  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:47:33  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:43:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:18:52  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:17:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:13:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:10:26  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:09:23  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:08:09  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:07:12  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:05:51  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:04:27  H/W MONITOR      Raid Powered On&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for failed drives and errors. &lt;br /&gt;
&lt;br /&gt;
When it proceeds to verifying, you can confirm with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@newbackup3 ~]# cli64 vsf info&lt;br /&gt;
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State&lt;br /&gt;
===============================================================================&lt;br /&gt;
  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(22.5%)&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
[root@newbackup3 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
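If you want to track progress in a script, the percentage can be pulled out of the &amp;lt;tt&amp;gt;cli64 vsf info&amp;lt;/tt&amp;gt; output with sed. A sketch, fed here from a sample line copied from above since cli64 only exists on the box itself:&lt;br /&gt;

```shell
# Parse the "Checking(NN.N%)" field out of a cli64 vsf info line.
line='  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(22.5%)'
pct=$(printf '%s\n' "$line" | sed -n 's/.*Checking(\([0-9.]*\)%).*/\1/p')
echo "verify progress: ${pct}%"
```

On the live system you would pipe &amp;lt;tt&amp;gt;cli64 vsf info&amp;lt;/tt&amp;gt; into the same sed expression.&lt;br /&gt;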
&lt;br /&gt;
== Update OS list ==&lt;br /&gt;
&lt;br /&gt;
# check for any new VZ templates we want to offer: &amp;lt;tt&amp;gt;vzup2date -z&amp;lt;/tt&amp;gt;&lt;br /&gt;
# see if there are any OSes we want to include in our colo install list. Update two places: &amp;lt;tt&amp;gt;signup/html/colo_quote.html&amp;lt;/tt&amp;gt; &amp;amp; &amp;lt;tt&amp;gt;signup/html/step1.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
# update the mgmt database (ref_templates table, ref_systems table).&lt;br /&gt;
&lt;br /&gt;
= Infrequent tasks =&lt;br /&gt;
&lt;br /&gt;
== Free up space on gateway ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;newgateway /var/spool# cd clientmqueue/&lt;br /&gt;
newgateway /var/spool/clientmqueue# sh&lt;br /&gt;
# for f in `ls`; do rm $f; done&lt;br /&gt;
exit&amp;lt;/pre&amp;gt;&lt;br /&gt;
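The &amp;lt;tt&amp;gt;for f in `ls`&amp;lt;/tt&amp;gt; loop exists because a plain &amp;lt;tt&amp;gt;rm *&amp;lt;/tt&amp;gt; fails with "argument list too long" on a huge clientmqueue. An alternative sketch using find, demonstrated on a scratch directory standing in for /var/spool/clientmqueue:&lt;br /&gt;

```shell
# find -delete removes arbitrarily many files without building an argument
# list, and without needing a subshell. $spool stands in for the real spool.
spool=$(mktemp -d)
touch "$spool/qf001" "$spool/df001" "$spool/qf002"
find "$spool" -type f -delete
left=$(find "$spool" -type f | wc -l)
rmdir "$spool"
echo "files left: $left"
```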
&lt;br /&gt;
== Free up space on mail ==&lt;br /&gt;
&lt;br /&gt;
You can clear out root mail:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /var/log# ll -h /var/mail/root&lt;br /&gt;
-rw-------  1 root  mail    543K Dec 19 13:05 /var/mail/root&lt;br /&gt;
mail /var/log# rm /var/mail/root&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
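Instead of removing the mbox outright, you can truncate it, which keeps /var/mail/root's ownership and permissions in place so the next delivery doesn't have to recreate it. A sketch on a scratch file standing in for the real mbox:&lt;br /&gt;

```shell
# Truncate an mbox in place rather than rm'ing it.
mbox=$(mktemp)                      # stand-in for /var/mail/root
printf 'From root@mail\nold mail body\n' > "$mbox"
: > "$mbox"                         # truncate to zero bytes, file stays put
size=$(wc -c < "$mbox")
rm -f "$mbox"
echo "mbox size: $size"
```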
&lt;br /&gt;
Or you can archive mail logs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /var/log# ls -l htt*&lt;br /&gt;
-rw-r--r--  1 root  wheel  297436931 Dec 19 13:26 httpd-access.log&lt;br /&gt;
-rw-r--r--  1 root  wheel    9824324 Jul  4 11:34 httpd-access.log.old.0.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    6884137 Mar 17  2012 httpd-access.log.old.1.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   18557444 Dec  3  2009 httpd-access.log.old.10.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14740263 Jan  9  2007 httpd-access.log.old.11.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14209465 Nov 28  2007 httpd-access.log.old.12.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   16874396 Feb 19  2012 httpd-access.log.old.3.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14554859 Jul 22  2011 httpd-access.log.old.4.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10513227 Feb 18  2011 httpd-access.log.old.5.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    7201946 Oct 29  2010 httpd-access.log.old.6.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10062537 May  6  2010 httpd-access.log.old.7.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10157042 Aug 12  2010 httpd-access.log.old.8.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   11909534 Mar  4  2010 httpd-access.log.old.9.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   59030930 Dec 19 13:01 httpd-error.log&lt;br /&gt;
-rw-r--r--  1 root  wheel    3413134 Mar  4  2010 httpd-error.log.0.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel     795515 May  1  2007 httpd-error.log.1.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    1142153 Nov 30  2007 httpd-error.log.2.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    2325801 Feb 18  2011 httpd-error.log.gz&lt;br /&gt;
&lt;br /&gt;
mail /var/log# sh&lt;br /&gt;
# for f in 12 11 10 9 8 7 6 5 4 3 2 1 0; do g=`echo $f+1|bc`; mv httpd-access.log.old.$f.gz httpd-access.log.old.$g.gz; done&lt;br /&gt;
# mv httpd-access.log httpd-access.log.old.0&lt;br /&gt;
# touch httpd-access.log&lt;br /&gt;
# apachectl restart&lt;br /&gt;
# gzip httpd-access.log.old.0&lt;br /&gt;
&lt;br /&gt;
# for f in 2 1 0; do g=`echo $f+1|bc`; mv httpd-error.log.$f.gz httpd-error.log.$g.gz; done&lt;br /&gt;
# mv httpd-error.log httpd-error.log.0&lt;br /&gt;
# touch httpd-error.log&lt;br /&gt;
# apachectl restart&lt;br /&gt;
# gzip httpd-error.log.0&lt;br /&gt;
# exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
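The manual shuffle above can be wrapped in a small function. &amp;lt;tt&amp;gt;rotate_logs&amp;lt;/tt&amp;gt; below is a hypothetical helper, demonstrated on scratch files rather than the live logs; on mail you would still run &amp;lt;tt&amp;gt;apachectl restart&amp;lt;/tt&amp;gt; after rotating so Apache reopens its file handles.&lt;br /&gt;

```shell
# rotate_logs BASE MAX: shift BASE.old.N.gz up by one for N=MAX..0, then
# move the live BASE to BASE.old.0, recreate it empty, and compress .old.0.
rotate_logs() {
    base=$1 max=$2
    for i in $(seq "$max" -1 0); do
        [ -f "$base.old.$i.gz" ] && mv "$base.old.$i.gz" "$base.old.$((i + 1)).gz"
    done
    mv "$base" "$base.old.0"
    touch "$base"
    gzip "$base.old.0"
}

dir=$(mktemp -d); cd "$dir"
printf 'older entries\n' > httpd-access.log.old.0.gz   # fake pre-existing archive
printf 'current entries\n' > httpd-access.log
rotate_logs httpd-access.log 0
ls
```

After the call, the old archive has shifted to .old.1.gz, the former live log is compressed as .old.0.gz, and an empty httpd-access.log remains.&lt;br /&gt;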
&lt;br /&gt;
== Free up space on bwdb2 ==&lt;br /&gt;
&lt;br /&gt;
You can either remove items from &amp;lt;tt&amp;gt;/usr/home/archive&amp;lt;/tt&amp;gt; or scp them to backup3:/data/bwdb2/archive.&lt;br /&gt;
&lt;br /&gt;
== Free up space on backup1 ==&lt;br /&gt;
&lt;br /&gt;
backup1 is our primary customer backup system. As usage grows, it needs to be purged of old files regularly; the easiest way is to remove deprecated files, which mostly consist of cancelled customers and temporary dump/storage files created during dumps and restores. Our standard policy is to keep a cancelled customer's files for six months before removing them: as far as customers know, their data is purged immediately, but we hang onto it just in case, and since some servers are cancelled for non-payment, this makes it easy to restore the system. To find files to remove:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 ~]# cd /data/deprecated/&lt;br /&gt;
[root@backup1 /data/deprecated]# ls&lt;br /&gt;
2101-migrated-20120317.tgz                old-683-cxld-20121021.tgz&lt;br /&gt;
69.55.230.2-wwwbackup                     old-744-cxld-20120708.tgz&lt;br /&gt;
991-DONTDELETE.tgz                        old-809-cxld-20120609.tgz&lt;br /&gt;
archive-col02050-mdfile-cxld-20120409.gz  old-854-cxld-20120621.tgz&lt;br /&gt;
col01371.tgz                              old-931-cxld-20060513.tgz&lt;br /&gt;
deleteme_ubuntu-10.10-x86_20111205        old-col00123-mdfile-noarchive-20120417.gz&lt;br /&gt;
jail10_old                                old-col00147-vnfile-cxld-20120828.gz&lt;br /&gt;
jail14_rsync_old                          old-col00419-dump-cxld-20120224.gz&lt;br /&gt;
jail15_old                                old-col01098-vnfile-cxld-20120827.gz&lt;br /&gt;
jail3_old                                 old-col01278-dump-cxld-20120822&lt;br /&gt;
jail4_old                                 old-col01517-dump-cxld-20120828&lt;br /&gt;
jail5_old                                 old-col01669-dump-cxld-20120203.gz&lt;br /&gt;
old-1009-cxld-20120608.tgz                old-col01687-dump-cxld-20120909&lt;br /&gt;
old-1012-cxld-20120411.tgz                old-col01790-dump-cxld-20120828&lt;br /&gt;
old-1052-cxld-20120721.tgz                old-col01812-dump-cxld-20120820&lt;br /&gt;
old-10631-cxld-20120622.tgz               old-col01938-mdfile-cxld-20120619.gz&lt;br /&gt;
old-10632-cxld-20120622.tgz               old-col02095-mdfile-noarchive-20120523.gz&lt;br /&gt;
old-10633-cxld-20120622.tgz               olddebian-3.0-v15-20110610.tgz&lt;br /&gt;
old-1236-cxld-20120621.tgz                oldmod_frontpage-deb30-v15-20110610.tgz&lt;br /&gt;
old-1381-cxld-20120404.tgz                oldmod_perl-deb30-v15-20110610.tgz&lt;br /&gt;
old-1422-cxld-20120721.tgz                oldmod_ssl-deb30-v15-20110610.tgz&lt;br /&gt;
old-14681-cxld-20120619.tgz               oldmysql-deb30-v15-20110610.tgz&lt;br /&gt;
old-1544-cxld-20120626.tgz                oldproftpd-deb30-v15-20110610.tgz&lt;br /&gt;
old-18351-cxld-20120605.tgz               old_virt14&lt;br /&gt;
old-1853-cxld-20120910.tgz                old_virt18&lt;br /&gt;
old-1963-cxld-20120206.tgz                oldwebmin-deb30-v15-20110610.tgz&lt;br /&gt;
old-1967-cxld-20120605.tgz                suse.virt11.20120421.tgz&lt;br /&gt;
old-1981-noarchive-20120729.tgz           virt11&lt;br /&gt;
old-2030-migrated-noarchive-20120727.tgz  virt12_old&lt;br /&gt;
old-2037-cxld-20120716.tgz                virt13_old&lt;br /&gt;
old-2065-cxld-20120727.tgz                virt16_old&lt;br /&gt;
old-2068-cxld-20120424.tgz                virt4_old&lt;br /&gt;
old-2085-cxld-20120531.tgz                virt5_old&lt;br /&gt;
old-364-cxld-20120904.tgz                 virt6_old&lt;br /&gt;
old-446-cxld-20120512.tgz                 virt7_old&lt;br /&gt;
old-613-cxld-20120601.tgz                 virt8_old&lt;br /&gt;
[root@backup1 /data/deprecated]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
virtX_old and jailX_old are permanently archived, so ignore those, as well as anything else marked not to delete or otherwise suspicious. Likewise, it&#039;s probably a good idea to hang onto oldTEMPLATE.gz as long as we can.&lt;br /&gt;
Most of the files we want to delete are dated with when they were deprecated, which makes this easy. So, to remove files from 6 months ago (running this in October):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 /data/deprecated]# ls old*201204*&lt;br /&gt;
old-1012-cxld-20120411.tgz  old-2068-cxld-20120424.tgz&lt;br /&gt;
old-1381-cxld-20120404.tgz  old-col00123-mdfile-noarchive-20120417.gz&lt;br /&gt;
[root@backup1 /data/deprecated]# rm old*201204*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
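The month glob can be computed instead of typed by hand. A sketch assuming GNU date (backup1 shows a Linux-style [root@backup1] prompt):

```shell
# Stamp for the month six months back, e.g. 201204 when run in Oct 2012
stamp=$(date -d '-6 months' +%Y%m)

# Preview first; uncomment the rm once the listing looks right
ls old*"$stamp"* 2>/dev/null || true
# rm old*"$stamp"*
```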
&lt;br /&gt;
Every few months you will also want to remove some of the snapshot archives for mail. We typically save the 1st, 10th, and 20th of each month. To do this, set aside the dates you want to save, remove a month at a time, then restore the set-aside dates. Here&#039;s how that works:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 /data/www/daily]# ls&lt;br /&gt;
05                     08-10-11  10-04-10  11-10-10  12-07-29  12-09-21  12-11-14&lt;br /&gt;
06                     08-10-21  10-04-20  11-10-20  12-07-30  12-09-22  12-11-15&lt;br /&gt;
06-06-01-usr-home.tgz  08-11-01  10-05-01  11-11-01  12-07-31  12-09-23  12-11-16&lt;br /&gt;
06-07-01-usr-home.tgz  08-11-10  10-05-11  11-11-10  12-08-01  12-09-24  12-11-17&lt;br /&gt;
06-08-01-usr-home.tgz  08-11-20  10-05-20  11-11-20  12-08-02  12-09-25  12-11-18&lt;br /&gt;
06-09-01-usr-home.tgz  08-12-01  10-06-01  11-12-01  12-08-03  12-09-26  12-11-19&lt;br /&gt;
06-11-10               08-12-10  10-06-10  11-12-10  12-08-04  12-09-27  12-11-20&lt;br /&gt;
06-12-21               08-12-20  10-06-20  11-12-20  12-08-05  12-09-28  12-11-21&lt;br /&gt;
07-01-10               09-01-01  10-07-01  12-01-01  12-08-06  12-09-29  12-11-22&lt;br /&gt;
07-01-20               09-01-10  10-07-10  12-01-10  12-08-07  12-09-30  12-11-23&lt;br /&gt;
07-02-10               09-01-20  10-07-20  12-01-20  12-08-08  12-10-01  12-11-24&lt;br /&gt;
07-02-20               09-02-01  10-08-01  12-02-01  12-08-09  12-10-02  12-11-25&lt;br /&gt;
07-03-01               09-02-10  10-08-10  12-02-10  12-08-10  12-10-03  12-11-26&lt;br /&gt;
07-03-20               09-02-20  10-08-20  12-02-20  12-08-11  12-10-04  12-11-27&lt;br /&gt;
07-04-01               09-03-01  10-09-01  12-03-01  12-08-12  12-10-05  12-11-28&lt;br /&gt;
07-04-10               09-03-10  10-09-10  12-03-10  12-08-13  12-10-06  12-11-29&lt;br /&gt;
07-04-20               09-03-20  10-09-20  12-03-20  12-08-14  12-10-07  12-11-30&lt;br /&gt;
07-05-01               09-04-01  10-10-01  12-04-01  12-08-15  12-10-08  12-12-01&lt;br /&gt;
07-05-10               09-04-10  10-10-10  12-04-10  12-08-16  12-10-09  12-12-02&lt;br /&gt;
07-05-20               09-04-20  10-10-20  12-04-20  12-08-17  12-10-10  12-12-03&lt;br /&gt;
07-06-01               09-05-01  10-11-01  12-05-01  12-08-18  12-10-11  12-12-04&lt;br /&gt;
07-06-10               09-05-10  10-11-10  12-05-10  12-08-19  12-10-12  12-12-05&lt;br /&gt;
07-06-20               09-05-20  10-11-20  12-05-20  12-08-20  12-10-13  12-12-06&lt;br /&gt;
07-07-20               09-06-01  10-12-01  12-06-01  12-08-21  12-10-14  12-12-07&lt;br /&gt;
07-08-10               09-06-10  10-12-10  12-06-10  12-08-22  12-10-15  12-12-08&lt;br /&gt;
07-08-20               09-06-20  10-12-20  12-06-20  12-08-23  12-10-16  12-12-09&lt;br /&gt;
07-09-01               09-07-01  11-01-01  12-07-01  12-08-24  12-10-17  12-12-10&lt;br /&gt;
07-10-01               09-07-10  11-01-10  12-07-02  12-08-25  12-10-18  12-12-11&lt;br /&gt;
07-10-10               09-07-20  11-01-21  12-07-03  12-08-26  12-10-19  12-12-12&lt;br /&gt;
07-10-20               09-08-01  11-02-01  12-07-04  12-08-27  12-10-20  12-12-13&lt;br /&gt;
07-12-01               09-08-10  11-02-10  12-07-05  12-08-28  12-10-21  12-12-14&lt;br /&gt;
07-12-10               09-08-20  11-02-20  12-07-06  12-08-29  12-10-22  12-12-15&lt;br /&gt;
08-01-01               09-09-01  11-03-01  12-07-07  12-08-30  12-10-23  12-12-16&lt;br /&gt;
08-01-20               09-09-10  11-03-10  12-07-08  12-08-31  12-10-24  12-12-17&lt;br /&gt;
08-02-20               09-09-20  11-03-20  12-07-09  12-09-01  12-10-25  12-12-18&lt;br /&gt;
08-03-01               09-10-01  11-04-01  12-07-10  12-09-02  12-10-26  12-12-19&lt;br /&gt;
08-03-10               09-10-10  11-04-10  12-07-11  12-09-03  12-10-27  12-12-20&lt;br /&gt;
08-03-20               09-10-20  11-04-20  12-07-12  12-09-04  12-10-28  12-12-21&lt;br /&gt;
08-04-01               09-11-01  11-05-01  12-07-13  12-09-05  12-10-29  12-12-22&lt;br /&gt;
08-04-20               09-11-10  11-05-10  12-07-14  12-09-06  12-10-30  12-12-23&lt;br /&gt;
08-05-01               09-11-20  11-05-20  12-07-15  12-09-07  12-10-31  12-12-24&lt;br /&gt;
08-05-10               09-12-01  11-06-01  12-07-16  12-09-08  12-11-01  12-12-25&lt;br /&gt;
08-06-10               09-12-10  11-06-10  12-07-17  12-09-09  12-11-02  12-12-26&lt;br /&gt;
08-06-20               09-12-20  11-06-20  12-07-18  12-09-10  12-11-03  12-12-27&lt;br /&gt;
08-07-02               10-01-01  11-07-01  12-07-19  12-09-11  12-11-04  12-12-28&lt;br /&gt;
08-07-10               10-01-10  11-07-10  12-07-20  12-09-12  12-11-05  2008-10-23&lt;br /&gt;
08-07-20               10-01-20  11-07-20  12-07-21  12-09-13  12-11-06  bb.tgz&lt;br /&gt;
08-08-01               10-02-01  11-08-01  12-07-22  12-09-14  12-11-07  boot&lt;br /&gt;
08-08-10               10-02-10  11-08-10  12-07-23  12-09-15  12-11-08  current&lt;br /&gt;
08-08-21               10-02-20  11-08-20  12-07-24  12-09-16  12-11-09  hold&lt;br /&gt;
08-09-01               10-03-01  11-09-01  12-07-25  12-09-17  12-11-10&lt;br /&gt;
08-09-10               10-03-10  11-09-10  12-07-26  12-09-18  12-11-11&lt;br /&gt;
08-09-21               10-03-20  11-09-20  12-07-27  12-09-19  12-11-12&lt;br /&gt;
08-10-01               10-04-01  11-10-01  12-07-28  12-09-20  12-11-13&lt;br /&gt;
[root@backup1 /data/www/daily]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So we see that everything up to July 2012 has been pruned. To prune July 2012 we do the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mv 12-07-01 hold&lt;br /&gt;
mv 12-07-10 hold&lt;br /&gt;
mv 12-07-20 hold&lt;br /&gt;
rm -fr 12-07*&lt;br /&gt;
mv hold/* .&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=Routine_Maintenance&amp;diff=2204</id>
		<title>Routine Maintenance</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=Routine_Maintenance&amp;diff=2204"/>
		<updated>2020-02-25T17:59:05Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Outgoing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Daily Tasks =&lt;br /&gt;
&lt;br /&gt;
== check load graphs == &lt;br /&gt;
&lt;br /&gt;
Click on the Load link in mgmt &lt;br /&gt;
&lt;br /&gt;
This screen shows you load levels on our servers and network traffic for critical machines (firewalls, backup servers).&lt;br /&gt;
&lt;br /&gt;
If you see load high or increasing &lt;br /&gt;
&lt;br /&gt;
FreeBSD: &lt;br /&gt;
run [[VPS_Management#jtop|jtop]] (or [[VPS_Management#jt|jt]] &amp;gt; 7.x) and see if there are any runaway processes.  Here are some examples of entries in top that are&lt;br /&gt;
definitely runaway processes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;79481 root      64   0  2256K  1056K CPU1   1  58:16 87.40% 87.40% nano&lt;br /&gt;
50650   1000    64   0  1852K  1112K RUN    0 207.9H 84.08% 84.08% screen&lt;br /&gt;
14829 www        2   0 39100K 31736K accept 0  104:24  46.54%  6.54% httpd&lt;br /&gt;
42065 root      61   0  1300K   844K RUN    1  47.8H 91.36% 91.36% ee&lt;br /&gt;
1328 www       56   0 18440K 10796K CPU1   0  64.4H 97.71% 97.71% httpd&lt;br /&gt;
26251 user      57   0  6124K  1160K CPU1   1  82.9H 98.44% 98.44% screen&lt;br /&gt;
89874 root      60   0  1352K   892K RUN    1  33.8H 65.82% 65.82% dialog&lt;br /&gt;
38656   1000    64   0  3088K  2136K CPU0   0 806:13 97.95% 97.95% StutBot&lt;br /&gt;
27630 root      64   0  1396K   972K RUN    1  76.8H 86.47% 86.47% ee&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Linux:&lt;br /&gt;
run [[VPS_Management#vwe|vwe]] to see which VPSes have high loads. From there run &amp;lt;tt&amp;gt;[[VPS_Management#vp|vp]] &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt; and/or &amp;lt;tt&amp;gt;[[VPS_Management#vt|vt]] &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt; to see what&#039;s going on in that system.&lt;br /&gt;
[[VPS_Management#vzstat|vzstat]] will also give you a nice picture of what&#039;s going on; systems with high numbers in the mlat column are likely culprits.&lt;br /&gt;
&lt;br /&gt;
examples of out of control procs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;12183 nobody    16   0  4916 1348  1340 R    45.5  0.0  4249m httpd&lt;br /&gt;
29266 #502      16   0  1852  796   792 R    22.5  0.0  1104m vim&lt;br /&gt;
23860 #41       16   0  5472 5472  2076 R    98.9  0.2  31:41 python&lt;br /&gt;
19227 bin       19   0  1688  716   652 R    99.9  0.0 321:08 wtrs_ui&lt;br /&gt;
 7762 apache    16   0   268  236   224 R    85.7  0.0  1010m ptrace&lt;br /&gt;
 4624 #501      20   0  4304 2400  2044 R    53.6  0.1 284:32 YoSucker&lt;br /&gt;
20451 #506      20   0  1876  820   816 R    17.2  0.0 169:35 vim&lt;br /&gt;
 8834 #514      20   0   900  724   672 R    77.6  0.0 382:30 neostats&lt;br /&gt;
31815 apache    14   0  3176 3176  1696 R    74.4  0.1   6:15 counter&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Just kill -9 them and be done with it.&lt;br /&gt;
&lt;br /&gt;
Also, anytime you see `kmod` or `ptrace` - kill those immediately no matter how much CPU they are using - they are attempts to exploit the Linux ptrace bug.  They won&#039;t work, but they suck a lot of CPU...&lt;br /&gt;
&lt;br /&gt;
Also, any other processes that are at 90-100% CPU usage and have been running for a long period of time should be killed - except for mysqld processes on FreeBSD. See below.&lt;br /&gt;
&lt;br /&gt;
However, there is an exception:&lt;br /&gt;
&lt;br /&gt;
if it is a mysqld, we don&#039;t want to kill their database.  What you want to do is &amp;lt;tt&amp;gt;[[VPS_Management#jpid|jpid]] &amp;lt;pid&amp;gt;&amp;lt;/tt&amp;gt; to see who owns it, and then email them the paste containing the instructions for the nanny. Or you can simply do a &amp;lt;tt&amp;gt;kill -1 PID&amp;lt;/tt&amp;gt; on the process to restart it.&lt;br /&gt;
&lt;br /&gt;
=== Load averages jump at night ===&lt;br /&gt;
&lt;br /&gt;
The load averages on the FreeBSD systems may jump at night, between 1 and 4 am, because the backups are running. If that&#039;s what is causing the jump, you will see processes like `rsync` in top eating a lot of CPU time.&lt;br /&gt;
&lt;br /&gt;
== check backups ==&lt;br /&gt;
&lt;br /&gt;
mgmt -&amp;gt; Monitoring -&amp;gt; Backups and make sure every machine was backed up the previous night.&lt;br /&gt;
Also look at df on backup1 and backup2 to make sure no disk is approaching full, though bb should warn us in advance. Please note: if the backup script on any given system hits an error when it runs (for example, the directory to be backed up has been moved or no longer exists), it will generate an email to support@johncompanies.com, so you will know immediately the next day. A paste exists to notify the customer of a non-existent file/dir.&lt;br /&gt;
&lt;br /&gt;
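A quick way to spot a disk approaching full on the backup hosts (the 90% threshold here is an assumption; bb normally alerts first):

```shell
# Flag any filesystem over 90% used (threshold is an assumption);
# prints nothing when everything is fine
df -hP | awk 'NR > 1 { use = $5; gsub(/%/, "", use); if (use + 0 > 90) print $6 " is " $5 " full" }'
```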
== check bb for warnings  ==&lt;br /&gt;
mgmt -&amp;gt; BigBrother&lt;br /&gt;
&lt;br /&gt;
Some events don&#039;t generate pages (on purpose). You will only see them by going to the bb main page.&lt;br /&gt;
&lt;br /&gt;
== check jail5 for crashed VPSs  ==&lt;br /&gt;
&lt;br /&gt;
On jail5&lt;br /&gt;
  &lt;br /&gt;
  notrunning&lt;br /&gt;
&lt;br /&gt;
To restart a VPS &lt;br /&gt;
&lt;br /&gt;
  vm restart col0XXXX&lt;br /&gt;
&lt;br /&gt;
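If several VPSs are down at once, the same two tools can be looped. A hypothetical wrapper - it assumes notrunning prints one col0XXXX ID per line, which the runbook does not confirm:

```shell
# Restart everything `notrunning` reports; both commands are jail5's
# local tools, so this only makes sense on jail5 itself
if command -v notrunning >/dev/null; then
    for id in $(notrunning); do
        vm restart "$id"
    done
fi
```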
== check NetHere  ==&lt;br /&gt;
&lt;br /&gt;
Check the NetHere servers&lt;br /&gt;
&lt;br /&gt;
=== Mail systems ===&lt;br /&gt;
Check for possible SPAMMERs.&lt;br /&gt;
&lt;br /&gt;
==== Incoming ====&lt;br /&gt;
Check the count of customer logins on mta-1 and mta-2 for possible customer SPAM compromises.&lt;br /&gt;
&lt;br /&gt;
  login_count /logs/maillog | tail -30&lt;br /&gt;
  &lt;br /&gt;
==== Outgoing ====&lt;br /&gt;
&lt;br /&gt;
Check outgoing queues on relay-1 and relay-2&lt;br /&gt;
  &lt;br /&gt;
  mail_count | tail -30&lt;br /&gt;
&lt;br /&gt;
To clean up the outgoing queues of unwanted SPAM on relay-1 and relay-2:&lt;br /&gt;
&lt;br /&gt;
  mail_cleanup [&amp;lt;sender&#039;s domain&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
=== Nagios ===&lt;br /&gt;
&lt;br /&gt;
Check for other problems on NetHere servers&lt;br /&gt;
&lt;br /&gt;
  https://nagios.nethere.net&lt;br /&gt;
&lt;br /&gt;
=== Cacti ===&lt;br /&gt;
&lt;br /&gt;
Check bandwidth usage on servers&lt;br /&gt;
&lt;br /&gt;
  https://cacti.nethere.net&lt;br /&gt;
&lt;br /&gt;
= Monthly Tasks =&lt;br /&gt;
== rotate pine sent mail (1st of month) ==&lt;br /&gt;
On the 1st of the month, before any emails are sent out, quit out of pine, then log back in. Sent mail from last month will be archived.&lt;br /&gt;
If you mess up and do it on the 3rd (for example), you can go into the previous month&#039;s saved email and save emails from the current month into the &amp;lt;tt&amp;gt;sent-mail&amp;lt;/tt&amp;gt; (current month) mailbox.&lt;br /&gt;
&lt;br /&gt;
== b/w caps ==&lt;br /&gt;
On the 1st: remove any bwcaps put into the firewall (only really applies if a bwcap was added because someone went over on b/w):&lt;br /&gt;
&lt;br /&gt;
 ipfw list|grep pipe&lt;br /&gt;
 ipfw del [each rule listed]&lt;br /&gt;
&lt;br /&gt;
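The two steps can be combined into one pass. A sketch - the rule number is taken from the first field of ipfw list output, and the guard makes it a no-op on hosts without ipfw:

```shell
# Delete every pipe rule currently in the firewall; rule number is field 1
if command -v ipfw >/dev/null; then
    for n in $(ipfw list | grep pipe | awk '{ print $1 }'); do
        ipfw del "$n"
    done
fi
```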
NOTE: this cronjob on newgateway will do some of that for you, provided you used one of the following pipe #s:&lt;br /&gt;
 0 0 1 * * /sbin/ipfw del 3  4 5 17331&lt;br /&gt;
&lt;br /&gt;
-------------------&lt;br /&gt;
&lt;br /&gt;
We really don’t do this anymore since we have centralized traffic accounting with netflow, but for posterity:&lt;br /&gt;
&lt;br /&gt;
Make sure all machines reset counters to 0 after midnight on the 1st&lt;br /&gt;
Make sure they dumped a counter&lt;br /&gt;
&lt;br /&gt;
On each jail run:&lt;br /&gt;
 trafficgather.pl&lt;br /&gt;
&lt;br /&gt;
And on each virt:&lt;br /&gt;
 linuxtrafficgather.pl&lt;br /&gt;
&lt;br /&gt;
== Monthly RAID checks ==&lt;br /&gt;
&lt;br /&gt;
Every month we check the health of and verify the parity on all our RAID-based systems.&lt;br /&gt;
To facilitate this, we&#039;ve created a simple script to start the process:&lt;br /&gt;
&lt;br /&gt;
 sh /root/verify.sh&lt;br /&gt;
&lt;br /&gt;
=== Adaptec controllers ===&lt;br /&gt;
Here&#039;s some sample output:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /usr/local/www/scripts# sh /root/verify.sh&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; container list /f&lt;br /&gt;
Executing: container list /full=TRUE&lt;br /&gt;
Num          Total  Oth Chunk          Scsi   Partition&lt;br /&gt;
Creation        System&lt;br /&gt;
Label Type   Size   Ctr Size   Usage   B:ID:L Offset:Size   State   RO Lk Task    Done%  Ent&lt;br /&gt;
Date   Time      Files&lt;br /&gt;
----- ------ ------ --- ------ ------- ------ ------------- ------- -- -- ------- ------ ---&lt;br /&gt;
------ -------- ------&lt;br /&gt;
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
 /dev/aacd0           mirror0          0:00:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
&lt;br /&gt;
 1    Mirror 33.9GB            Open    0:02:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
 /dev/aacd1           mirror1          0:03:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; disk list /f&lt;br /&gt;
Executing: disk list /full=TRUE&lt;br /&gt;
&lt;br /&gt;
B:ID:L  Device Type     Removable media  Vendor-ID Product-ID        Rev   Blocks    Bytes/Bl&lt;br /&gt;
ock Usage            Shared Rate&lt;br /&gt;
------  --------------  ---------------  --------- ----------------  ----- --------- --------&lt;br /&gt;
--- ---------------- ------ ----&lt;br /&gt;
0:00:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:01:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:02:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:03:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; disk show smart&lt;br /&gt;
Executing: disk show smart&lt;br /&gt;
&lt;br /&gt;
        Smart    Method of         Enable&lt;br /&gt;
        Capable  Informational     Exception  Performance  Error&lt;br /&gt;
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count&lt;br /&gt;
------  -------  ----------------  ---------  -----------  ------&lt;br /&gt;
0:00:0     Y            6             Y           N             0&lt;br /&gt;
0:01:0     Y            6             Y           N             0&lt;br /&gt;
0:02:0     Y            6             Y           N             0&lt;br /&gt;
0:03:0     Y            6             Y           N             0&lt;br /&gt;
0:06:0     N&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; task list&lt;br /&gt;
Executing: task list&lt;br /&gt;
&lt;br /&gt;
Controller Tasks&lt;br /&gt;
&lt;br /&gt;
TaskId Function  Done%  Container State Specific1 Specific2&lt;br /&gt;
------ -------- ------- --------- ----- --------- ---------&lt;br /&gt;
&lt;br /&gt;
No tasks currently running on controller&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; dia sh hi&lt;br /&gt;
Executing: diagnostic show history&lt;br /&gt;
No switches specified, defaulting to &amp;quot;/current&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 *** HISTORY BUFFER FROM CURRENT CONTROLLER RUN ***&lt;br /&gt;
&lt;br /&gt;
[00]: GetDiskLogEntry: container - 1, entry return 0&lt;br /&gt;
[01]: Container 1 started SCRUB task&lt;br /&gt;
[02]: Starting Mirror:1 scrub&lt;br /&gt;
[03]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[04]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[05]: UpdateDiskLogIndex - Set   - container 0, index 1&lt;br /&gt;
[06]: GetDiskLogEntry: container - 0, entry return 1&lt;br /&gt;
[07]: Container 0 started SCRUB task&lt;br /&gt;
[08]: Starting Mirror:0 scrub&lt;br /&gt;
[09]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[10]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[11]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[12]: Clear disk log: sector - 80, driveno 2&lt;br /&gt;
[13]: Clear disk log: sector - 80, driveno 3&lt;br /&gt;
[14]: Container 1 completed SCRUB task:&lt;br /&gt;
[15]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[16]: Clear disk log: sector - 81, driveno 1&lt;br /&gt;
[17]: Clear disk log: sector - 81, driveno 0&lt;br /&gt;
[18]: Container 0 completed SCRUB task:&lt;br /&gt;
[19]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[20]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[21]: Container 0 started SCRUB task&lt;br /&gt;
[22]: Starting Mirror:0 scrub&lt;br /&gt;
[23]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[24]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[25]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[26]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[27]: Container 1 started SCRUB task&lt;br /&gt;
[28]: Starting Mirror:1 scrub&lt;br /&gt;
[29]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[30]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[31]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[32]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[33]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[34]: Container 1 completed SCRUB task:&lt;br /&gt;
[35]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[36]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[37]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[38]: Container 0 completed SCRUB task:&lt;br /&gt;
[39]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[40]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[41]: Container 0 started SCRUB task&lt;br /&gt;
[42]: Starting Mirror:0 scrub&lt;br /&gt;
[43]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[44]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[45]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[46]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[47]: Container 1 started SCRUB task&lt;br /&gt;
[48]: Starting Mirror:1 scrub&lt;br /&gt;
[49]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[50]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[51]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[52]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[53]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[54]: Container 1 completed SCRUB task:&lt;br /&gt;
[55]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[56]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[57]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[58]: Container 0 completed SCRUB task:&lt;br /&gt;
[59]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[60]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[61]: Container 0 started SCRUB task&lt;br /&gt;
[62]: Starting Mirror:0 scrub&lt;br /&gt;
[63]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[64]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[65]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[66]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[67]: Container 1 started SCRUB task&lt;br /&gt;
[68]: Starting Mirror:1 scrub&lt;br /&gt;
[69]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[70]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[71]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[72]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[73]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[74]: Container 1 completed SCRUB task:&lt;br /&gt;
[75]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[76]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[77]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[78]: Container 0 completed SCRUB task:&lt;br /&gt;
[79]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[80]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[81]: Container 0 started SCRUB task&lt;br /&gt;
[82]: Starting Mirror:0 scrub&lt;br /&gt;
[83]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[84]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[85]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[86]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[87]: Container 1 started SCRUB task&lt;br /&gt;
[88]: Starting Mirror:1 scrub&lt;br /&gt;
[89]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[90]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[91]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[92]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[93]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[94]: Container 1 completed SCRUB task:&lt;br /&gt;
[95]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[96]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[97]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[98]: Container 0 completed SCRUB task:&lt;br /&gt;
[99]:&lt;br /&gt;
&lt;br /&gt;
========================&lt;br /&gt;
History Output Complete.&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt;&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 0&lt;br /&gt;
Executing: container scrub 0&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 1&lt;br /&gt;
Executing: container scrub 1&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
when done run:                                                                       &lt;br /&gt;
&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nov  1 10:32:46 mail /kernel: aac0: **Monitor** Container 0 started SCRUB task&lt;br /&gt;
Nov  1 10:32:47 mail /kernel: aac0: **Monitor** Container 1 started SCRUB task&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here&#039;s an analysis of what we&#039;re seeing and what we&#039;re looking for:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; container list /f&lt;br /&gt;
Executing: container list /full=TRUE&lt;br /&gt;
Num          Total  Oth Chunk          Scsi   Partition&lt;br /&gt;
Creation        System&lt;br /&gt;
Label Type   Size   Ctr Size   Usage   B:ID:L Offset:Size   State   RO Lk Task    Done%  Ent&lt;br /&gt;
Date   Time      Files&lt;br /&gt;
----- ------ ------ --- ------ ------- ------ ------------- ------- -- -- ------- ------ ---&lt;br /&gt;
------ -------- ------&lt;br /&gt;
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
 /dev/aacd0           mirror0          0:00:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
&lt;br /&gt;
 1    Mirror 33.9GB            Open    0:02:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
 /dev/aacd1           mirror1          0:03:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows you the health of the arrays. You&#039;re looking for &#039;&#039;Normal&#039;&#039; under the State column, and the absence of a ! in the Offset:Size field - sometimes you&#039;ll see this:&lt;br /&gt;
 64.0KB!33.9GB &lt;br /&gt;
&lt;br /&gt;
That indicates a problem.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; disk show smart&lt;br /&gt;
Executing: disk show smart&lt;br /&gt;
&lt;br /&gt;
        Smart    Method of         Enable&lt;br /&gt;
        Capable  Informational     Exception  Performance  Error&lt;br /&gt;
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count&lt;br /&gt;
------  -------  ----------------  ---------  -----------  ------&lt;br /&gt;
0:00:0     Y            6             Y           N             0&lt;br /&gt;
0:01:0     Y            6             Y           N             0&lt;br /&gt;
0:02:0     Y            6             Y           N             0&lt;br /&gt;
0:03:0     Y            6             Y           N             0&lt;br /&gt;
0:06:0     N&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows you the SMART report output. You&#039;re looking for nonzero values in the Error Count column.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; task list&lt;br /&gt;
Executing: task list&lt;br /&gt;
&lt;br /&gt;
Controller Tasks&lt;br /&gt;
&lt;br /&gt;
TaskId Function  Done%  Container State Specific1 Specific2&lt;br /&gt;
------ -------- ------- --------- ----- --------- ---------&lt;br /&gt;
&lt;br /&gt;
No tasks currently running on controller&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for the absence of running tasks. A bad sign would be a rebuild or verify running that you didn&#039;t initiate.&lt;br /&gt;
&lt;br /&gt;
With the history output, you&#039;re looking for any anomalies or events since the last time a verify was run. If you see a drive with lots of problems, you may want to take backups before allowing the verify to run since it could replicate errors onto the good drive.&lt;br /&gt;
&lt;br /&gt;
After you see the history output, the script will prompt you to press enter to run the verify. If you&#039;re happy with everything you&#039;re seeing (the mirror is healthy and the history looks good), it&#039;s safe to proceed; otherwise, ^C to exit. After you hit enter it starts the verify and begins tailing the messages log file (so you can easily see when the verify is complete). Here&#039;s what that&#039;ll look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Nov  1 14:38:08 mail /kernel: aac0: **Monitor** Container 1 completed SCRUB task:&lt;br /&gt;
Nov  1 14:46:45 mail /kernel: aac0: **Monitor** Container 0 completed SCRUB task:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
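Since each container logs a &#039;completed SCRUB task&#039; line, you could count them to know when every array is done. A sketch using the log lines above:&lt;br /&gt;
&lt;br /&gt;
```shell
# Sketch: count 'completed SCRUB task' lines in the messages log to tell
# how many containers have finished verifying. Sample lines from above.
log='Nov  1 10:32:46 mail /kernel: aac0: **Monitor** Container 0 started SCRUB task
Nov  1 14:46:45 mail /kernel: aac0: **Monitor** Container 0 completed SCRUB task:
Nov  1 14:38:08 mail /kernel: aac0: **Monitor** Container 1 completed SCRUB task:'

finished=$(printf '%s\n' "$log" | grep -c 'completed SCRUB task')
echo "$finished container(s) finished"
```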
So, putting it all together, after hitting enter to start the verify, you&#039;ll see:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 0&lt;br /&gt;
Executing: container scrub 0&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 1&lt;br /&gt;
Executing: container scrub 1&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
when done run:                                                                       &lt;br /&gt;
&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nov  1 10:32:46 mail /kernel: aac0: **Monitor** Container 0 started SCRUB task&lt;br /&gt;
Nov  1 10:32:47 mail /kernel: aac0: **Monitor** Container 1 started SCRUB task&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the scrubs (verifies) are complete (if the server has multiple logical drives, they run in parallel), exit the tail of the log file (^C) and run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows the diagnostic history; you&#039;re looking for the results of the most recent scrub:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[100]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[101]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[102]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[103]: Container 1 completed SCRUB task:&lt;br /&gt;
[104]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[105]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[106]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[107]: Container 0 completed SCRUB task:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
^C to exit the RAID CLI.&lt;br /&gt;
&lt;br /&gt;
If you see:&lt;br /&gt;
&lt;br /&gt;
 [104]: Mirror Scrub Container:0   ErrorsFound:5&lt;br /&gt;
&lt;br /&gt;
You&#039;ll want to rerun the verify on that container until it shows 0, or perhaps replace the drive. You should be able to tell from the output which drive had the problem.&lt;br /&gt;
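&lt;br /&gt;
To scan a long history for nonzero results rather than reading every line, a sketch (the ErrorsFound:5 line is hypothetical):&lt;br /&gt;
&lt;br /&gt;
```shell
# Sketch: count scrub results in 'diag show history' output that report a
# nonzero ErrorsFound. The ErrorsFound:5 entry is made up for illustration.
hist='[100]: Mirror Scrub Container:1   ErrorsFound:0
[104]: Mirror Scrub Container:0   ErrorsFound:5'

bad=$(printf '%s\n' "$hist" | grep -c 'ErrorsFound:[1-9]')
echo "$bad scrub result(s) with errors"
```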
&lt;br /&gt;
Depending on the size and how busy the drive is, the verify can take anywhere from an hour to the better part of a day.&lt;br /&gt;
&lt;br /&gt;
You will notice that the diagnostic history is not shown on our modern Adaptec cards (i.e. any Adaptec card not in a Dell 2450). The reason is that the history is never cleared, so there&#039;s too much data to show and displaying it simply crashes the CLI, so don&#039;t bother trying. That does make it hard to see whether problems are going on, so you just need to watch the scrub and confirm it goes to 100%. You will also notice that on some servers there&#039;s no tail of the messages file; again, that&#039;s because nothing is logged there about the completion of the scrub. In that case, go into the CLI and run &amp;lt;tt&amp;gt;task list&amp;lt;/tt&amp;gt; periodically to monitor scrub progress.&lt;br /&gt;
&lt;br /&gt;
See [[RAIC_CLI#Adaptec|Adaptec RAID CLI Reference]] for more details on how to use the CLI.&lt;br /&gt;
&lt;br /&gt;
=== DELL (LSI-based) SAS controllers ===&lt;br /&gt;
&lt;br /&gt;
Here&#039;s what the output looks like when running verify.sh on a LSI-based card:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jail2 /mnt/data2# sh /root/verify.sh&lt;br /&gt;
&lt;br /&gt;
Adapter #0&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 0&lt;br /&gt;
Device Id: 0&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018396142&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 0(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DESN&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 1&lt;br /&gt;
Device Id: 1&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018395db2&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 1(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DERV&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 2&lt;br /&gt;
Device Id: 2&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50006eece89&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 2(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300555SS     T2113LM4BFBZ&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 3&lt;br /&gt;
Device Id: 3&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50006eee035&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 3(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300555SS     T2113LM4BGF7&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 4&lt;br /&gt;
Device Id: 4&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50004bd7ea5&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 4(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300656SS     HS093QP0G8SW&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 5&lt;br /&gt;
Device Id: 5&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e01f1c4112&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 5(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MBA3300RC       D306BJ15P9201W06&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 0 (Target Id: 0)&lt;br /&gt;
Name:&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:139392MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 1 (Target Id: 1)&lt;br /&gt;
Name:MIRROR1&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 2 (Target Id: 2)&lt;br /&gt;
Name:MIRROR2&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
Battery FRU     : N/A&lt;br /&gt;
Battery Warning                  : Enabled&lt;br /&gt;
Memory Correctable Errors   : 0&lt;br /&gt;
Memory Uncorrectable Errors : 0&lt;br /&gt;
BBU             : Present&lt;br /&gt;
BBU                             : Yes&lt;br /&gt;
Cache When BBU Bad               : Disabled&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before pressing enter, here&#039;s what we&#039;re looking for:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 0&lt;br /&gt;
Device Id: 0&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018396142&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 0(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DESN&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This output is shown for each physical drive in the system. Confirm that its Firmware state is &#039;&#039;Online&#039;&#039;, and that Media Error Count, Other Error Count, and Predictive Failure Count are all zero (or near zero).&lt;br /&gt;
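&lt;br /&gt;
With many drives it can help to scan the stanzas mechanically; a sketch over a trimmed-down sample in the format above (slot 1&#039;s error count is made up for illustration):&lt;br /&gt;
&lt;br /&gt;
```shell
# Sketch: walk per-drive stanzas and report any drive that is not Online
# or has a nonzero error counter. Slot 1's count of 12 is hypothetical.
pd='Slot Number: 0
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Firmware state: Online
Slot Number: 1
Media Error Count: 12
Other Error Count: 0
Predictive Failure Count: 0
Firmware state: Online'

report=$(printf '%s\n' "$pd" | awk -F': ' '
    /^Slot Number/                      { slot = $2 }
    $1 ~ /Count$/ && $2 + 0 > 0        { printf "slot %s: %s = %s\n", slot, $1, $2 }
    /^Firmware state/ && $2 != "Online" { printf "slot %s: state %s\n", slot, $2 }')
echo "${report:-all drives healthy}"
```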
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 1 (Target Id: 1)&lt;br /&gt;
Name:MIRROR1&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the output for each logical drive. We&#039;re looking for a State of &#039;&#039;Optimal&#039;&#039;. Also confirm &amp;lt;tt&amp;gt;Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&amp;lt;/tt&amp;gt;.&lt;br /&gt;
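&lt;br /&gt;
The same check as a sketch (the Degraded entry is hypothetical; healthy output shows Optimal everywhere):&lt;br /&gt;
&lt;br /&gt;
```shell
# Sketch: confirm every virtual drive reports State 'Optimal'.
# The Degraded entry below is made up for illustration.
vd='Virtual Disk: 0 (Target Id: 0)
State: Optimal
Virtual Disk: 1 (Target Id: 1)
State: Degraded'

not_optimal=$(printf '%s\n' "$vd" | awk '
    /^Virtual Disk:/            { vd = $3 }
    /^State:/ && $2 != "Optimal" { print "virtual disk " vd ": " $2 }')
echo "${not_optimal:-all virtual drives Optimal}"
```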
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
Battery FRU     : N/A&lt;br /&gt;
Battery Warning                  : Enabled&lt;br /&gt;
Memory Correctable Errors   : 0&lt;br /&gt;
Memory Uncorrectable Errors : 0&lt;br /&gt;
BBU             : Present&lt;br /&gt;
BBU                             : Yes&lt;br /&gt;
Cache When BBU Bad               : Disabled&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the battery is present and error-free.&lt;br /&gt;
&lt;br /&gt;
If all that checks out, you&#039;re ready to proceed with the verify. After pressing enter, the verify is started and here&#039;s what you see:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Start Check Consistency on Virtual Drive 0 (target id: 0) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
Start Check Consistency on Virtual Drive 1 (target id: 1) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
Start Check Consistency on Virtual Drive 2 (target id: 2) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
  Check Consistency&lt;br /&gt;
&lt;br /&gt;
 Progress of Virtual Drives...&lt;br /&gt;
&lt;br /&gt;
  Virtual Drive #              Percent Complete                       Time Elps&lt;br /&gt;
          0         °°°°°°°°°°°°°°°°°°°°°°°00 %°°°°°°°°°°°°°°°°°°°°°°° 00:00:03&lt;br /&gt;
          1         °°°°°°°°°°°°°°°°°°°°°°°00 %°°°°°°°°°°°°°°°°°°°°°°° 00:00:02&lt;br /&gt;
          2         °°°°°°°°°°°°°°°°°°°°°°°00 %°°°°°°°°°°°°°°°°°°°°°°° 00:00:01&lt;br /&gt;
&lt;br /&gt;
    Press &amp;lt;ESC&amp;gt; key to quit...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The progress for each drive is displayed until all drives have completed the verify. We just want to make sure that each drive goes to completion. No followup is needed...though there probably is a log or history where we can get more info.&lt;br /&gt;
&lt;br /&gt;
You will notice that jail7 does not run a verify; that&#039;s on purpose. The last time we tried, it crashed the system, so the verify must be run from the BIOS instead (taking the system offline for a couple of hours).&lt;br /&gt;
&lt;br /&gt;
See [[RAIC_CLI#LSI|LSI RAID CLI Reference]] for more details on how to use the CLI.&lt;br /&gt;
&lt;br /&gt;
=== LSI-based controllers (megaraid) ===&lt;br /&gt;
&lt;br /&gt;
There is a CLI for these cards, but it&#039;s easier to use the curses GUI app, megamgr.&lt;br /&gt;
&lt;br /&gt;
Currently only on these servers: virt15, virt16, and firewall2&lt;br /&gt;
&lt;br /&gt;
To run:&lt;br /&gt;
&lt;br /&gt;
 # cd /usr/local/sbin/; megamgr&lt;br /&gt;
&lt;br /&gt;
Main menu:&lt;br /&gt;
&amp;lt;pre&amp;gt;▓┌──Management Menu──┐▓&lt;br /&gt;
▓│ Configure         │▓&lt;br /&gt;
▓│ Initialize        │▓&lt;br /&gt;
▓│ Objects           │▓&lt;br /&gt;
▓│ Rebuild           │▓&lt;br /&gt;
▓│ Check Consistency │▓&lt;br /&gt;
▓│ Advanced Menu     │▓&lt;br /&gt;
▓└───────────────────┘▓&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before you check consistency, make sure the arrays are healthy.&lt;br /&gt;
&lt;br /&gt;
Objects -&amp;gt; Physical Drive&lt;br /&gt;
&lt;br /&gt;
Then look to make sure they&#039;re all &amp;lt;tt&amp;gt;ONLIN&amp;lt;/tt&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓┌────────────Objects - PHYSICAL DRIVE SELECTION MENU─────────────┐▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                                                                │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                         Channel-1                              │▓▓▓▓▓▓&lt;br /&gt;
▓┌──Mana│                     ID ╔══════════════╗x                       │▓▓▓▓▓▓&lt;br /&gt;
▓│ Confi│                       0║* ONLIN A01-01║                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Initi│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Objec│                       1║* ONLIN A01-02║                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Rebui│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Check│                       2║* ONLIN A02-01║                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Advan│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓└──────│                       3║* ONLIN A02-02║                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                       4║* ONLIN A03-01║                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                       5║* ONLIN A03-02║■                       │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                       6║*             ║                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╚══════════════╝x                       │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓└────────────────────────────────────────────────────────────────┘▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓┌──────────────────────────────────────────────────────────────┐▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│Ch-1 ID-5  DISK      140013MB  SEAGATE  ST3146707LC      0003 │▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓└──────────────────────────────────────────────────────────────┘▓▓▓▓▓▓▓▓&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once that&#039;s done, hit escape once, then the back arrow to move back to the Objects menu.&lt;br /&gt;
Then select Objects -&amp;gt; Logical Drive -&amp;gt; Logical Drive 1 -&amp;gt; Check Consistency -&amp;gt; YES&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┌─Logical Drives(02)─┐▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│ Logical Drive 1    │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓┌────Objects───│ Logical Drive 2    │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓┌──Managemen│ Adapter      └────────────────────┘▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Configure │ Logical Drive  │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Initialize│ Physical Drive │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Objects   │ Channel        │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Rebuild   └────────────────┘▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Check Consistency │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Advanced Menu     │▓▓▓▓▓▓▓┌────Logical Drive 1─────┐▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓└───────────────────┘▓▓▓▓▓▓▓│ Initialize    ┌─Check Consistency-1  ?─┐▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│ Check Consiste│   YES                  │▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│ View/Update Pa│   NO                   │▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓└───────────────└────────────────────────┘▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┌────────────────┐▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│Select YES Or NO│▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓└────────────────┘▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then watch the progress. When done, escape back to Logical Drive and repeat for Logical Drive 2. If you ^C or accidentally escape out, you can get back in by running the same commands and watch the progress again (the check won&#039;t restart).&lt;br /&gt;
&lt;br /&gt;
You can exit megamgr by escaping out or with ^C.&lt;br /&gt;
&lt;br /&gt;
=== 3ware ===&lt;br /&gt;
&lt;br /&gt;
We are using 3ware controllers on backup1 &amp;amp; backup2. Running the verify script will give you different output based on the type of controller:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
backup2 /d2# sh /root/verify.sh&lt;br /&gt;
Controller: c0&lt;br /&gt;
-------------&lt;br /&gt;
Driver:   1.50.01.002&lt;br /&gt;
Model:    7500-8&lt;br /&gt;
FW:       FE7X 1.05.00.068&lt;br /&gt;
BIOS:     BE7X 1.08.00.048&lt;br /&gt;
Monitor:  ME7X 1.01.00.040&lt;br /&gt;
Serial #: F11605A3180172&lt;br /&gt;
PCB:      Rev3&lt;br /&gt;
PCHIP:    1.30-33&lt;br /&gt;
ACHIP:    3.20&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# of units: 3&lt;br /&gt;
        Unit 0: JBOD 186.31 GB ( 390721968 blocks): OK&lt;br /&gt;
        Unit 1: RAID 5 465.77 GB ( 976790016 blocks): DEGRADED&lt;br /&gt;
        Unit 5: RAID 5 698.65 GB ( 1465185024 blocks): DEGRADED&lt;br /&gt;
&lt;br /&gt;
# of ports: 8&lt;br /&gt;
        Port 0: WDC WD2000JB-00KFA0 WD-WCAMT1451690 186.31 GB (390721968 blocks): OK(unit 0)&lt;br /&gt;
        Port 1: WDC WD2500JB-00GVC0 WD-WCAL78219488 232.88 GB (488397168 blocks): OK(unit 1)&lt;br /&gt;
        Port 2: WDC WD2000  0.00 MB (0 blocks): OK(NO UNIT)&lt;br /&gt;
        Port 3: WDC WD2500JB-00GVC0 WD-WMAL73882417 232.88 GB (488397168 blocks): OK(unit 1)&lt;br /&gt;
        Port 4: WDC WD2000  0.00 MB (0 blocks): OK(NO UNIT)&lt;br /&gt;
        Port 5: WDC WD2500JB-00GVA0 WD-WMAL71338097 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
        Port 6: WDC WD2500JB-32EVA0 WD-WMAEH1301595 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
        Port 7: WDC WD2500JB-00GVC0 WD-WCAL78165566 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
Controller: c1&lt;br /&gt;
-------------&lt;br /&gt;
Driver:   1.50.01.002&lt;br /&gt;
Model:    7500-8&lt;br /&gt;
FW:       FE7X 1.05.00.068&lt;br /&gt;
BIOS:     BE7X 1.08.00.048&lt;br /&gt;
Monitor:  ME7X 1.01.00.040&lt;br /&gt;
Serial #: F11605A3180167&lt;br /&gt;
PCB:      Rev3&lt;br /&gt;
PCHIP:    1.30-33&lt;br /&gt;
ACHIP:    3.20&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# of units: 2&lt;br /&gt;
        Unit 0: RAID 5 698.65 GB ( 1465185024 blocks): OK&lt;br /&gt;
        Unit 4: RAID 5 698.65 GB ( 1465185024 blocks): OK&lt;br /&gt;
&lt;br /&gt;
# of ports: 8&lt;br /&gt;
        Port 0: WDC WD2500JB-00GVA0 WD-WMAL71301258 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 1: WDC WD2500JB-00GVA0 WD-WMAL71322705 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 2: WDC WD2500JB-00GVA0 WD-WMAL71945050 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 3: WDC WD2500JB-00GVA0 WD-WMAL71316201 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 4: WDC WD2500JB-00GVC0 WD-WCAL78323749 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
        Port 5: WDC WD3200AAJB-00J3A0 WD-WCAV2V689068 298.09 GB (625142448 blocks): OK(unit 4)&lt;br /&gt;
        Port 6: WDC WD2500JB-00GVC0 WD-WCAL78234420 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
        Port 7: WDC WD2500JB-00GVC0 WD-WCAL78592213 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
backup2 /d2#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On backup2 the script only reports status; it does not start a verify. Just confirm everything shows OK (the DEGRADED units above are what you don&#039;t want to see).&lt;br /&gt;
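&lt;br /&gt;
A sketch for flagging unhealthy units mechanically, using unit lines copied from the output above:&lt;br /&gt;
&lt;br /&gt;
```shell
# Sketch: flag 3ware units whose status is not OK (e.g. DEGRADED).
# Sample unit lines are copied from the backup2 output above.
units='        Unit 0: JBOD 186.31 GB ( 390721968 blocks): OK
        Unit 1: RAID 5 465.77 GB ( 976790016 blocks): DEGRADED'

degraded=$(printf '%s\n' "$units" | awk -F': ' '
    /Unit [0-9]/ && $NF != "OK" { gsub(/^ +/, "", $1); print $1 ": " $NF }')
echo "${degraded:-all units OK}"
```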
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@backup3 ~]# sh /root/verify.sh&lt;br /&gt;
/c2 Driver Version = 1.26.02.002&lt;br /&gt;
/c2 Model = 8006-2LP&lt;br /&gt;
/c2 Available Memory = 512KB&lt;br /&gt;
/c2 Firmware Version = FE8S 1.05.00.068&lt;br /&gt;
/c2 Bios Version = BE7X 1.08.00.048&lt;br /&gt;
/c2 Boot Loader Version = ME7X 1.01.00.040&lt;br /&gt;
/c2 Serial Number = L018501C6481395&lt;br /&gt;
/c2 PCB Version = Rev5&lt;br /&gt;
/c2 PCHIP Version = 1.30-66&lt;br /&gt;
/c2 ACHIP Version = 3.20&lt;br /&gt;
/c2 Total Optimal Units = 1&lt;br /&gt;
/c2 Not Optimal Units = 0&lt;br /&gt;
&lt;br /&gt;
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u1    RAID-1    OK             -       -       -       931.512   ON     -&lt;br /&gt;
&lt;br /&gt;
Port   Status           Unit   Size        Blocks        Serial&lt;br /&gt;
---------------------------------------------------------------&lt;br /&gt;
p0     OK               u1     931.51 GB   1953525168    WD-WMAW31148820&lt;br /&gt;
p1     OK               u1     931.51 GB   1953525168    WD-WCATR0277515&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ctl  Date                        Severity  Alarm Message&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Sending start verify message to /c2/u1 ... Done.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when done run:&lt;br /&gt;
tw_cli /c2 show alarms&lt;br /&gt;
&lt;br /&gt;
[root@backup3 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On backup3 the script automatically starts the verify; when it&#039;s done, run &amp;lt;tt&amp;gt;tw_cli /c2 show alarms&amp;lt;/tt&amp;gt; as instructed to see the results.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@backup1 /data/deprecated]# sh /root/verify.sh&lt;br /&gt;
/c0 Driver Version = 2.26.02.010&lt;br /&gt;
/c0 Model = 9650SE-8LPML&lt;br /&gt;
/c0 Available Memory = 224MB&lt;br /&gt;
/c0 Firmware Version = FE9X 4.06.00.004&lt;br /&gt;
/c0 Bios Version = BE9X 4.05.00.015&lt;br /&gt;
/c0 Boot Loader Version = BL9X 3.08.00.001&lt;br /&gt;
/c0 Serial Number = L326025A8270177&lt;br /&gt;
/c0 PCB Version = Rev 032&lt;br /&gt;
/c0 PCHIP Version = 2.00&lt;br /&gt;
/c0 ACHIP Version = 1.90&lt;br /&gt;
/c0 Number of Ports = 8&lt;br /&gt;
/c0 Number of Drives = 6&lt;br /&gt;
/c0 Number of Units = 1&lt;br /&gt;
/c0 Total Optimal Units = 1&lt;br /&gt;
/c0 Not Optimal Units = 0&lt;br /&gt;
/c0 JBOD Export Policy = off&lt;br /&gt;
/c0 Disk Spinup Policy = 1&lt;br /&gt;
/c0 Spinup Stagger Time Policy (sec) = 1&lt;br /&gt;
/c0 Auto-Carving Policy = off&lt;br /&gt;
/c0 Auto-Carving Size = 2048 GB&lt;br /&gt;
/c0 Auto-Rebuild Policy = on&lt;br /&gt;
/c0 Controller Bus Type = PCIe&lt;br /&gt;
/c0 Controller Bus Width = 1 lane&lt;br /&gt;
/c0 Controller Bus Speed = 2.5 Gbps/lane&lt;br /&gt;
&lt;br /&gt;
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u0    RAID-5    OK             -       -       64K     4656.56   ON     ON&lt;br /&gt;
&lt;br /&gt;
Port   Status           Unit   Size        Blocks        Serial&lt;br /&gt;
---------------------------------------------------------------&lt;br /&gt;
p0     OK               u0     931.51 GB   1953525168    9QJ1Y017&lt;br /&gt;
p1     DEVICE-ERROR     u0     931.51 GB   1953525168    9QJ1ZN07&lt;br /&gt;
p2     OK               u0     931.51 GB   1953525168    9QJ2XK1R&lt;br /&gt;
p3     OK               u0     931.51 GB   1953525168    9QJ2010B&lt;br /&gt;
p4     OK               u0     1.36 TB     2930277168    6XW0L36T&lt;br /&gt;
p5     OK               u0     931.51 GB   1953525168    WD-WMATV2444836&lt;br /&gt;
p6     NOT-PRESENT      -      -           -             -&lt;br /&gt;
p7     NOT-PRESENT      -      -           -             -&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ctl  Date                        Severity  Alarm Message&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
c0   [Sat May 12 11:27:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x6AE571C&lt;br /&gt;
c0   [Sat May 12 19:16:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x40E62A23&lt;br /&gt;
c0   [Sat May 12 21:40:56 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon May 14 00:53:53 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFA7&lt;br /&gt;
c0   [Mon May 14 00:58:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAA&lt;br /&gt;
c0   [Mon May 14 04:35:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x8FEF2CF&lt;br /&gt;
c0   [Mon May 14 04:38:22 2012]  WARNING   Sector repair completed: port=0, LBA=0x8FEF2D1&lt;br /&gt;
c0   [Tue May 15 22:53:46 2012]  WARNING   Sector repair completed: port=0, LBA=0x13C2622&lt;br /&gt;
c0   [Wed May 16 00:39:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A67F&lt;br /&gt;
c0   [Wed May 16 00:39:37 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A685&lt;br /&gt;
c0   [Wed May 16 00:47:18 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A687&lt;br /&gt;
c0   [Sat May 19 00:01:44 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat May 19 04:46:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A68E&lt;br /&gt;
c0   [Sat May 19 13:37:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAC&lt;br /&gt;
c0   [Sat May 19 13:37:28 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAE&lt;br /&gt;
c0   [Sat May 19 13:37:47 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFB1&lt;br /&gt;
c0   [Sat May 19 13:38:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFB3&lt;br /&gt;
c0   [Sat May 19 21:47:45 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed May 23 12:21:41 2012]  INFO      Cache synchronization completed: unit=0&lt;br /&gt;
c0   [Fri May 25 00:08:19 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA76C&lt;br /&gt;
c0   [Fri May 25 00:08:34 2012]  WARNING   Sector repair completed: port=0, LBA=0x12E4901&lt;br /&gt;
c0   [Fri May 25 00:09:33 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA773&lt;br /&gt;
c0   [Fri May 25 00:39:12 2012]  WARNING   Sector repair completed: port=0, LBA=0x42C597B&lt;br /&gt;
c0   [Sat May 26 00:01:45 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat May 26 00:42:05 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1AC&lt;br /&gt;
c0   [Sat May 26 00:51:43 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1AE&lt;br /&gt;
c0   [Sat May 26 01:54:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x2F0D302&lt;br /&gt;
c0   [Sat May 26 02:06:38 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA777&lt;br /&gt;
c0   [Sat May 26 02:07:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x12E48FE&lt;br /&gt;
c0   [Sat May 26 04:20:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x2F0D306&lt;br /&gt;
c0   [Sat May 26 04:32:58 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1B1&lt;br /&gt;
c0   [Sat May 26 04:33:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1B3&lt;br /&gt;
c0   [Sat May 26 04:33:44 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1BA&lt;br /&gt;
c0   [Sat May 26 05:24:07 2012]  WARNING   Sector repair completed: port=1, LBA=0x3F83862&lt;br /&gt;
c0   [Sat May 26 05:25:09 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 06:08:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x4CDC6A2&lt;br /&gt;
c0   [Sat May 26 09:49:35 2012]  WARNING   Sector repair completed: port=1, LBA=0x6CACD4A&lt;br /&gt;
c0   [Sat May 26 18:10:44 2012]  WARNING   Sector repair completed: port=1, LBA=0x18F425EA&lt;br /&gt;
c0   [Sat May 26 19:45:40 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:22:52 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:15 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:22 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:35 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:41 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:49 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:57 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:24:02 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:54:41 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 22:00:30 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Jun  2 00:01:43 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun  2 00:30:17 2012]  WARNING   Sector repair completed: port=0, LBA=0x2B911E4&lt;br /&gt;
c0   [Sat Jun  2 00:50:57 2012]  WARNING   Sector repair completed: port=0, LBA=0x5A807CA6&lt;br /&gt;
c0   [Sat Jun  2 04:13:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x2D18291&lt;br /&gt;
c0   [Sat Jun  2 04:13:35 2012]  WARNING   Sector repair completed: port=0, LBA=0x2D1829F&lt;br /&gt;
c0   [Sat Jun  2 21:48:02 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Jun  4 04:40:34 2012]  WARNING   Sector repair completed: port=1, LBA=0x4AF8098F&lt;br /&gt;
c0   [Tue Jun  5 00:28:19 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5CD&lt;br /&gt;
c0   [Tue Jun  5 00:33:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5CF&lt;br /&gt;
c0   [Thu Jun  7 00:34:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B5F&lt;br /&gt;
c0   [Thu Jun  7 00:38:50 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B61&lt;br /&gt;
c0   [Fri Jun  8 00:07:13 2012]  WARNING   Sector repair completed: port=0, LBA=0xC131F6B&lt;br /&gt;
c0   [Sat Jun  9 00:01:41 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun  9 00:29:11 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C7614D&lt;br /&gt;
c0   [Sat Jun  9 00:38:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C76152&lt;br /&gt;
c0   [Sat Jun  9 04:02:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5D1&lt;br /&gt;
c0   [Sat Jun  9 04:02:52 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5D3&lt;br /&gt;
c0   [Sat Jun  9 04:07:32 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E12&lt;br /&gt;
c0   [Sat Jun  9 04:07:57 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E15&lt;br /&gt;
c0   [Sat Jun  9 04:08:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E17&lt;br /&gt;
c0   [Sat Jun  9 04:08:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E19&lt;br /&gt;
c0   [Sat Jun  9 04:15:04 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B64&lt;br /&gt;
c0   [Sat Jun  9 04:15:26 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B66&lt;br /&gt;
c0   [Sat Jun  9 04:15:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B68&lt;br /&gt;
c0   [Sat Jun  9 04:15:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B6C&lt;br /&gt;
c0   [Sat Jun  9 04:16:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B6E&lt;br /&gt;
c0   [Sat Jun  9 21:48:52 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Jun 14 00:40:10 2012]  WARNING   Sector repair completed: port=0, LBA=0x334F14B&lt;br /&gt;
c0   [Sat Jun 16 00:01:38 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 16 21:16:19 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Jun 19 02:03:43 2012]  WARNING   Sector repair completed: port=1, LBA=0xFE41EAD&lt;br /&gt;
c0   [Wed Jun 20 02:30:02 2012]  WARNING   Sector repair completed: port=1, LBA=0xD99145C&lt;br /&gt;
c0   [Sat Jun 23 00:01:36 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 23 04:27:04 2012]  WARNING   Sector repair completed: port=1, LBA=0x2FAD311&lt;br /&gt;
c0   [Sat Jun 23 06:52:38 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC8D&lt;br /&gt;
c0   [Sat Jun 23 06:53:03 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC91&lt;br /&gt;
c0   [Sat Jun 23 06:53:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC94&lt;br /&gt;
c0   [Sat Jun 23 17:00:22 2012]  WARNING   Sector repair completed: port=1, LBA=0xF9AC7C9&lt;br /&gt;
c0   [Sat Jun 23 21:15:19 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Jun 30 00:01:34 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 30 05:24:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FAA9E7&lt;br /&gt;
c0   [Sat Jun 30 14:49:39 2012]  WARNING   Sector repair completed: port=1, LBA=0x869931C&lt;br /&gt;
c0   [Sat Jun 30 21:31:05 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Jul  3 03:40:25 2012]  WARNING   Sector repair completed: port=1, LBA=0xD36C7F7&lt;br /&gt;
c0   [Fri Jul  6 02:50:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x3562470&lt;br /&gt;
c0   [Fri Jul  6 22:18:26 2012]  WARNING   Sector repair completed: port=1, LBA=0x3563173&lt;br /&gt;
c0   [Sat Jul  7 00:01:31 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul  7 00:50:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x76EE88&lt;br /&gt;
c0   [Sat Jul  7 00:50:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x76EE8F&lt;br /&gt;
c0   [Sat Jul  7 21:39:36 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sun Jul  8 02:51:05 2012]  WARNING   Sector repair completed: port=0, LBA=0x67759D&lt;br /&gt;
c0   [Sun Jul  8 02:53:55 2012]  WARNING   Sector repair completed: port=0, LBA=0x67759B&lt;br /&gt;
c0   [Tue Jul 10 16:17:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x15C8C695&lt;br /&gt;
c0   [Wed Jul 11 22:51:22 2012]  WARNING   Sector repair completed: port=1, LBA=0x355BBD0&lt;br /&gt;
c0   [Sat Jul 14 00:01:28 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 14 01:33:40 2012]  WARNING   Sector repair completed: port=1, LBA=0x1333BCF4&lt;br /&gt;
c0   [Sat Jul 14 03:36:23 2012]  WARNING   Sector repair completed: port=1, LBA=0x2174773&lt;br /&gt;
c0   [Sat Jul 14 11:26:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x7429AB7&lt;br /&gt;
c0   [Sat Jul 14 16:53:50 2012]  WARNING   Sector repair completed: port=1, LBA=0xA17EB3F&lt;br /&gt;
c0   [Sat Jul 14 21:19:25 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Jul 18 05:08:47 2012]  WARNING   Sector repair completed: port=1, LBA=0x17D62EDC&lt;br /&gt;
c0   [Wed Jul 18 05:14:15 2012]  WARNING   Sector repair completed: port=1, LBA=0x17D62EE1&lt;br /&gt;
c0   [Thu Jul 19 03:24:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x7733C3D&lt;br /&gt;
c0   [Thu Jul 19 03:25:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x773CEA5&lt;br /&gt;
c0   [Thu Jul 19 03:28:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x7733C42&lt;br /&gt;
c0   [Thu Jul 19 03:28:41 2012]  WARNING   Sector repair completed: port=0, LBA=0x773CEAF&lt;br /&gt;
c0   [Sat Jul 21 00:01:26 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 21 03:07:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC6936&lt;br /&gt;
c0   [Sat Jul 21 03:07:52 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC6938&lt;br /&gt;
c0   [Sat Jul 21 03:08:11 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC693A&lt;br /&gt;
c0   [Sat Jul 21 16:43:56 2012]  WARNING   Sector repair completed: port=0, LBA=0xD04C914&lt;br /&gt;
c0   [Sat Jul 21 16:45:31 2012]  WARNING   Sector repair completed: port=1, LBA=0xD456973&lt;br /&gt;
c0   [Sat Jul 21 21:14:29 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Jul 25 03:37:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x1F8E6C43&lt;br /&gt;
c0   [Sat Jul 28 00:01:24 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 28 01:45:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x11584AD&lt;br /&gt;
c0   [Sat Jul 28 18:54:25 2012]  WARNING   Sector repair completed: port=1, LBA=0x447C3E6C&lt;br /&gt;
c0   [Sat Jul 28 21:13:46 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug  1 03:20:11 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FEF&lt;br /&gt;
c0   [Fri Aug  3 00:50:03 2012]  WARNING   Sector repair completed: port=0, LBA=0xCED0ACA&lt;br /&gt;
c0   [Sat Aug  4 00:01:22 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug  4 00:52:51 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF3&lt;br /&gt;
c0   [Sat Aug  4 00:53:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF5&lt;br /&gt;
c0   [Sat Aug  4 00:53:33 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF7&lt;br /&gt;
c0   [Sat Aug  4 00:53:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF9&lt;br /&gt;
c0   [Sat Aug  4 00:54:00 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFB&lt;br /&gt;
c0   [Sat Aug  4 00:54:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFD&lt;br /&gt;
c0   [Sat Aug  4 00:54:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFF&lt;br /&gt;
c0   [Sat Aug  4 04:43:12 2012]  WARNING   Sector repair completed: port=1, LBA=0x16974289&lt;br /&gt;
c0   [Sat Aug  4 04:58:17 2012]  WARNING   Sector repair completed: port=1, LBA=0x1697428E&lt;br /&gt;
c0   [Sat Aug  4 20:54:53 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug  8 03:21:55 2012]  ERROR     Drive timeout detected: port=1&lt;br /&gt;
c0   [Wed Aug  8 15:31:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x1A366CD3&lt;br /&gt;
c0   [Sat Aug 11 00:01:21 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 11 20:40:51 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Aug 16 05:10:55 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C22593&lt;br /&gt;
c0   [Sat Aug 18 00:01:18 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 18 03:00:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C225A5&lt;br /&gt;
c0   [Sat Aug 18 03:43:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x23EE91E&lt;br /&gt;
c0   [Sat Aug 18 03:43:23 2012]  WARNING   Sector repair completed: port=1, LBA=0x23EE920&lt;br /&gt;
c0   [Sat Aug 18 17:00:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x137D066A&lt;br /&gt;
c0   [Sat Aug 18 17:00:29 2012]  WARNING   Sector repair completed: port=1, LBA=0x137D066D&lt;br /&gt;
c0   [Sat Aug 18 21:13:01 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug 22 01:36:08 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A0F&lt;br /&gt;
c0   [Wed Aug 22 01:37:42 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A13&lt;br /&gt;
c0   [Fri Aug 24 04:01:36 2012]  WARNING   Sector repair completed: port=1, LBA=0x55C1A5DF&lt;br /&gt;
c0   [Fri Aug 24 05:02:06 2012]  WARNING   Sector repair completed: port=1, LBA=0xCE3378A&lt;br /&gt;
c0   [Sat Aug 25 00:01:17 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 25 00:31:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x50F65D&lt;br /&gt;
c0   [Sat Aug 25 00:39:52 2012]  WARNING   Sector repair completed: port=0, LBA=0x678FF4&lt;br /&gt;
c0   [Sat Aug 25 03:43:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A15&lt;br /&gt;
c0   [Sat Aug 25 03:43:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A19&lt;br /&gt;
c0   [Sat Aug 25 03:43:58 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A1B&lt;br /&gt;
c0   [Sat Aug 25 03:44:30 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A21&lt;br /&gt;
c0   [Sat Aug 25 20:58:14 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug 29 04:57:15 2012]  WARNING   Sector repair completed: port=1, LBA=0xF3957EB&lt;br /&gt;
c0   [Sat Sep  1 00:01:15 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep  1 03:21:52 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC86&lt;br /&gt;
c0   [Sat Sep  1 03:22:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC88&lt;br /&gt;
c0   [Sat Sep  1 03:22:34 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC8A&lt;br /&gt;
c0   [Sat Sep  1 03:22:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC8C&lt;br /&gt;
c0   [Sat Sep  1 17:17:22 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD1&lt;br /&gt;
c0   [Sat Sep  1 17:17:45 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD3&lt;br /&gt;
c0   [Sat Sep  1 17:18:04 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD5&lt;br /&gt;
c0   [Sat Sep  1 21:36:56 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Sep  6 00:07:30 2012]  WARNING   Sector repair completed: port=0, LBA=0xDA3C64B&lt;br /&gt;
c0   [Thu Sep  6 00:32:56 2012]  WARNING   Sector repair completed: port=1, LBA=0x6BBA816&lt;br /&gt;
c0   [Sat Sep  8 00:01:13 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep  8 00:09:56 2012]  WARNING   Sector repair completed: port=0, LBA=0xDEBC958&lt;br /&gt;
c0   [Sat Sep  8 04:38:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x38D254F&lt;br /&gt;
c0   [Sat Sep  8 20:44:50 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Sep 10 01:26:34 2012]  WARNING   Sector repair completed: port=1, LBA=0xFFD8D5E&lt;br /&gt;
c0   [Wed Sep 12 00:33:48 2012]  WARNING   Sector repair completed: port=1, LBA=0xE8DB928&lt;br /&gt;
c0   [Wed Sep 12 00:36:33 2012]  WARNING   Sector repair completed: port=1, LBA=0x6D49411&lt;br /&gt;
c0   [Fri Sep 14 01:59:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x1467F1C&lt;br /&gt;
c0   [Fri Sep 14 02:08:27 2012]  WARNING   Sector repair completed: port=1, LBA=0x14C8ABD&lt;br /&gt;
c0   [Fri Sep 14 03:54:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x1580C915&lt;br /&gt;
c0   [Sat Sep 15 00:01:11 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 15 02:38:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178973&lt;br /&gt;
c0   [Sat Sep 15 02:59:02 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178975&lt;br /&gt;
c0   [Sat Sep 15 04:47:08 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA0356&lt;br /&gt;
c0   [Sat Sep 15 04:47:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA0359&lt;br /&gt;
c0   [Sat Sep 15 10:41:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x6DFD1EC&lt;br /&gt;
c0   [Sat Sep 15 13:25:23 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD100&lt;br /&gt;
c0   [Sat Sep 15 13:25:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD104&lt;br /&gt;
c0   [Sat Sep 15 13:25:54 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD106&lt;br /&gt;
c0   [Sat Sep 15 17:10:50 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178977&lt;br /&gt;
c0   [Sat Sep 15 20:59:57 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Sep 18 01:17:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x803B05B&lt;br /&gt;
c0   [Sat Sep 22 00:01:10 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 22 20:54:31 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Sep 25 01:56:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x26E3909&lt;br /&gt;
c0   [Sat Sep 29 00:01:08 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 29 02:04:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x146AC03&lt;br /&gt;
c0   [Sat Sep 29 10:58:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x6D4EB0E&lt;br /&gt;
c0   [Sat Sep 29 10:59:02 2012]  WARNING   Sector repair completed: port=0, LBA=0x6D4EB14&lt;br /&gt;
c0   [Sat Sep 29 11:22:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x6F79623&lt;br /&gt;
c0   [Sat Sep 29 13:50:48 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D65E&lt;br /&gt;
c0   [Sat Sep 29 13:51:11 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D661&lt;br /&gt;
c0   [Sat Sep 29 13:51:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D663&lt;br /&gt;
c0   [Sat Sep 29 20:57:34 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Oct  1 04:47:24 2012]  WARNING   Sector repair completed: port=0, LBA=0xC5BC6F2&lt;br /&gt;
c0   [Tue Oct  2 02:00:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x1547667&lt;br /&gt;
c0   [Tue Oct  2 02:01:56 2012]  WARNING   Sector repair completed: port=0, LBA=0x154766F&lt;br /&gt;
c0   [Tue Oct  2 05:02:31 2012]  WARNING   Sector repair completed: port=1, LBA=0xD67D054&lt;br /&gt;
c0   [Tue Oct  2 05:04:14 2012]  WARNING   Sector repair completed: port=1, LBA=0xD67D056&lt;br /&gt;
c0   [Wed Oct  3 01:22:12 2012]  WARNING   Sector repair completed: port=1, LBA=0x12AAF8CA&lt;br /&gt;
c0   [Thu Oct  4 04:29:22 2012]  WARNING   Sector repair completed: port=0, LBA=0x13E6F992&lt;br /&gt;
c0   [Thu Oct  4 05:10:51 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C252A4&lt;br /&gt;
c0   [Sat Oct  6 00:01:07 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct  6 19:41:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x5A5C3AE8&lt;br /&gt;
c0   [Sat Oct  6 21:01:05 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Oct  8 00:32:06 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C60D3E&lt;br /&gt;
c0   [Tue Oct  9 03:51:03 2012]  WARNING   Sector repair completed: port=1, LBA=0x89B5EC9&lt;br /&gt;
c0   [Thu Oct 11 04:21:17 2012]  WARNING   Sector repair completed: port=1, LBA=0x13F85833&lt;br /&gt;
c0   [Sat Oct 13 00:01:05 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 13 05:12:40 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA5134&lt;br /&gt;
c0   [Sat Oct 13 21:08:35 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Oct 16 03:53:50 2012]  WARNING   Sector repair completed: port=1, LBA=0x148AA1BD&lt;br /&gt;
c0   [Thu Oct 18 03:20:30 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C8DABCB&lt;br /&gt;
c0   [Thu Oct 18 04:52:50 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879057&lt;br /&gt;
c0   [Sat Oct 20 00:01:04 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 20 02:19:25 2012]  WARNING   Sector repair completed: port=1, LBA=0x174B012&lt;br /&gt;
c0   [Sat Oct 20 03:41:38 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D93B&lt;br /&gt;
c0   [Sat Oct 20 03:42:01 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D93D&lt;br /&gt;
c0   [Sat Oct 20 03:42:40 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D940&lt;br /&gt;
c0   [Sat Oct 20 03:42:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D942&lt;br /&gt;
c0   [Sat Oct 20 03:43:12 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D944&lt;br /&gt;
c0   [Sat Oct 20 03:43:26 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D948&lt;br /&gt;
c0   [Sat Oct 20 16:37:52 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879060&lt;br /&gt;
c0   [Sat Oct 20 16:38:15 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879062&lt;br /&gt;
c0   [Sat Oct 20 21:00:18 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Oct 20 23:49:01 2012]  WARNING   Sector repair completed: port=1, LBA=0x4473E908&lt;br /&gt;
c0   [Sun Oct 21 03:42:26 2012]  WARNING   Sector repair completed: port=1, LBA=0x175BADD5&lt;br /&gt;
c0   [Tue Oct 23 01:09:04 2012]  WARNING   Sector repair completed: port=1, LBA=0x6E524860&lt;br /&gt;
c0   [Fri Oct 26 03:21:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x802C61&lt;br /&gt;
c0   [Fri Oct 26 04:22:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x176353CD&lt;br /&gt;
c0   [Sat Oct 27 00:01:03 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 27 00:49:35 2012]  WARNING   Sector repair completed: port=0, LBA=0x802C65&lt;br /&gt;
c0   [Sat Oct 27 17:02:24 2012]  WARNING   Sector repair completed: port=1, LBA=0xC1FF26D&lt;br /&gt;
c0   [Sat Oct 27 17:09:06 2012]  WARNING   Sector repair completed: port=0, LBA=0xDF621AD&lt;br /&gt;
c0   [Sat Oct 27 21:30:57 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Oct 30 00:20:46 2012]  WARNING   Sector repair completed: port=0, LBA=0xE9FE2AB&lt;br /&gt;
c0   [Wed Oct 31 02:02:03 2012]  WARNING   Sector repair completed: port=0, LBA=0x1460C25&lt;br /&gt;
c0   [Wed Oct 31 02:04:05 2012]  WARNING   Sector repair completed: port=0, LBA=0x1460C28&lt;br /&gt;
c0   [Thu Nov  1 00:48:34 2012]  WARNING   Sector repair completed: port=1, LBA=0xA7C92BE&lt;br /&gt;
c0   [Thu Nov  1 05:04:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C252C2&lt;br /&gt;
&lt;br /&gt;
[root@backup1 /data/deprecated]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for failed drives and errors. From the output above, drives p0 and p1 should probably be replaced (p1 is already reporting DEVICE-ERROR), yet the RAID array still reports healthy. The alarm log also shows the automatic verifies.&lt;br /&gt;
&lt;br /&gt;
Note: when a degraded unit is rebuilding, the CLI shows no rebuild progress:&lt;br /&gt;
&amp;lt;pre&amp;gt;Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u0    RAID-5    REBUILDING     0       -       64K     4656.56   OFF    ON&amp;lt;/pre&amp;gt;&lt;br /&gt;
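Since the CLI shows no percentage while the unit rebuilds, a small polling loop can watch for the status to change instead. This is a sketch: it assumes tw_cli is in the PATH and the array is unit u0 on controller c0, and the live command line is left commented out.&lt;br /&gt;

```shell
#!/bin/sh
# Return success if tw_cli status text (read from stdin) shows a
# unit in the REBUILDING state, matching the output format above.
is_rebuilding() {
    grep -q 'REBUILDING'
}

# On a live box (assumes controller c0, unit u0; adjust as needed):
# while tw_cli /c0/u0 show | is_rebuilding; do sleep 300; done

# Demo against a captured status line:
printf 'u0    RAID-5    REBUILDING     0\n' | is_rebuilding && echo rebuilding
```

The grep test is kept in a function so the same check works against either a live `tw_cli` call or saved output.&lt;br /&gt;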
&lt;br /&gt;
&lt;br /&gt;
=== areca ===&lt;br /&gt;
&lt;br /&gt;
We are using an Areca controller on backup3.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@newbackup3 ~]# sh /root/verify.sh&lt;br /&gt;
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State&lt;br /&gt;
===============================================================================&lt;br /&gt;
  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(19.7%)&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
 #  Name             Disks TotalCap  FreeCap DiskChannels       State&lt;br /&gt;
===============================================================================&lt;br /&gt;
 1  Raid Set # 00        6 6000.0GB    0.0GB 123456             Checking&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
Date-Time            Device           Event Type            Elapsed Time Errors&lt;br /&gt;
===============================================================================&lt;br /&gt;
2012-12-05 20:40:58  ARC-1160-VOL#00  Start Checking&lt;br /&gt;
2012-12-01 05:06:04  ARC-1160-VOL#00  Complete Init         027:30:45&lt;br /&gt;
2012-11-30 01:35:19  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-06 01:34:52  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-30 01:33:36  ARC-1160-VOL#00  Stop Initialization   000:31:48&lt;br /&gt;
2012-11-30 01:01:47  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-06 00:58:13  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-30 00:57:26  ARC-1160-VOL#00  Stop Initialization   000:57:07&lt;br /&gt;
2012-11-30 00:00:19  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 23:56:48  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:52:58  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:50:14  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:43:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 23:10:07  ARC-1160-VOL#00  Stop Initialization   000:00:56&lt;br /&gt;
2012-11-29 23:09:11  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 23:08:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 23:08:10  ARC-1160-VOL#00  Stop Initialization   000:20:41&lt;br /&gt;
2012-11-29 22:47:29  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 22:46:59  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:45:55  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:44:53  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:42:06  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:40:50  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 22:40:04  ARC-1160-VOL#00  Stop Initialization   000:24:25&lt;br /&gt;
2012-11-29 22:15:38  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 22:15:11  000:000001215B00 Restart Init LBA Point&lt;br /&gt;
2026-08-05 22:15:10  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 21:56:38  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 21:56:12  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 21:56:04  IDE Channel #03  Device Inserted&lt;br /&gt;
2012-11-29 21:55:13  IDE Channel #04  Device Inserted&lt;br /&gt;
2012-11-29 21:55:03  IDE Channel #02  Device Inserted&lt;br /&gt;
2026-08-05 21:53:09  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:51:46  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:49:56  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:48:29  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:46:29  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:44:49  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:43:01  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:36:25  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:31:18  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:30:08  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:08:40  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:06:11  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:05:14  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:03:58  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:00:56  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:57:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:56:15  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:55:05  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 17:24:36  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 17:22:43  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:50:42  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:47:33  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:43:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:18:52  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:17:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:13:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:10:26  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:09:23  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:08:09  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:07:12  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:05:51  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:04:27  H/W MONITOR      Raid Powered On&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for failed drives and errors. &lt;br /&gt;
&lt;br /&gt;
When it proceeds to verifying, you can confirm with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@newbackup3 ~]# cli64 vsf info&lt;br /&gt;
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State&lt;br /&gt;
===============================================================================&lt;br /&gt;
  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(22.5%)&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
[root@newbackup3 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
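To avoid re-running &amp;lt;tt&amp;gt;cli64 vsf info&amp;lt;/tt&amp;gt; by hand, the check percentage can be pulled out of its output with a short helper. A sketch, assuming the &quot;Checking(NN.N%)&quot; state format shown above; the live polling loop is commented out.&lt;br /&gt;

```shell
#!/bin/sh
# Extract the first "Checking(NN.N%)" token from cli64 vsf info
# output read on stdin.
check_progress() {
    grep -o 'Checking([0-9.]*%)' | head -n 1
}

# On the live box you might wait for the check to finish:
# while cli64 vsf info | grep -q Checking; do sleep 600; done

# Demo against a captured line (prints "Checking(22.5%)"):
printf '  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(22.5%%)\n' | check_progress
```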
&lt;br /&gt;
== Update OS list ==&lt;br /&gt;
&lt;br /&gt;
# check for any new VZ templates we want to offer: &amp;lt;tt&amp;gt;vzup2date -z&amp;lt;/tt&amp;gt;&lt;br /&gt;
# see if there are any OSes we want to include in our colo install list. Update 2 places: &amp;lt;tt&amp;gt;signup/html/colo_quote.html&amp;lt;/tt&amp;gt; &amp;amp; &amp;lt;tt&amp;gt;signup/html/step1.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
# update the mgmt database (ref_templates table, ref_systems table).&lt;br /&gt;
&lt;br /&gt;
= Infrequent tasks =&lt;br /&gt;
&lt;br /&gt;
== Free up space on gateway ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;newgateway /var/spool# cd clientmqueue/&lt;br /&gt;
newgateway /var/spool/clientmqueue# sh&lt;br /&gt;
# for f in `ls`; do rm $f; done&lt;br /&gt;
exit&amp;lt;/pre&amp;gt;&lt;br /&gt;
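The backtick-ls loop above exists because a plain &amp;lt;tt&amp;gt;rm *&amp;lt;/tt&amp;gt; fails with &quot;Argument list too long&quot; when clientmqueue is very full. An equivalent approach (a sketch; point it at &amp;lt;tt&amp;gt;/var/spool/clientmqueue&amp;lt;/tt&amp;gt; on the gateway) uses find, which never builds one giant argument list:&lt;br /&gt;

```shell
#!/bin/sh
# Delete every file under a spool directory without hitting ARG_MAX,
# unlike `rm *` in a very full clientmqueue. The directory is a
# parameter so the function can be tested against a throwaway dir.
purge_spool() {
    find "$1" -type f -delete
}

# Demo against a temporary directory:
d=$(mktemp -d)
touch "$d/dfA1B2C3" "$d/qfA1B2C3"
purge_spool "$d"
ls -A "$d"    # directory is now empty
rmdir "$d"
```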
&lt;br /&gt;
== Free up space on mail ==&lt;br /&gt;
&lt;br /&gt;
You can clear out root mail:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /var/log# ll -h /var/mail/root&lt;br /&gt;
-rw-------  1 root  mail    543K Dec 19 13:05 /var/mail/root&lt;br /&gt;
mail /var/log# rm /var/mail/root&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Or you can archive mail logs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /var/log# ls -l htt*&lt;br /&gt;
-rw-r--r--  1 root  wheel  297436931 Dec 19 13:26 httpd-access.log&lt;br /&gt;
-rw-r--r--  1 root  wheel    9824324 Jul  4 11:34 httpd-access.log.old.0.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    6884137 Mar 17  2012 httpd-access.log.old.1.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   18557444 Dec  3  2009 httpd-access.log.old.10.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14740263 Jan  9  2007 httpd-access.log.old.11.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14209465 Nov 28  2007 httpd-access.log.old.12.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   16874396 Feb 19  2012 httpd-access.log.old.3.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14554859 Jul 22  2011 httpd-access.log.old.4.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10513227 Feb 18  2011 httpd-access.log.old.5.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    7201946 Oct 29  2010 httpd-access.log.old.6.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10062537 May  6  2010 httpd-access.log.old.7.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10157042 Aug 12  2010 httpd-access.log.old.8.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   11909534 Mar  4  2010 httpd-access.log.old.9.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   59030930 Dec 19 13:01 httpd-error.log&lt;br /&gt;
-rw-r--r--  1 root  wheel    3413134 Mar  4  2010 httpd-error.log.0.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel     795515 May  1  2007 httpd-error.log.1.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    1142153 Nov 30  2007 httpd-error.log.2.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    2325801 Feb 18  2011 httpd-error.log.gz&lt;br /&gt;
&lt;br /&gt;
mail /var/log# sh&lt;br /&gt;
# for f in 12 11 10 9 8 7 6 5 4 3 2 1 0; do g=`echo $f+1|bc`; mv httpd-access.log.old.$f.gz httpd-access.log.old.$g.gz; done&lt;br /&gt;
# mv httpd-access.log httpd-access.log.old.0&lt;br /&gt;
# touch httpd-access.log&lt;br /&gt;
# apachectl restart&lt;br /&gt;
# gzip httpd-access.log.old.0&lt;br /&gt;
&lt;br /&gt;
# for f in 2 1 0; do g=`echo $f+1|bc`; mv httpd-error.log.$f.gz httpd-error.log.$g.gz; done&lt;br /&gt;
# mv httpd-error.log httpd-error.log.0&lt;br /&gt;
# touch httpd-error.log&lt;br /&gt;
# apachectl restart&lt;br /&gt;
# gzip httpd-error.log.0&lt;br /&gt;
# exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
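The renumbering loops above can be wrapped in a small helper. This is only a sketch (the &amp;lt;tt&amp;gt;rotate&amp;lt;/tt&amp;gt; name and arguments are made up here, and shell arithmetic replaces the &amp;lt;tt&amp;gt;bc&amp;lt;/tt&amp;gt; call); the mv/touch/apachectl/gzip steps stay exactly as documented:&lt;br /&gt;

```shell
# rotate LOG MAX: shift LOG.old.N.gz up to LOG.old.N+1.gz for
# N = MAX down to 0 (highest first, so nothing gets overwritten).
rotate() {
    log=$1 max=$2
    f=$max
    while [ "$f" -ge 0 ]; do
        g=$((f + 1))                 # plain shell arithmetic, no bc needed
        if [ -f "$log.old.$f.gz" ]; then
            mv "$log.old.$f.gz" "$log.old.$g.gz"
        fi
        f=$((f - 1))
    done
}

# e.g. `rotate httpd-access.log 12`, then move the live log to
# httpd-access.log.old.0, touch a fresh log, apachectl restart,
# and gzip the .old.0 file -- the same manual steps as above.
```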
&lt;br /&gt;
== Free up space on bwdb2 ==&lt;br /&gt;
&lt;br /&gt;
You can either remove items from &amp;lt;tt&amp;gt;/usr/home/archive&amp;lt;/tt&amp;gt; or scp them to backup3:/data/bwdb2/archive.&lt;br /&gt;
&lt;br /&gt;
== Free up space on backup1 ==&lt;br /&gt;
&lt;br /&gt;
backup1 is our primary customer backup system. As usage grows, it needs to be purged of old files regularly. The easiest way to do this is by removing deprecated files, which mostly consist of cancelled customers&#039; data or temporary dump/storage files (created during dumps/restores). Our standard policy is to hang onto cancelled customers&#039; files for 6 months, after which we remove them (as far as customers know their data is purged immediately, but we keep it just in case; sometimes we cancel a server for non-payment, and keeping the files makes it easy to restore the system). To find files to remove:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 ~]# cd /data/deprecated/&lt;br /&gt;
[root@backup1 /data/deprecated]# ls&lt;br /&gt;
2101-migrated-20120317.tgz                old-683-cxld-20121021.tgz&lt;br /&gt;
69.55.230.2-wwwbackup                     old-744-cxld-20120708.tgz&lt;br /&gt;
991-DONTDELETE.tgz                        old-809-cxld-20120609.tgz&lt;br /&gt;
archive-col02050-mdfile-cxld-20120409.gz  old-854-cxld-20120621.tgz&lt;br /&gt;
col01371.tgz                              old-931-cxld-20060513.tgz&lt;br /&gt;
deleteme_ubuntu-10.10-x86_20111205        old-col00123-mdfile-noarchive-20120417.gz&lt;br /&gt;
jail10_old                                old-col00147-vnfile-cxld-20120828.gz&lt;br /&gt;
jail14_rsync_old                          old-col00419-dump-cxld-20120224.gz&lt;br /&gt;
jail15_old                                old-col01098-vnfile-cxld-20120827.gz&lt;br /&gt;
jail3_old                                 old-col01278-dump-cxld-20120822&lt;br /&gt;
jail4_old                                 old-col01517-dump-cxld-20120828&lt;br /&gt;
jail5_old                                 old-col01669-dump-cxld-20120203.gz&lt;br /&gt;
old-1009-cxld-20120608.tgz                old-col01687-dump-cxld-20120909&lt;br /&gt;
old-1012-cxld-20120411.tgz                old-col01790-dump-cxld-20120828&lt;br /&gt;
old-1052-cxld-20120721.tgz                old-col01812-dump-cxld-20120820&lt;br /&gt;
old-10631-cxld-20120622.tgz               old-col01938-mdfile-cxld-20120619.gz&lt;br /&gt;
old-10632-cxld-20120622.tgz               old-col02095-mdfile-noarchive-20120523.gz&lt;br /&gt;
old-10633-cxld-20120622.tgz               olddebian-3.0-v15-20110610.tgz&lt;br /&gt;
old-1236-cxld-20120621.tgz                oldmod_frontpage-deb30-v15-20110610.tgz&lt;br /&gt;
old-1381-cxld-20120404.tgz                oldmod_perl-deb30-v15-20110610.tgz&lt;br /&gt;
old-1422-cxld-20120721.tgz                oldmod_ssl-deb30-v15-20110610.tgz&lt;br /&gt;
old-14681-cxld-20120619.tgz               oldmysql-deb30-v15-20110610.tgz&lt;br /&gt;
old-1544-cxld-20120626.tgz                oldproftpd-deb30-v15-20110610.tgz&lt;br /&gt;
old-18351-cxld-20120605.tgz               old_virt14&lt;br /&gt;
old-1853-cxld-20120910.tgz                old_virt18&lt;br /&gt;
old-1963-cxld-20120206.tgz                oldwebmin-deb30-v15-20110610.tgz&lt;br /&gt;
old-1967-cxld-20120605.tgz                suse.virt11.20120421.tgz&lt;br /&gt;
old-1981-noarchive-20120729.tgz           virt11&lt;br /&gt;
old-2030-migrated-noarchive-20120727.tgz  virt12_old&lt;br /&gt;
old-2037-cxld-20120716.tgz                virt13_old&lt;br /&gt;
old-2065-cxld-20120727.tgz                virt16_old&lt;br /&gt;
old-2068-cxld-20120424.tgz                virt4_old&lt;br /&gt;
old-2085-cxld-20120531.tgz                virt5_old&lt;br /&gt;
old-364-cxld-20120904.tgz                 virt6_old&lt;br /&gt;
old-446-cxld-20120512.tgz                 virt7_old&lt;br /&gt;
old-613-cxld-20120601.tgz                 virt8_old&lt;br /&gt;
[root@backup1 /data/deprecated]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
virtX_old and jailX_old are permanently archived, so ignore those, as well as anything else marked not to delete or otherwise suspicious. Likewise, it&#039;s a good idea to hang onto the oldTEMPLATE.gz files as long as we can.&lt;br /&gt;
Most of the stuff we want to delete is stamped with the date it was deprecated, making this easy. So to remove files from 6 months ago (running this in Oct):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 /data/deprecated]# ls old*201204*&lt;br /&gt;
old-1012-cxld-20120411.tgz  old-2068-cxld-20120424.tgz&lt;br /&gt;
old-1381-cxld-20120404.tgz  old-col00123-mdfile-noarchive-20120417.gz&lt;br /&gt;
[root@backup1 /data/deprecated]# rm old*201204*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
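The &amp;lt;tt&amp;gt;201204&amp;lt;/tt&amp;gt; in the glob is just the YYYYMM deprecation stamp. As a sketch, the stamp for &amp;quot;six months ago&amp;quot; can be computed rather than worked out by hand (this assumes GNU date, as on our Linux boxes; on FreeBSD the equivalent is &amp;lt;tt&amp;gt;date -v-6m +%Y%m&amp;lt;/tt&amp;gt;):&lt;br /&gt;

```shell
# Compute the YYYYMM stamp for six months ago (GNU date syntax).
stamp=$(date -d '6 months ago' +%Y%m)

# Always review the match list (ls) before deleting (rm):
echo "would run: ls old*${stamp}*  then  rm old*${stamp}*"
```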
&lt;br /&gt;
Every few months you will also want to remove some of the snapshot archives for mail.  We typically save the 1st, 10th, and 20th of each month.  To do this, set aside the dates you want to keep, remove a month at a time, then restore the set-aside dates. Here&#039;s how that works:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 /data/www/daily]# ls&lt;br /&gt;
05                     08-10-11  10-04-10  11-10-10  12-07-29  12-09-21  12-11-14&lt;br /&gt;
06                     08-10-21  10-04-20  11-10-20  12-07-30  12-09-22  12-11-15&lt;br /&gt;
06-06-01-usr-home.tgz  08-11-01  10-05-01  11-11-01  12-07-31  12-09-23  12-11-16&lt;br /&gt;
06-07-01-usr-home.tgz  08-11-10  10-05-11  11-11-10  12-08-01  12-09-24  12-11-17&lt;br /&gt;
06-08-01-usr-home.tgz  08-11-20  10-05-20  11-11-20  12-08-02  12-09-25  12-11-18&lt;br /&gt;
06-09-01-usr-home.tgz  08-12-01  10-06-01  11-12-01  12-08-03  12-09-26  12-11-19&lt;br /&gt;
06-11-10               08-12-10  10-06-10  11-12-10  12-08-04  12-09-27  12-11-20&lt;br /&gt;
06-12-21               08-12-20  10-06-20  11-12-20  12-08-05  12-09-28  12-11-21&lt;br /&gt;
07-01-10               09-01-01  10-07-01  12-01-01  12-08-06  12-09-29  12-11-22&lt;br /&gt;
07-01-20               09-01-10  10-07-10  12-01-10  12-08-07  12-09-30  12-11-23&lt;br /&gt;
07-02-10               09-01-20  10-07-20  12-01-20  12-08-08  12-10-01  12-11-24&lt;br /&gt;
07-02-20               09-02-01  10-08-01  12-02-01  12-08-09  12-10-02  12-11-25&lt;br /&gt;
07-03-01               09-02-10  10-08-10  12-02-10  12-08-10  12-10-03  12-11-26&lt;br /&gt;
07-03-20               09-02-20  10-08-20  12-02-20  12-08-11  12-10-04  12-11-27&lt;br /&gt;
07-04-01               09-03-01  10-09-01  12-03-01  12-08-12  12-10-05  12-11-28&lt;br /&gt;
07-04-10               09-03-10  10-09-10  12-03-10  12-08-13  12-10-06  12-11-29&lt;br /&gt;
07-04-20               09-03-20  10-09-20  12-03-20  12-08-14  12-10-07  12-11-30&lt;br /&gt;
07-05-01               09-04-01  10-10-01  12-04-01  12-08-15  12-10-08  12-12-01&lt;br /&gt;
07-05-10               09-04-10  10-10-10  12-04-10  12-08-16  12-10-09  12-12-02&lt;br /&gt;
07-05-20               09-04-20  10-10-20  12-04-20  12-08-17  12-10-10  12-12-03&lt;br /&gt;
07-06-01               09-05-01  10-11-01  12-05-01  12-08-18  12-10-11  12-12-04&lt;br /&gt;
07-06-10               09-05-10  10-11-10  12-05-10  12-08-19  12-10-12  12-12-05&lt;br /&gt;
07-06-20               09-05-20  10-11-20  12-05-20  12-08-20  12-10-13  12-12-06&lt;br /&gt;
07-07-20               09-06-01  10-12-01  12-06-01  12-08-21  12-10-14  12-12-07&lt;br /&gt;
07-08-10               09-06-10  10-12-10  12-06-10  12-08-22  12-10-15  12-12-08&lt;br /&gt;
07-08-20               09-06-20  10-12-20  12-06-20  12-08-23  12-10-16  12-12-09&lt;br /&gt;
07-09-01               09-07-01  11-01-01  12-07-01  12-08-24  12-10-17  12-12-10&lt;br /&gt;
07-10-01               09-07-10  11-01-10  12-07-02  12-08-25  12-10-18  12-12-11&lt;br /&gt;
07-10-10               09-07-20  11-01-21  12-07-03  12-08-26  12-10-19  12-12-12&lt;br /&gt;
07-10-20               09-08-01  11-02-01  12-07-04  12-08-27  12-10-20  12-12-13&lt;br /&gt;
07-12-01               09-08-10  11-02-10  12-07-05  12-08-28  12-10-21  12-12-14&lt;br /&gt;
07-12-10               09-08-20  11-02-20  12-07-06  12-08-29  12-10-22  12-12-15&lt;br /&gt;
08-01-01               09-09-01  11-03-01  12-07-07  12-08-30  12-10-23  12-12-16&lt;br /&gt;
08-01-20               09-09-10  11-03-10  12-07-08  12-08-31  12-10-24  12-12-17&lt;br /&gt;
08-02-20               09-09-20  11-03-20  12-07-09  12-09-01  12-10-25  12-12-18&lt;br /&gt;
08-03-01               09-10-01  11-04-01  12-07-10  12-09-02  12-10-26  12-12-19&lt;br /&gt;
08-03-10               09-10-10  11-04-10  12-07-11  12-09-03  12-10-27  12-12-20&lt;br /&gt;
08-03-20               09-10-20  11-04-20  12-07-12  12-09-04  12-10-28  12-12-21&lt;br /&gt;
08-04-01               09-11-01  11-05-01  12-07-13  12-09-05  12-10-29  12-12-22&lt;br /&gt;
08-04-20               09-11-10  11-05-10  12-07-14  12-09-06  12-10-30  12-12-23&lt;br /&gt;
08-05-01               09-11-20  11-05-20  12-07-15  12-09-07  12-10-31  12-12-24&lt;br /&gt;
08-05-10               09-12-01  11-06-01  12-07-16  12-09-08  12-11-01  12-12-25&lt;br /&gt;
08-06-10               09-12-10  11-06-10  12-07-17  12-09-09  12-11-02  12-12-26&lt;br /&gt;
08-06-20               09-12-20  11-06-20  12-07-18  12-09-10  12-11-03  12-12-27&lt;br /&gt;
08-07-02               10-01-01  11-07-01  12-07-19  12-09-11  12-11-04  12-12-28&lt;br /&gt;
08-07-10               10-01-10  11-07-10  12-07-20  12-09-12  12-11-05  2008-10-23&lt;br /&gt;
08-07-20               10-01-20  11-07-20  12-07-21  12-09-13  12-11-06  bb.tgz&lt;br /&gt;
08-08-01               10-02-01  11-08-01  12-07-22  12-09-14  12-11-07  boot&lt;br /&gt;
08-08-10               10-02-10  11-08-10  12-07-23  12-09-15  12-11-08  current&lt;br /&gt;
08-08-21               10-02-20  11-08-20  12-07-24  12-09-16  12-11-09  hold&lt;br /&gt;
08-09-01               10-03-01  11-09-01  12-07-25  12-09-17  12-11-10&lt;br /&gt;
08-09-10               10-03-10  11-09-10  12-07-26  12-09-18  12-11-11&lt;br /&gt;
08-09-21               10-03-20  11-09-20  12-07-27  12-09-19  12-11-12&lt;br /&gt;
08-10-01               10-04-01  11-10-01  12-07-28  12-09-20  12-11-13&lt;br /&gt;
[root@backup1 /data/www/daily]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So we see that everything before July 2012 has already been pruned. To prune July 2012 we do the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mv 12-07-01 hold&lt;br /&gt;
mv 12-07-10 hold&lt;br /&gt;
mv 12-07-20 hold&lt;br /&gt;
rm -fr 12-07*&lt;br /&gt;
mv hold/* .&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=Routine_Maintenance&amp;diff=2203</id>
		<title>Routine Maintenance</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=Routine_Maintenance&amp;diff=2203"/>
		<updated>2020-02-25T17:57:50Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Outgoing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Daily Tasks =&lt;br /&gt;
&lt;br /&gt;
== check load graphs == &lt;br /&gt;
&lt;br /&gt;
Click on the Load link in mgmt &lt;br /&gt;
&lt;br /&gt;
This screen shows you load levels on our servers and network traffic for critical machines (firewalls, backup servers).&lt;br /&gt;
&lt;br /&gt;
If you see load high or increasing &lt;br /&gt;
&lt;br /&gt;
FreeBSD: &lt;br /&gt;
run [[VPS_Management#jtop|jtop]] (or [[VPS_Management#jt|jt]] on FreeBSD &amp;gt; 7.x) and see if there are any runaway processes.  Here are some examples of entries in top that are&lt;br /&gt;
definitely runaway processes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;79481 root      64   0  2256K  1056K CPU1   1  58:16 87.40% 87.40% nano&lt;br /&gt;
50650   1000    64   0  1852K  1112K RUN    0 207.9H 84.08% 84.08% screen&lt;br /&gt;
14829 www        2   0 39100K 31736K accept 0  104:24  46.54%  6.54% httpd&lt;br /&gt;
42065 root      61   0  1300K   844K RUN    1  47.8H 91.36% 91.36% ee&lt;br /&gt;
1328 www       56   0 18440K 10796K CPU1   0  64.4H 97.71% 97.71% httpd&lt;br /&gt;
26251 user      57   0  6124K  1160K CPU1   1  82.9H 98.44% 98.44% screen&lt;br /&gt;
89874 root      60   0  1352K   892K RUN    1  33.8H 65.82% 65.82% dialog&lt;br /&gt;
38656   1000    64   0  3088K  2136K CPU0   0 806:13 97.95% 97.95% StutBot&lt;br /&gt;
27630 root      64   0  1396K   972K RUN    1  76.8H 86.47% 86.47% ee&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Linux:&lt;br /&gt;
run [[VPS_Management#vwe|vwe]] to see which VPSes have high loads. From there run &amp;lt;tt&amp;gt;[[VPS_Management#vp|vp]] &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt; and/or &amp;lt;tt&amp;gt;[[VPS_Management#vt|vt]] &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt; to see what&#039;s going on in that system.&lt;br /&gt;
[[VPS_Management#vzstat|vzstat]] will also give you a nice picture of what&#039;s going on; systems with high numbers in the mlat column are the likely culprits.&lt;br /&gt;
&lt;br /&gt;
Examples of out-of-control procs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;12183 nobody    16   0  4916 1348  1340 R    45.5  0.0  4249m httpd&lt;br /&gt;
29266 #502      16   0  1852  796   792 R    22.5  0.0  1104m vim&lt;br /&gt;
23860 #41       16   0  5472 5472  2076 R    98.9  0.2  31:41 python&lt;br /&gt;
19227 bin       19   0  1688  716   652 R    99.9  0.0 321:08 wtrs_ui&lt;br /&gt;
 7762 apache    16   0   268  236   224 R    85.7  0.0  1010m ptrace&lt;br /&gt;
 4624 #501      20   0  4304 2400  2044 R    53.6  0.1 284:32 YoSucker&lt;br /&gt;
20451 #506      20   0  1876  820   816 R    17.2  0.0 169:35 vim&lt;br /&gt;
 8834 #514      20   0   900  724   672 R    77.6  0.0 382:30 neostats&lt;br /&gt;
31815 apache    14   0  3176 3176  1696 R    74.4  0.1   6:15 counter&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Just kill -9 them and be done with it.&lt;br /&gt;
&lt;br /&gt;
Also, anytime you see `kmod` or `ptrace` - kill those immediately no matter how much CPU they are using - they are attempts to exploit the Linux ptrace bug.  They won&#039;t work, but they suck a lot of CPU...&lt;br /&gt;
&lt;br /&gt;
Also, any other processes that are at 90-100% CPU usage and have been running for a long period of time should be killed - except for mysqld processes (see the exception below).&lt;br /&gt;
&lt;br /&gt;
However, there is an exception:&lt;br /&gt;
&lt;br /&gt;
If it is a mysqld, we don&#039;t want to kill their database. Run &amp;lt;tt&amp;gt;[[VPS_Management#jpid|jpid]] &amp;lt;pid&amp;gt;&amp;lt;/tt&amp;gt; to see who owns it, then email them the paste containing the instructions for the nanny. Or you can simply do a &amp;lt;tt&amp;gt;kill -1 PID&amp;lt;/tt&amp;gt; on the process to restart it.&lt;br /&gt;
&lt;br /&gt;
=== Load averages jump at night ===&lt;br /&gt;
&lt;br /&gt;
The load averages on the FreeBSD systems may jump up at night between 1 and 4 am because the backups are running. If that is the cause, you will see processes like `rsync` in top eating a lot of CPU time.&lt;br /&gt;
&lt;br /&gt;
== check backups ==&lt;br /&gt;
&lt;br /&gt;
mgmt -&amp;gt; Monitoring -&amp;gt; Backups: make sure every machine was backed up the previous night.&lt;br /&gt;
Also look at df on backup1 and backup2 to make sure no disk is approaching full, though bb should warn us in advance. Please note: when a backup script on any system hits an error, it generates an email to support@johncompanies.com, so you will know the next day if a directory to be backed up has been moved or no longer exists. A paste exists for notifying the customer of a non-existent file/dir.&lt;br /&gt;
&lt;br /&gt;
== check bb for warnings  ==&lt;br /&gt;
mgmt -&amp;gt; BigBrother&lt;br /&gt;
&lt;br /&gt;
Some events don&#039;t generate pages (on purpose). You will only see them by going to the bb main page.&lt;br /&gt;
&lt;br /&gt;
== check jail5 for crashed VPSs  ==&lt;br /&gt;
&lt;br /&gt;
On jail5&lt;br /&gt;
  &lt;br /&gt;
  notrunning&lt;br /&gt;
&lt;br /&gt;
To restart a VPS &lt;br /&gt;
&lt;br /&gt;
  vm restart col0XXXX&lt;br /&gt;
&lt;br /&gt;
== check NetHere  ==&lt;br /&gt;
&lt;br /&gt;
Check the NetHere servers&lt;br /&gt;
&lt;br /&gt;
=== Mail systems ===&lt;br /&gt;
Check for possible spammers.&lt;br /&gt;
&lt;br /&gt;
==== Incoming ====&lt;br /&gt;
Check the count of customer logins on mta-1 and mta-2 for possible&lt;br /&gt;
customer spam compromises.&lt;br /&gt;
&lt;br /&gt;
  login_count /logs/maillog | tail -30&lt;br /&gt;
  &lt;br /&gt;
==== Outgoing ====&lt;br /&gt;
&lt;br /&gt;
Check outgoing queues on relay-1 and relay-2&lt;br /&gt;
  &lt;br /&gt;
  mail_count | tail -30&lt;br /&gt;
&lt;br /&gt;
To clean up the outgoing queues of unwanted spam:&lt;br /&gt;
&lt;br /&gt;
  mail_cleanup [&amp;lt;sender&#039;s domain&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
=== Nagios ===&lt;br /&gt;
&lt;br /&gt;
Check for other problems on NetHere servers&lt;br /&gt;
&lt;br /&gt;
  https://nagios.nethere.net&lt;br /&gt;
&lt;br /&gt;
=== Cacti ===&lt;br /&gt;
&lt;br /&gt;
Check bandwidth usage on servers&lt;br /&gt;
&lt;br /&gt;
  https://cacti.nethere.net&lt;br /&gt;
&lt;br /&gt;
= Monthly Tasks =&lt;br /&gt;
== rotate pine sent mail (1st of month) ==&lt;br /&gt;
On the 1st of the month, before any emails are sent out, quit out of pine, then log back in. Sent mail from last month will be archived.&lt;br /&gt;
If you mess up and do it on the 3rd (for example), you can go into the previous month&#039;s saved email and save emails from the current month into the &amp;lt;tt&amp;gt;sent-mail&amp;lt;/tt&amp;gt; (current month) mailbox.&lt;br /&gt;
&lt;br /&gt;
== b/w caps ==&lt;br /&gt;
On the 1st: remove any bwcaps put into the firewall (only really applies if a bwcap was added because someone went over on b/w):&lt;br /&gt;
&lt;br /&gt;
 ipfw list|grep pipe&lt;br /&gt;
 ipfw del [each rule listed]&lt;br /&gt;
&lt;br /&gt;
NOTE: this cronjob on newgateway will do some of that for you, provided you used one of the following pipe #s:&lt;br /&gt;
 0 0 1 * * /sbin/ipfw del 3  4 5 17331&lt;br /&gt;
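Rather than hardcoding pipe rule numbers, the numbers can be pulled out of the &amp;lt;tt&amp;gt;ipfw list&amp;lt;/tt&amp;gt; output itself. Since ipfw only exists on the gateway, here is a sketch of just the parsing step against made-up sample lines:&lt;br /&gt;

```shell
# Two illustrative lines in `ipfw list` format; the rule number
# is the first field on each line.
sample='00003 pipe 1 ip from 69.55.230.5 to any
00017 pipe 2 ip from 69.55.231.9 to any'

echo "$sample" | grep pipe | awk '{print $1}'   # prints 00003 and 00017

# On the real gateway this would drive the deletions:
#   for r in $(ipfw list | grep pipe | awk '{print $1}'); do ipfw del $r; done
```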
&lt;br /&gt;
-------------------&lt;br /&gt;
&lt;br /&gt;
We really don’t do this anymore since we have centralized traffic accounting with netflow, but for posterity:&lt;br /&gt;
&lt;br /&gt;
Make sure all machines reset counters to 0 after midnight on the 1st&lt;br /&gt;
Make sure they dumped a counter&lt;br /&gt;
&lt;br /&gt;
On each jail run:&lt;br /&gt;
 trafficgather.pl&lt;br /&gt;
&lt;br /&gt;
And on each virt:&lt;br /&gt;
 linuxtrafficgather.pl&lt;br /&gt;
&lt;br /&gt;
== Monthly RAID checks ==&lt;br /&gt;
&lt;br /&gt;
Every month we check the health of, and verify the parity on, all our RAID-based systems.&lt;br /&gt;
To facilitate this, we&#039;ve created a simple script to start the process:&lt;br /&gt;
&lt;br /&gt;
 sh /root/verify.sh&lt;br /&gt;
&lt;br /&gt;
=== Adaptec controllers ===&lt;br /&gt;
Here&#039;s some sample output:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /usr/local/www/scripts# sh /root/verify.sh&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; container list /f&lt;br /&gt;
Executing: container list /full=TRUE&lt;br /&gt;
Num          Total  Oth Chunk          Scsi   Partition&lt;br /&gt;
Creation        System&lt;br /&gt;
Label Type   Size   Ctr Size   Usage   B:ID:L Offset:Size   State   RO Lk Task    Done%  Ent&lt;br /&gt;
Date   Time      Files&lt;br /&gt;
----- ------ ------ --- ------ ------- ------ ------------- ------- -- -- ------- ------ ---&lt;br /&gt;
------ -------- ------&lt;br /&gt;
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
 /dev/aacd0           mirror0          0:00:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
&lt;br /&gt;
 1    Mirror 33.9GB            Open    0:02:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
 /dev/aacd1           mirror1          0:03:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; disk list /f&lt;br /&gt;
Executing: disk list /full=TRUE&lt;br /&gt;
&lt;br /&gt;
B:ID:L  Device Type     Removable media  Vendor-ID Product-ID        Rev   Blocks    Bytes/Bl&lt;br /&gt;
ock Usage            Shared Rate&lt;br /&gt;
------  --------------  ---------------  --------- ----------------  ----- --------- --------&lt;br /&gt;
--- ---------------- ------ ----&lt;br /&gt;
0:00:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:01:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:02:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:03:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; disk show smart&lt;br /&gt;
Executing: disk show smart&lt;br /&gt;
&lt;br /&gt;
        Smart    Method of         Enable&lt;br /&gt;
        Capable  Informational     Exception  Performance  Error&lt;br /&gt;
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count&lt;br /&gt;
------  -------  ----------------  ---------  -----------  ------&lt;br /&gt;
0:00:0     Y            6             Y           N             0&lt;br /&gt;
0:01:0     Y            6             Y           N             0&lt;br /&gt;
0:02:0     Y            6             Y           N             0&lt;br /&gt;
0:03:0     Y            6             Y           N             0&lt;br /&gt;
0:06:0     N&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; task list&lt;br /&gt;
Executing: task list&lt;br /&gt;
&lt;br /&gt;
Controller Tasks&lt;br /&gt;
&lt;br /&gt;
TaskId Function  Done%  Container State Specific1 Specific2&lt;br /&gt;
------ -------- ------- --------- ----- --------- ---------&lt;br /&gt;
&lt;br /&gt;
No tasks currently running on controller&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; dia sh hi&lt;br /&gt;
Executing: diagnostic show history&lt;br /&gt;
No switches specified, defaulting to &amp;quot;/current&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 *** HISTORY BUFFER FROM CURRENT CONTROLLER RUN ***&lt;br /&gt;
&lt;br /&gt;
[00]: GetDiskLogEntry: container - 1, entry return 0&lt;br /&gt;
[01]: Container 1 started SCRUB task&lt;br /&gt;
[02]: Starting Mirror:1 scrub&lt;br /&gt;
[03]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[04]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[05]: UpdateDiskLogIndex - Set   - container 0, index 1&lt;br /&gt;
[06]: GetDiskLogEntry: container - 0, entry return 1&lt;br /&gt;
[07]: Container 0 started SCRUB task&lt;br /&gt;
[08]: Starting Mirror:0 scrub&lt;br /&gt;
[09]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[10]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[11]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[12]: Clear disk log: sector - 80, driveno 2&lt;br /&gt;
[13]: Clear disk log: sector - 80, driveno 3&lt;br /&gt;
[14]: Container 1 completed SCRUB task:&lt;br /&gt;
[15]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[16]: Clear disk log: sector - 81, driveno 1&lt;br /&gt;
[17]: Clear disk log: sector - 81, driveno 0&lt;br /&gt;
[18]: Container 0 completed SCRUB task:&lt;br /&gt;
[19]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[20]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[21]: Container 0 started SCRUB task&lt;br /&gt;
[22]: Starting Mirror:0 scrub&lt;br /&gt;
[23]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[24]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[25]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[26]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[27]: Container 1 started SCRUB task&lt;br /&gt;
[28]: Starting Mirror:1 scrub&lt;br /&gt;
[29]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[30]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[31]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[32]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[33]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[34]: Container 1 completed SCRUB task:&lt;br /&gt;
[35]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[36]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[37]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[38]: Container 0 completed SCRUB task:&lt;br /&gt;
[39]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[40]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[41]: Container 0 started SCRUB task&lt;br /&gt;
[42]: Starting Mirror:0 scrub&lt;br /&gt;
[43]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[44]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[45]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[46]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[47]: Container 1 started SCRUB task&lt;br /&gt;
[48]: Starting Mirror:1 scrub&lt;br /&gt;
[49]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[50]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[51]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[52]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[53]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[54]: Container 1 completed SCRUB task:&lt;br /&gt;
[55]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[56]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[57]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[58]: Container 0 completed SCRUB task:&lt;br /&gt;
[59]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[60]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[61]: Container 0 started SCRUB task&lt;br /&gt;
[62]: Starting Mirror:0 scrub&lt;br /&gt;
[63]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[64]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[65]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[66]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[67]: Container 1 started SCRUB task&lt;br /&gt;
[68]: Starting Mirror:1 scrub&lt;br /&gt;
[69]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[70]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[71]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[72]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[73]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[74]: Container 1 completed SCRUB task:&lt;br /&gt;
[75]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[76]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[77]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[78]: Container 0 completed SCRUB task:&lt;br /&gt;
[79]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[80]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[81]: Container 0 started SCRUB task&lt;br /&gt;
[82]: Starting Mirror:0 scrub&lt;br /&gt;
[83]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[84]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[85]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[86]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[87]: Container 1 started SCRUB task&lt;br /&gt;
[88]: Starting Mirror:1 scrub&lt;br /&gt;
[89]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[90]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[91]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[92]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[93]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[94]: Container 1 completed SCRUB task:&lt;br /&gt;
[95]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[96]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[97]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[98]: Container 0 completed SCRUB task:&lt;br /&gt;
[99]:&lt;br /&gt;
&lt;br /&gt;
========================&lt;br /&gt;
History Output Complete.&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt;&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 0&lt;br /&gt;
Executing: container scrub 0&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 1&lt;br /&gt;
Executing: container scrub 1&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
when done run:                                                                       &lt;br /&gt;
&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nov  1 10:32:46 mail /kernel: aac0: **Monitor** Container 0 started SCRUB task&lt;br /&gt;
Nov  1 10:32:47 mail /kernel: aac0: **Monitor** Container 1 started SCRUB task&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here&#039;s an analysis of what we&#039;re seeing and what we&#039;re looking for:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; container list /f&lt;br /&gt;
Executing: container list /full=TRUE&lt;br /&gt;
Num          Total  Oth Chunk          Scsi   Partition&lt;br /&gt;
Creation        System&lt;br /&gt;
Label Type   Size   Ctr Size   Usage   B:ID:L Offset:Size   State   RO Lk Task    Done%  Ent&lt;br /&gt;
Date   Time      Files&lt;br /&gt;
----- ------ ------ --- ------ ------- ------ ------------- ------- -- -- ------- ------ ---&lt;br /&gt;
------ -------- ------&lt;br /&gt;
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
 /dev/aacd0           mirror0          0:00:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
&lt;br /&gt;
 1    Mirror 33.9GB            Open    0:02:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
 /dev/aacd1           mirror1          0:03:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows the health of the arrays. You&#039;re looking for &#039;&#039;Normal&#039;&#039; under the State column, and the absence of a ! in the Offset:Size field. Sometimes you&#039;ll see this:&lt;br /&gt;
 64.0KB!33.9GB &lt;br /&gt;
&lt;br /&gt;
That indicates a problem.&lt;br /&gt;
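As a quick sanity check, a pattern like the following can flag that marker in saved output. This is a hypothetical sketch, not part of our tooling; the sample lines are embedded for illustration.&lt;br /&gt;

```shell
# Sketch: flag the '!' marker in saved "container list /full" output.
# The sample lines below are embedded for illustration only.
clist='
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal
 1    Mirror 33.9GB            Open    0:02:0 64.0KB!33.9GB Normal
'
echo "$clist" | awk '/!/ {print "WARNING: container " $1 " shows ! in Offset:Size"}'
```

In practice you would paste or redirect the real CLI output in place of the sample variable.&lt;br /&gt;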
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; disk show smart&lt;br /&gt;
Executing: disk show smart&lt;br /&gt;
&lt;br /&gt;
        Smart    Method of         Enable&lt;br /&gt;
        Capable  Informational     Exception  Performance  Error&lt;br /&gt;
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count&lt;br /&gt;
------  -------  ----------------  ---------  -----------  ------&lt;br /&gt;
0:00:0     Y            6             Y           N             0&lt;br /&gt;
0:01:0     Y            6             Y           N             0&lt;br /&gt;
0:02:0     Y            6             Y           N             0&lt;br /&gt;
0:03:0     Y            6             Y           N             0&lt;br /&gt;
0:06:0     N&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows the SMART report for each drive. Look for nonzero values in the Error Count column; anything above zero deserves attention.&lt;br /&gt;
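If you want to scan that table mechanically, something like this works (a sketch only; the sample rows are embedded, with an invented nonzero count, not taken from a live controller):&lt;br /&gt;

```shell
# Sketch: print drives whose Error Count (last column) is nonzero.
# Sample rows embedded; the "3" on drive 0:01:0 is invented for illustration.
smart='
0:00:0     Y            6             Y           N             0
0:01:0     Y            6             Y           N             3
'
echo "$smart" | awk 'NF >= 6 && $NF + 0 > 0 {print "check drive " $1 ": error count " $NF}'
```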
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; task list&lt;br /&gt;
Executing: task list&lt;br /&gt;
&lt;br /&gt;
Controller Tasks&lt;br /&gt;
&lt;br /&gt;
TaskId Function  Done%  Container State Specific1 Specific2&lt;br /&gt;
------ -------- ------- --------- ----- --------- ---------&lt;br /&gt;
&lt;br /&gt;
No tasks currently running on controller&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for the absence of running tasks. Seeing a rebuild or verify that you didn&#039;t initiate is a bad sign.&lt;br /&gt;
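A sketch of how that table could be checked from a script; the Rebuild row below is invented to show what a hit would look like, it is not real controller output.&lt;br /&gt;

```shell
# Sketch: warn about any task rows in "task list" output.
# The Rebuild row is an invented example.
tasks='
TaskId Function  Done%  Container State Specific1 Specific2
------ -------- ------- --------- ----- --------- ---------
   101 Rebuild     42%      0      RUN
'
echo "$tasks" | awk '$1 ~ /^[0-9]+$/ {print "unexpected task: " $2 " on container " $4}'
```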
&lt;br /&gt;
With the history output, you&#039;re looking for any anomalies or events since the last time a verify was run. If you see a drive with lots of problems, you may want to take backups before allowing the verify to run since it could replicate errors onto the good drive.&lt;br /&gt;
&lt;br /&gt;
After you see the history output, the script prompts you to press enter to run the verify. If you&#039;re happy with everything you&#039;re seeing (mirror is healthy, history looks good), it&#039;s safe to proceed; otherwise ^C to exit. After you hit enter, it starts the verify and begins tailing the messages log file (so you can easily see when the verify is complete). Here&#039;s what that&#039;ll look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Nov  1 14:38:08 mail /kernel: aac0: **Monitor** Container 1 completed SCRUB task:&lt;br /&gt;
Nov  1 14:46:45 mail /kernel: aac0: **Monitor** Container 0 completed SCRUB task:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, putting it all together, after hitting enter to start the verify, you&#039;ll see:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 0&lt;br /&gt;
Executing: container scrub 0&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 1&lt;br /&gt;
Executing: container scrub 1&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
when done run:                                                                       &lt;br /&gt;
&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nov  1 10:32:46 mail /kernel: aac0: **Monitor** Container 0 started SCRUB task&lt;br /&gt;
Nov  1 10:32:47 mail /kernel: aac0: **Monitor** Container 1 started SCRUB task&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the scrubs (verifies) are complete (if the server has multiple logical drives, they run in parallel), exit the tail of the log file (^C) and run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows the diagnostic history; you&#039;re looking for the results of the most recent scrub:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[100]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[101]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[102]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[103]: Container 1 completed SCRUB task:&lt;br /&gt;
[104]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[105]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[106]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[107]: Container 0 completed SCRUB task:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
^C to exit the RAID CLI.&lt;br /&gt;
&lt;br /&gt;
If you see:&lt;br /&gt;
&lt;br /&gt;
 [104]: Mirror Scrub Container:0   ErrorsFound:5&lt;br /&gt;
&lt;br /&gt;
You&#039;ll want to rerun the verify on that drive until it shows 0, or perhaps replace the drive. You should be able to see from the output which drive had the problem.&lt;br /&gt;
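A small sketch for pulling nonzero ErrorsFound lines out of the history (sample lines embedded, one with an invented error count):&lt;br /&gt;

```shell
# Sketch: surface scrub results with a nonzero ErrorsFound count.
# Sample history lines embedded; the count of 5 is invented.
hist='
[100]: Mirror Scrub Container:1   ErrorsFound:0
[104]: Mirror Scrub Container:0   ErrorsFound:5
'
echo "$hist" | awk -F'ErrorsFound:' '/Mirror Scrub/ && $2 + 0 > 0 {print "rerun verify: " $0}'
```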
&lt;br /&gt;
Depending on the size and how busy the drive is, the verify can take anywhere from an hour to the better part of a day.&lt;br /&gt;
&lt;br /&gt;
You will notice that the diagnostic history is not shown on our modern Adaptec cards (i.e. any Adaptec card not in a Dell 2450). The reason is that the history is never cleared, so there&#039;s simply too much data to show and displaying it crashes the CLI, so don&#039;t bother trying. That does make it harder to see whether there are problems, so instead watch the scrub to make sure it goes to 100%. You will also notice that on some servers there&#039;s no tail of the messages log; again, that&#039;s because nothing is logged there about the completion of the scrub. In that case, go into the CLI and run &amp;lt;tt&amp;gt;task list&amp;lt;/tt&amp;gt; periodically to monitor scrub progress.&lt;br /&gt;
&lt;br /&gt;
See [[RAIC_CLI#Adaptec|Adaptec RAID CLI Reference]] for more details on how to use the CLI.&lt;br /&gt;
&lt;br /&gt;
=== DELL (LSI-based) SAS controllers ===&lt;br /&gt;
&lt;br /&gt;
Here&#039;s what the output looks like when running verify.sh on an LSI-based card:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jail2 /mnt/data2# sh /root/verify.sh&lt;br /&gt;
&lt;br /&gt;
Adapter #0&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 0&lt;br /&gt;
Device Id: 0&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018396142&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 0(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DESN&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 1&lt;br /&gt;
Device Id: 1&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018395db2&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 1(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DERV&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 2&lt;br /&gt;
Device Id: 2&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50006eece89&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 2(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300555SS     T2113LM4BFBZ&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 3&lt;br /&gt;
Device Id: 3&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50006eee035&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 3(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300555SS     T2113LM4BGF7&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 4&lt;br /&gt;
Device Id: 4&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50004bd7ea5&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 4(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300656SS     HS093QP0G8SW&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 5&lt;br /&gt;
Device Id: 5&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e01f1c4112&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 5(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MBA3300RC       D306BJ15P9201W06&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 0 (Target Id: 0)&lt;br /&gt;
Name:&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:139392MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 1 (Target Id: 1)&lt;br /&gt;
Name:MIRROR1&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 2 (Target Id: 2)&lt;br /&gt;
Name:MIRROR2&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
Battery FRU     : N/A&lt;br /&gt;
Battery Warning                  : Enabled&lt;br /&gt;
Memory Correctable Errors   : 0&lt;br /&gt;
Memory Uncorrectable Errors : 0&lt;br /&gt;
BBU             : Present&lt;br /&gt;
BBU                             : Yes&lt;br /&gt;
Cache When BBU Bad               : Disabled&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before pressing enter, here&#039;s what we&#039;re looking for:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 0&lt;br /&gt;
Device Id: 0&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018396142&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 0(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DESN&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the output shown for each physical drive in the system. We&#039;re looking to confirm that its Firmware state is &#039;&#039;Online&#039;&#039; and that Media Error Count, Other Error Count, and Predictive Failure Count are all zero (or near zero).&lt;br /&gt;
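With many drives it can help to scan that output mechanically. Here&#039;s a hedged sketch (sample fields embedded, including an invented failing drive in slot 1):&lt;br /&gt;

```shell
# Sketch: flag drives with nonzero error counters or a non-Online state.
# The slot-1 values are invented to show what a hit looks like.
pdinfo='
Slot Number: 0
Media Error Count: 0
Firmware state: Online
Slot Number: 1
Media Error Count: 7
Firmware state: Failed
'
echo "$pdinfo" | awk -F': ' '
  /^Slot Number/                      {slot = $2}
  /Error Count/ && $2 + 0 > 0         {print "slot " slot ": " $0}
  /^Firmware state/ && $2 != "Online" {print "slot " slot ": " $0}
'
```

Adjust the field names if your controller firmware formats the output differently.&lt;br /&gt;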
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 1 (Target Id: 1)&lt;br /&gt;
Name:MIRROR1&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the output for each logical drive. We&#039;re looking for State &#039;&#039;Optimal&#039;&#039;. Also confirm &amp;lt;tt&amp;gt;Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&amp;lt;/tt&amp;gt;.&lt;br /&gt;
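The same idea works for the logical drives; a sketch with an invented Degraded entry for illustration:&lt;br /&gt;

```shell
# Sketch: report any virtual disk whose State is not Optimal.
# The Degraded entry is invented for illustration.
vdinfo='
Virtual Disk: 0 (Target Id: 0)
State: Optimal
Virtual Disk: 1 (Target Id: 1)
State: Degraded
'
echo "$vdinfo" | awk '/^Virtual Disk:/ {vd = $3} /^State:/ && $2 != "Optimal" {print "VD " vd " not Optimal (" $2 ")"}'
```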
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
Battery FRU     : N/A&lt;br /&gt;
Battery Warning                  : Enabled&lt;br /&gt;
Memory Correctable Errors   : 0&lt;br /&gt;
Memory Uncorrectable Errors : 0&lt;br /&gt;
BBU             : Present&lt;br /&gt;
BBU                             : Yes&lt;br /&gt;
Cache When BBU Bad               : Disabled&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the battery is present and error-free.&lt;br /&gt;
&lt;br /&gt;
If all that checks out, you&#039;re ready to proceed with the verify. After you press enter, the verify starts and here&#039;s what you see:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Start Check Consistency on Virtual Drive 0 (target id: 0) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
Start Check Consistency on Virtual Drive 1 (target id: 1) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
Start Check Consistency on Virtual Drive 2 (target id: 2) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
  Check Consistency&lt;br /&gt;
&lt;br /&gt;
 Progress of Virtual Drives...&lt;br /&gt;
&lt;br /&gt;
  Virtual Drive #              Percent Complete                       Time Elps&lt;br /&gt;
          0         °°°°°°°°°°°°°°°°°°°°°°°00 %°°°°°°°°°°°°°°°°°°°°°°° 00:00:03&lt;br /&gt;
          1         °°°°°°°°°°°°°°°°°°°°°°°00 %°°°°°°°°°°°°°°°°°°°°°°° 00:00:02&lt;br /&gt;
          2         °°°°°°°°°°°°°°°°°°°°°°°00 %°°°°°°°°°°°°°°°°°°°°°°° 00:00:01&lt;br /&gt;
&lt;br /&gt;
    Press &amp;lt;ESC&amp;gt; key to quit...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The progress for each drive is displayed until all drives have completed the verify. We just want to make sure that each drive runs to completion. No follow-up is needed, though there is probably a log or history where we could get more detail.&lt;br /&gt;
&lt;br /&gt;
You will notice that jail7 does not run a verify; that&#039;s on purpose. The last time we tried, it crashed the system, so on jail7 the verify must be run from the BIOS (taking the system offline for a couple of hours).&lt;br /&gt;
&lt;br /&gt;
See [[RAIC_CLI#LSI|LSI RAID CLI Reference]] for more details on how to use the CLI.&lt;br /&gt;
&lt;br /&gt;
=== LSI-based controllers (megaraid) ===&lt;br /&gt;
&lt;br /&gt;
There is a CLI for this, but it&#039;s easier to use the curses GUI app, megamgr.&lt;br /&gt;
&lt;br /&gt;
Currently this applies only to these servers: virt15, virt16, and firewall2.&lt;br /&gt;
&lt;br /&gt;
To run:&lt;br /&gt;
&lt;br /&gt;
 # cd /usr/local/sbin/; megamgr&lt;br /&gt;
&lt;br /&gt;
Main menu:&lt;br /&gt;
&amp;lt;pre&amp;gt;²ÚÄÄManagement MenuÄÄ¿²&lt;br /&gt;
²³ Configure         ³²&lt;br /&gt;
²³ Initialize        ³²&lt;br /&gt;
²³ Objects           ³²&lt;br /&gt;
²³ Rebuild           ³²&lt;br /&gt;
²³ Check Consistency ³²&lt;br /&gt;
²³ Advanced Menu     ³²&lt;br /&gt;
²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before you check consistency, make sure the arrays are healthy.&lt;br /&gt;
&lt;br /&gt;
Objects -&amp;gt; Physical Drive&lt;br /&gt;
&lt;br /&gt;
Then look to make sure they&#039;re all &amp;lt;tt&amp;gt;ONLIN&amp;lt;/tt&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²ÚÄÄÄÄÄÄÄÄÄÄÄÄObjects - PHYSICAL DRIVE SELECTION MENUÄÄÄÄÄÄÄÄÄÄÄÄÄ¿²²²²²²&lt;br /&gt;
²²²²²²²²³                                                                ³²²²²²²&lt;br /&gt;
²²²²²²²²³                         Channel-1                              ³²²²²²²&lt;br /&gt;
²ÚÄÄMana³                     ID ÉÍÍÍÍÍÍÍÍÍÍÍÍÍÍ»x                       ³²²²²²²&lt;br /&gt;
²³ Confi³                       0º* ONLIN A01-01º                        ³²²²²²²&lt;br /&gt;
²³ Initi³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²³ Objec³                       1º* ONLIN A01-02º                        ³²²²²²²&lt;br /&gt;
²³ Rebui³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²³ Check³                       2º* ONLIN A02-01º                        ³²²²²²²&lt;br /&gt;
²³ Advan³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²ÀÄÄÄÄÄÄ³                       3º* ONLIN A02-02º                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                       4º* ONLIN A03-01º                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                       5º* ONLIN A03-02ºþ                       ³²²²²²²&lt;br /&gt;
²²²²²²²²³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                       6º*             º                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                        ÈÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¼x                       ³²²²²²²&lt;br /&gt;
²²²²²²²²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²&lt;br /&gt;
²²²²²²²²ÚÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄ¿²²²²²²²²&lt;br /&gt;
²²²²²²²²³Ch-1 ID-5  DISK      140013MB  SEAGATE  ST3146707LC      0003 ³²²²²²²²²&lt;br /&gt;
²²²²²²²²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once that&#039;s done, hit escape once, then the back arrow to move back to the Objects menu.&lt;br /&gt;
Then select Objects -&amp;gt; Logical Drive -&amp;gt; Logical Drive 1 -&amp;gt; Check Consistency -&amp;gt; YES&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²ÚÄLogical Drives(02)Ä¿²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²³ Logical Drive 1    ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²ÚÄÄÄÄObjectsÄÄÄ³ Logical Drive 2    ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²ÚÄÄManagemen³ Adapter      ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Configure ³ Logical Drive  ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Initialize³ Physical Drive ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Objects   ³ Channel        ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Rebuild   ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Check Consistency ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Advanced Menu     ³²²²²²²²ÚÄÄÄÄLogical Drive 1ÄÄÄÄÄ¿²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²³ Initialize    ÚÄCheck Consistency-1  ?Ä¿²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²³ Check Consiste³   YES                  ³²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²³ View/Update Pa³   NO                   ³²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²ÚÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄ¿²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²³Select YES Or NO³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then watch the progress. When done, escape back to Logical Drive and repeat for Logical Drive 2. If you ^C or accidentally escape out, you can come back in with the same commands and watch the progress again (it won&#039;t restart).&lt;br /&gt;
&lt;br /&gt;
You can exit megamgr by escaping out or with ^C.&lt;br /&gt;
&lt;br /&gt;
=== 3ware ===&lt;br /&gt;
&lt;br /&gt;
We are using 3ware controllers on backup1 &amp;amp; backup2. Running the verify script will give you different output based on the type of controller:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
backup2 /d2# sh /root/verify.sh&lt;br /&gt;
Controller: c0&lt;br /&gt;
-------------&lt;br /&gt;
Driver:   1.50.01.002&lt;br /&gt;
Model:    7500-8&lt;br /&gt;
FW:       FE7X 1.05.00.068&lt;br /&gt;
BIOS:     BE7X 1.08.00.048&lt;br /&gt;
Monitor:  ME7X 1.01.00.040&lt;br /&gt;
Serial #: F11605A3180172&lt;br /&gt;
PCB:      Rev3&lt;br /&gt;
PCHIP:    1.30-33&lt;br /&gt;
ACHIP:    3.20&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# of units: 3&lt;br /&gt;
        Unit 0: JBOD 186.31 GB ( 390721968 blocks): OK&lt;br /&gt;
        Unit 1: RAID 5 465.77 GB ( 976790016 blocks): DEGRADED&lt;br /&gt;
        Unit 5: RAID 5 698.65 GB ( 1465185024 blocks): DEGRADED&lt;br /&gt;
&lt;br /&gt;
# of ports: 8&lt;br /&gt;
        Port 0: WDC WD2000JB-00KFA0 WD-WCAMT1451690 186.31 GB (390721968 blocks): OK(unit 0)&lt;br /&gt;
        Port 1: WDC WD2500JB-00GVC0 WD-WCAL78219488 232.88 GB (488397168 blocks): OK(unit 1)&lt;br /&gt;
        Port 2: WDC WD2000  0.00 MB (0 blocks): OK(NO UNIT)&lt;br /&gt;
        Port 3: WDC WD2500JB-00GVC0 WD-WMAL73882417 232.88 GB (488397168 blocks): OK(unit 1)&lt;br /&gt;
        Port 4: WDC WD2000  0.00 MB (0 blocks): OK(NO UNIT)&lt;br /&gt;
        Port 5: WDC WD2500JB-00GVA0 WD-WMAL71338097 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
        Port 6: WDC WD2500JB-32EVA0 WD-WMAEH1301595 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
        Port 7: WDC WD2500JB-00GVC0 WD-WCAL78165566 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
Controller: c1&lt;br /&gt;
-------------&lt;br /&gt;
Driver:   1.50.01.002&lt;br /&gt;
Model:    7500-8&lt;br /&gt;
FW:       FE7X 1.05.00.068&lt;br /&gt;
BIOS:     BE7X 1.08.00.048&lt;br /&gt;
Monitor:  ME7X 1.01.00.040&lt;br /&gt;
Serial #: F11605A3180167&lt;br /&gt;
PCB:      Rev3&lt;br /&gt;
PCHIP:    1.30-33&lt;br /&gt;
ACHIP:    3.20&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# of units: 2&lt;br /&gt;
        Unit 0: RAID 5 698.65 GB ( 1465185024 blocks): OK&lt;br /&gt;
        Unit 4: RAID 5 698.65 GB ( 1465185024 blocks): OK&lt;br /&gt;
&lt;br /&gt;
# of ports: 8&lt;br /&gt;
        Port 0: WDC WD2500JB-00GVA0 WD-WMAL71301258 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 1: WDC WD2500JB-00GVA0 WD-WMAL71322705 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 2: WDC WD2500JB-00GVA0 WD-WMAL71945050 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 3: WDC WD2500JB-00GVA0 WD-WMAL71316201 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 4: WDC WD2500JB-00GVC0 WD-WCAL78323749 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
        Port 5: WDC WD3200AAJB-00J3A0 WD-WCAV2V689068 298.09 GB (625142448 blocks): OK(unit 4)&lt;br /&gt;
        Port 6: WDC WD2500JB-00GVC0 WD-WCAL78234420 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
        Port 7: WDC WD2500JB-00GVC0 WD-WCAL78592213 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
backup2 /d2#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On backup2, the script only reports status and does not start a verify; confirm that every unit and port shows OK.&lt;br /&gt;
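A sketch for flagging units that aren&#039;t OK in that 7500-series output (sample lines taken from the transcript above):&lt;br /&gt;

```shell
# Sketch: flag any unit line that does not end in OK.
# Sample lines copied from the transcript above.
units='
        Unit 0: JBOD 186.31 GB ( 390721968 blocks): OK
        Unit 1: RAID 5 465.77 GB ( 976790016 blocks): DEGRADED
'
echo "$units" | awk '/Unit/ && !/: OK$/ {print "attention: " $0}'
```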
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@backup3 ~]# sh /root/verify.sh&lt;br /&gt;
/c2 Driver Version = 1.26.02.002&lt;br /&gt;
/c2 Model = 8006-2LP&lt;br /&gt;
/c2 Available Memory = 512KB&lt;br /&gt;
/c2 Firmware Version = FE8S 1.05.00.068&lt;br /&gt;
/c2 Bios Version = BE7X 1.08.00.048&lt;br /&gt;
/c2 Boot Loader Version = ME7X 1.01.00.040&lt;br /&gt;
/c2 Serial Number = L018501C6481395&lt;br /&gt;
/c2 PCB Version = Rev5&lt;br /&gt;
/c2 PCHIP Version = 1.30-66&lt;br /&gt;
/c2 ACHIP Version = 3.20&lt;br /&gt;
/c2 Total Optimal Units = 1&lt;br /&gt;
/c2 Not Optimal Units = 0&lt;br /&gt;
&lt;br /&gt;
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u1    RAID-1    OK             -       -       -       931.512   ON     -&lt;br /&gt;
&lt;br /&gt;
Port   Status           Unit   Size        Blocks        Serial&lt;br /&gt;
---------------------------------------------------------------&lt;br /&gt;
p0     OK               u1     931.51 GB   1953525168    WD-WMAW31148820&lt;br /&gt;
p1     OK               u1     931.51 GB   1953525168    WD-WCATR0277515&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ctl  Date                        Severity  Alarm Message&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Sending start verify message to /c2/u1 ... Done.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when done run:&lt;br /&gt;
tw_cli /c2 show alarms&lt;br /&gt;
&lt;br /&gt;
[root@backup3 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script starts the verify automatically; when it finishes, run &amp;lt;tt&amp;gt;tw_cli /c2 show alarms&amp;lt;/tt&amp;gt; as instructed to see the results of the verify.&lt;br /&gt;
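The contents of /root/verify.sh are not reproduced on this page, but based on the output above it presumably wraps commands along these lines (a hedged sketch, guarded so it degrades cleanly on a host without the 3ware CLI):&lt;br /&gt;

```shell
# Assumption: controller /c2 and unit u1 as in the backup3 output above.
# "show" dumps controller, unit and port state; "start verify" kicks off
# the verify; "show alarms" lists the results afterwards.
if command -v tw_cli >/dev/null; then
    tw_cli /c2 show
    tw_cli /c2/u1 start verify
    echo 'when done run: tw_cli /c2 show alarms'
else
    echo 'tw_cli not found; run this on backup3'
fi
```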
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@backup1 /data/deprecated]# sh /root/verify.sh&lt;br /&gt;
/c0 Driver Version = 2.26.02.010&lt;br /&gt;
/c0 Model = 9650SE-8LPML&lt;br /&gt;
/c0 Available Memory = 224MB&lt;br /&gt;
/c0 Firmware Version = FE9X 4.06.00.004&lt;br /&gt;
/c0 Bios Version = BE9X 4.05.00.015&lt;br /&gt;
/c0 Boot Loader Version = BL9X 3.08.00.001&lt;br /&gt;
/c0 Serial Number = L326025A8270177&lt;br /&gt;
/c0 PCB Version = Rev 032&lt;br /&gt;
/c0 PCHIP Version = 2.00&lt;br /&gt;
/c0 ACHIP Version = 1.90&lt;br /&gt;
/c0 Number of Ports = 8&lt;br /&gt;
/c0 Number of Drives = 6&lt;br /&gt;
/c0 Number of Units = 1&lt;br /&gt;
/c0 Total Optimal Units = 1&lt;br /&gt;
/c0 Not Optimal Units = 0&lt;br /&gt;
/c0 JBOD Export Policy = off&lt;br /&gt;
/c0 Disk Spinup Policy = 1&lt;br /&gt;
/c0 Spinup Stagger Time Policy (sec) = 1&lt;br /&gt;
/c0 Auto-Carving Policy = off&lt;br /&gt;
/c0 Auto-Carving Size = 2048 GB&lt;br /&gt;
/c0 Auto-Rebuild Policy = on&lt;br /&gt;
/c0 Controller Bus Type = PCIe&lt;br /&gt;
/c0 Controller Bus Width = 1 lane&lt;br /&gt;
/c0 Controller Bus Speed = 2.5 Gbps/lane&lt;br /&gt;
&lt;br /&gt;
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u0    RAID-5    OK             -       -       64K     4656.56   ON     ON&lt;br /&gt;
&lt;br /&gt;
Port   Status           Unit   Size        Blocks        Serial&lt;br /&gt;
---------------------------------------------------------------&lt;br /&gt;
p0     OK               u0     931.51 GB   1953525168    9QJ1Y017&lt;br /&gt;
p1     DEVICE-ERROR     u0     931.51 GB   1953525168    9QJ1ZN07&lt;br /&gt;
p2     OK               u0     931.51 GB   1953525168    9QJ2XK1R&lt;br /&gt;
p3     OK               u0     931.51 GB   1953525168    9QJ2010B&lt;br /&gt;
p4     OK               u0     1.36 TB     2930277168    6XW0L36T&lt;br /&gt;
p5     OK               u0     931.51 GB   1953525168    WD-WMATV2444836&lt;br /&gt;
p6     NOT-PRESENT      -      -           -             -&lt;br /&gt;
p7     NOT-PRESENT      -      -           -             -&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ctl  Date                        Severity  Alarm Message&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
c0   [Sat May 12 11:27:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x6AE571C&lt;br /&gt;
c0   [Sat May 12 19:16:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x40E62A23&lt;br /&gt;
c0   [Sat May 12 21:40:56 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon May 14 00:53:53 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFA7&lt;br /&gt;
c0   [Mon May 14 00:58:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAA&lt;br /&gt;
c0   [Mon May 14 04:35:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x8FEF2CF&lt;br /&gt;
c0   [Mon May 14 04:38:22 2012]  WARNING   Sector repair completed: port=0, LBA=0x8FEF2D1&lt;br /&gt;
c0   [Tue May 15 22:53:46 2012]  WARNING   Sector repair completed: port=0, LBA=0x13C2622&lt;br /&gt;
c0   [Wed May 16 00:39:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A67F&lt;br /&gt;
c0   [Wed May 16 00:39:37 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A685&lt;br /&gt;
c0   [Wed May 16 00:47:18 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A687&lt;br /&gt;
c0   [Sat May 19 00:01:44 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat May 19 04:46:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A68E&lt;br /&gt;
c0   [Sat May 19 13:37:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAC&lt;br /&gt;
c0   [Sat May 19 13:37:28 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAE&lt;br /&gt;
c0   [Sat May 19 13:37:47 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFB1&lt;br /&gt;
c0   [Sat May 19 13:38:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFB3&lt;br /&gt;
c0   [Sat May 19 21:47:45 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed May 23 12:21:41 2012]  INFO      Cache synchronization completed: unit=0&lt;br /&gt;
c0   [Fri May 25 00:08:19 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA76C&lt;br /&gt;
c0   [Fri May 25 00:08:34 2012]  WARNING   Sector repair completed: port=0, LBA=0x12E4901&lt;br /&gt;
c0   [Fri May 25 00:09:33 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA773&lt;br /&gt;
c0   [Fri May 25 00:39:12 2012]  WARNING   Sector repair completed: port=0, LBA=0x42C597B&lt;br /&gt;
c0   [Sat May 26 00:01:45 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat May 26 00:42:05 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1AC&lt;br /&gt;
c0   [Sat May 26 00:51:43 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1AE&lt;br /&gt;
c0   [Sat May 26 01:54:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x2F0D302&lt;br /&gt;
c0   [Sat May 26 02:06:38 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA777&lt;br /&gt;
c0   [Sat May 26 02:07:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x12E48FE&lt;br /&gt;
c0   [Sat May 26 04:20:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x2F0D306&lt;br /&gt;
c0   [Sat May 26 04:32:58 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1B1&lt;br /&gt;
c0   [Sat May 26 04:33:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1B3&lt;br /&gt;
c0   [Sat May 26 04:33:44 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1BA&lt;br /&gt;
c0   [Sat May 26 05:24:07 2012]  WARNING   Sector repair completed: port=1, LBA=0x3F83862&lt;br /&gt;
c0   [Sat May 26 05:25:09 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 06:08:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x4CDC6A2&lt;br /&gt;
c0   [Sat May 26 09:49:35 2012]  WARNING   Sector repair completed: port=1, LBA=0x6CACD4A&lt;br /&gt;
c0   [Sat May 26 18:10:44 2012]  WARNING   Sector repair completed: port=1, LBA=0x18F425EA&lt;br /&gt;
c0   [Sat May 26 19:45:40 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:22:52 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:15 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:22 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:35 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:41 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:49 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:57 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:24:02 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:54:41 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 22:00:30 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Jun  2 00:01:43 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun  2 00:30:17 2012]  WARNING   Sector repair completed: port=0, LBA=0x2B911E4&lt;br /&gt;
c0   [Sat Jun  2 00:50:57 2012]  WARNING   Sector repair completed: port=0, LBA=0x5A807CA6&lt;br /&gt;
c0   [Sat Jun  2 04:13:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x2D18291&lt;br /&gt;
c0   [Sat Jun  2 04:13:35 2012]  WARNING   Sector repair completed: port=0, LBA=0x2D1829F&lt;br /&gt;
c0   [Sat Jun  2 21:48:02 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Jun  4 04:40:34 2012]  WARNING   Sector repair completed: port=1, LBA=0x4AF8098F&lt;br /&gt;
c0   [Tue Jun  5 00:28:19 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5CD&lt;br /&gt;
c0   [Tue Jun  5 00:33:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5CF&lt;br /&gt;
c0   [Thu Jun  7 00:34:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B5F&lt;br /&gt;
c0   [Thu Jun  7 00:38:50 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B61&lt;br /&gt;
c0   [Fri Jun  8 00:07:13 2012]  WARNING   Sector repair completed: port=0, LBA=0xC131F6B&lt;br /&gt;
c0   [Sat Jun  9 00:01:41 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun  9 00:29:11 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C7614D&lt;br /&gt;
c0   [Sat Jun  9 00:38:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C76152&lt;br /&gt;
c0   [Sat Jun  9 04:02:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5D1&lt;br /&gt;
c0   [Sat Jun  9 04:02:52 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5D3&lt;br /&gt;
c0   [Sat Jun  9 04:07:32 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E12&lt;br /&gt;
c0   [Sat Jun  9 04:07:57 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E15&lt;br /&gt;
c0   [Sat Jun  9 04:08:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E17&lt;br /&gt;
c0   [Sat Jun  9 04:08:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E19&lt;br /&gt;
c0   [Sat Jun  9 04:15:04 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B64&lt;br /&gt;
c0   [Sat Jun  9 04:15:26 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B66&lt;br /&gt;
c0   [Sat Jun  9 04:15:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B68&lt;br /&gt;
c0   [Sat Jun  9 04:15:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B6C&lt;br /&gt;
c0   [Sat Jun  9 04:16:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B6E&lt;br /&gt;
c0   [Sat Jun  9 21:48:52 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Jun 14 00:40:10 2012]  WARNING   Sector repair completed: port=0, LBA=0x334F14B&lt;br /&gt;
c0   [Sat Jun 16 00:01:38 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 16 21:16:19 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Jun 19 02:03:43 2012]  WARNING   Sector repair completed: port=1, LBA=0xFE41EAD&lt;br /&gt;
c0   [Wed Jun 20 02:30:02 2012]  WARNING   Sector repair completed: port=1, LBA=0xD99145C&lt;br /&gt;
c0   [Sat Jun 23 00:01:36 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 23 04:27:04 2012]  WARNING   Sector repair completed: port=1, LBA=0x2FAD311&lt;br /&gt;
c0   [Sat Jun 23 06:52:38 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC8D&lt;br /&gt;
c0   [Sat Jun 23 06:53:03 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC91&lt;br /&gt;
c0   [Sat Jun 23 06:53:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC94&lt;br /&gt;
c0   [Sat Jun 23 17:00:22 2012]  WARNING   Sector repair completed: port=1, LBA=0xF9AC7C9&lt;br /&gt;
c0   [Sat Jun 23 21:15:19 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Jun 30 00:01:34 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 30 05:24:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FAA9E7&lt;br /&gt;
c0   [Sat Jun 30 14:49:39 2012]  WARNING   Sector repair completed: port=1, LBA=0x869931C&lt;br /&gt;
c0   [Sat Jun 30 21:31:05 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Jul  3 03:40:25 2012]  WARNING   Sector repair completed: port=1, LBA=0xD36C7F7&lt;br /&gt;
c0   [Fri Jul  6 02:50:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x3562470&lt;br /&gt;
c0   [Fri Jul  6 22:18:26 2012]  WARNING   Sector repair completed: port=1, LBA=0x3563173&lt;br /&gt;
c0   [Sat Jul  7 00:01:31 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul  7 00:50:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x76EE88&lt;br /&gt;
c0   [Sat Jul  7 00:50:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x76EE8F&lt;br /&gt;
c0   [Sat Jul  7 21:39:36 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sun Jul  8 02:51:05 2012]  WARNING   Sector repair completed: port=0, LBA=0x67759D&lt;br /&gt;
c0   [Sun Jul  8 02:53:55 2012]  WARNING   Sector repair completed: port=0, LBA=0x67759B&lt;br /&gt;
c0   [Tue Jul 10 16:17:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x15C8C695&lt;br /&gt;
c0   [Wed Jul 11 22:51:22 2012]  WARNING   Sector repair completed: port=1, LBA=0x355BBD0&lt;br /&gt;
c0   [Sat Jul 14 00:01:28 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 14 01:33:40 2012]  WARNING   Sector repair completed: port=1, LBA=0x1333BCF4&lt;br /&gt;
c0   [Sat Jul 14 03:36:23 2012]  WARNING   Sector repair completed: port=1, LBA=0x2174773&lt;br /&gt;
c0   [Sat Jul 14 11:26:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x7429AB7&lt;br /&gt;
c0   [Sat Jul 14 16:53:50 2012]  WARNING   Sector repair completed: port=1, LBA=0xA17EB3F&lt;br /&gt;
c0   [Sat Jul 14 21:19:25 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Jul 18 05:08:47 2012]  WARNING   Sector repair completed: port=1, LBA=0x17D62EDC&lt;br /&gt;
c0   [Wed Jul 18 05:14:15 2012]  WARNING   Sector repair completed: port=1, LBA=0x17D62EE1&lt;br /&gt;
c0   [Thu Jul 19 03:24:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x7733C3D&lt;br /&gt;
c0   [Thu Jul 19 03:25:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x773CEA5&lt;br /&gt;
c0   [Thu Jul 19 03:28:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x7733C42&lt;br /&gt;
c0   [Thu Jul 19 03:28:41 2012]  WARNING   Sector repair completed: port=0, LBA=0x773CEAF&lt;br /&gt;
c0   [Sat Jul 21 00:01:26 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 21 03:07:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC6936&lt;br /&gt;
c0   [Sat Jul 21 03:07:52 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC6938&lt;br /&gt;
c0   [Sat Jul 21 03:08:11 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC693A&lt;br /&gt;
c0   [Sat Jul 21 16:43:56 2012]  WARNING   Sector repair completed: port=0, LBA=0xD04C914&lt;br /&gt;
c0   [Sat Jul 21 16:45:31 2012]  WARNING   Sector repair completed: port=1, LBA=0xD456973&lt;br /&gt;
c0   [Sat Jul 21 21:14:29 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Jul 25 03:37:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x1F8E6C43&lt;br /&gt;
c0   [Sat Jul 28 00:01:24 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 28 01:45:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x11584AD&lt;br /&gt;
c0   [Sat Jul 28 18:54:25 2012]  WARNING   Sector repair completed: port=1, LBA=0x447C3E6C&lt;br /&gt;
c0   [Sat Jul 28 21:13:46 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug  1 03:20:11 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FEF&lt;br /&gt;
c0   [Fri Aug  3 00:50:03 2012]  WARNING   Sector repair completed: port=0, LBA=0xCED0ACA&lt;br /&gt;
c0   [Sat Aug  4 00:01:22 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug  4 00:52:51 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF3&lt;br /&gt;
c0   [Sat Aug  4 00:53:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF5&lt;br /&gt;
c0   [Sat Aug  4 00:53:33 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF7&lt;br /&gt;
c0   [Sat Aug  4 00:53:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF9&lt;br /&gt;
c0   [Sat Aug  4 00:54:00 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFB&lt;br /&gt;
c0   [Sat Aug  4 00:54:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFD&lt;br /&gt;
c0   [Sat Aug  4 00:54:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFF&lt;br /&gt;
c0   [Sat Aug  4 04:43:12 2012]  WARNING   Sector repair completed: port=1, LBA=0x16974289&lt;br /&gt;
c0   [Sat Aug  4 04:58:17 2012]  WARNING   Sector repair completed: port=1, LBA=0x1697428E&lt;br /&gt;
c0   [Sat Aug  4 20:54:53 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug  8 03:21:55 2012]  ERROR     Drive timeout detected: port=1&lt;br /&gt;
c0   [Wed Aug  8 15:31:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x1A366CD3&lt;br /&gt;
c0   [Sat Aug 11 00:01:21 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 11 20:40:51 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Aug 16 05:10:55 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C22593&lt;br /&gt;
c0   [Sat Aug 18 00:01:18 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 18 03:00:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C225A5&lt;br /&gt;
c0   [Sat Aug 18 03:43:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x23EE91E&lt;br /&gt;
c0   [Sat Aug 18 03:43:23 2012]  WARNING   Sector repair completed: port=1, LBA=0x23EE920&lt;br /&gt;
c0   [Sat Aug 18 17:00:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x137D066A&lt;br /&gt;
c0   [Sat Aug 18 17:00:29 2012]  WARNING   Sector repair completed: port=1, LBA=0x137D066D&lt;br /&gt;
c0   [Sat Aug 18 21:13:01 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug 22 01:36:08 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A0F&lt;br /&gt;
c0   [Wed Aug 22 01:37:42 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A13&lt;br /&gt;
c0   [Fri Aug 24 04:01:36 2012]  WARNING   Sector repair completed: port=1, LBA=0x55C1A5DF&lt;br /&gt;
c0   [Fri Aug 24 05:02:06 2012]  WARNING   Sector repair completed: port=1, LBA=0xCE3378A&lt;br /&gt;
c0   [Sat Aug 25 00:01:17 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 25 00:31:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x50F65D&lt;br /&gt;
c0   [Sat Aug 25 00:39:52 2012]  WARNING   Sector repair completed: port=0, LBA=0x678FF4&lt;br /&gt;
c0   [Sat Aug 25 03:43:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A15&lt;br /&gt;
c0   [Sat Aug 25 03:43:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A19&lt;br /&gt;
c0   [Sat Aug 25 03:43:58 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A1B&lt;br /&gt;
c0   [Sat Aug 25 03:44:30 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A21&lt;br /&gt;
c0   [Sat Aug 25 20:58:14 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug 29 04:57:15 2012]  WARNING   Sector repair completed: port=1, LBA=0xF3957EB&lt;br /&gt;
c0   [Sat Sep  1 00:01:15 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep  1 03:21:52 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC86&lt;br /&gt;
c0   [Sat Sep  1 03:22:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC88&lt;br /&gt;
c0   [Sat Sep  1 03:22:34 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC8A&lt;br /&gt;
c0   [Sat Sep  1 03:22:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC8C&lt;br /&gt;
c0   [Sat Sep  1 17:17:22 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD1&lt;br /&gt;
c0   [Sat Sep  1 17:17:45 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD3&lt;br /&gt;
c0   [Sat Sep  1 17:18:04 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD5&lt;br /&gt;
c0   [Sat Sep  1 21:36:56 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Sep  6 00:07:30 2012]  WARNING   Sector repair completed: port=0, LBA=0xDA3C64B&lt;br /&gt;
c0   [Thu Sep  6 00:32:56 2012]  WARNING   Sector repair completed: port=1, LBA=0x6BBA816&lt;br /&gt;
c0   [Sat Sep  8 00:01:13 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep  8 00:09:56 2012]  WARNING   Sector repair completed: port=0, LBA=0xDEBC958&lt;br /&gt;
c0   [Sat Sep  8 04:38:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x38D254F&lt;br /&gt;
c0   [Sat Sep  8 20:44:50 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Sep 10 01:26:34 2012]  WARNING   Sector repair completed: port=1, LBA=0xFFD8D5E&lt;br /&gt;
c0   [Wed Sep 12 00:33:48 2012]  WARNING   Sector repair completed: port=1, LBA=0xE8DB928&lt;br /&gt;
c0   [Wed Sep 12 00:36:33 2012]  WARNING   Sector repair completed: port=1, LBA=0x6D49411&lt;br /&gt;
c0   [Fri Sep 14 01:59:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x1467F1C&lt;br /&gt;
c0   [Fri Sep 14 02:08:27 2012]  WARNING   Sector repair completed: port=1, LBA=0x14C8ABD&lt;br /&gt;
c0   [Fri Sep 14 03:54:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x1580C915&lt;br /&gt;
c0   [Sat Sep 15 00:01:11 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 15 02:38:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178973&lt;br /&gt;
c0   [Sat Sep 15 02:59:02 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178975&lt;br /&gt;
c0   [Sat Sep 15 04:47:08 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA0356&lt;br /&gt;
c0   [Sat Sep 15 04:47:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA0359&lt;br /&gt;
c0   [Sat Sep 15 10:41:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x6DFD1EC&lt;br /&gt;
c0   [Sat Sep 15 13:25:23 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD100&lt;br /&gt;
c0   [Sat Sep 15 13:25:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD104&lt;br /&gt;
c0   [Sat Sep 15 13:25:54 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD106&lt;br /&gt;
c0   [Sat Sep 15 17:10:50 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178977&lt;br /&gt;
c0   [Sat Sep 15 20:59:57 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Sep 18 01:17:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x803B05B&lt;br /&gt;
c0   [Sat Sep 22 00:01:10 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 22 20:54:31 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Sep 25 01:56:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x26E3909&lt;br /&gt;
c0   [Sat Sep 29 00:01:08 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 29 02:04:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x146AC03&lt;br /&gt;
c0   [Sat Sep 29 10:58:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x6D4EB0E&lt;br /&gt;
c0   [Sat Sep 29 10:59:02 2012]  WARNING   Sector repair completed: port=0, LBA=0x6D4EB14&lt;br /&gt;
c0   [Sat Sep 29 11:22:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x6F79623&lt;br /&gt;
c0   [Sat Sep 29 13:50:48 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D65E&lt;br /&gt;
c0   [Sat Sep 29 13:51:11 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D661&lt;br /&gt;
c0   [Sat Sep 29 13:51:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D663&lt;br /&gt;
c0   [Sat Sep 29 20:57:34 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Oct  1 04:47:24 2012]  WARNING   Sector repair completed: port=0, LBA=0xC5BC6F2&lt;br /&gt;
c0   [Tue Oct  2 02:00:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x1547667&lt;br /&gt;
c0   [Tue Oct  2 02:01:56 2012]  WARNING   Sector repair completed: port=0, LBA=0x154766F&lt;br /&gt;
c0   [Tue Oct  2 05:02:31 2012]  WARNING   Sector repair completed: port=1, LBA=0xD67D054&lt;br /&gt;
c0   [Tue Oct  2 05:04:14 2012]  WARNING   Sector repair completed: port=1, LBA=0xD67D056&lt;br /&gt;
c0   [Wed Oct  3 01:22:12 2012]  WARNING   Sector repair completed: port=1, LBA=0x12AAF8CA&lt;br /&gt;
c0   [Thu Oct  4 04:29:22 2012]  WARNING   Sector repair completed: port=0, LBA=0x13E6F992&lt;br /&gt;
c0   [Thu Oct  4 05:10:51 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C252A4&lt;br /&gt;
c0   [Sat Oct  6 00:01:07 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct  6 19:41:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x5A5C3AE8&lt;br /&gt;
c0   [Sat Oct  6 21:01:05 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Oct  8 00:32:06 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C60D3E&lt;br /&gt;
c0   [Tue Oct  9 03:51:03 2012]  WARNING   Sector repair completed: port=1, LBA=0x89B5EC9&lt;br /&gt;
c0   [Thu Oct 11 04:21:17 2012]  WARNING   Sector repair completed: port=1, LBA=0x13F85833&lt;br /&gt;
c0   [Sat Oct 13 00:01:05 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 13 05:12:40 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA5134&lt;br /&gt;
c0   [Sat Oct 13 21:08:35 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Oct 16 03:53:50 2012]  WARNING   Sector repair completed: port=1, LBA=0x148AA1BD&lt;br /&gt;
c0   [Thu Oct 18 03:20:30 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C8DABCB&lt;br /&gt;
c0   [Thu Oct 18 04:52:50 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879057&lt;br /&gt;
c0   [Sat Oct 20 00:01:04 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 20 02:19:25 2012]  WARNING   Sector repair completed: port=1, LBA=0x174B012&lt;br /&gt;
c0   [Sat Oct 20 03:41:38 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D93B&lt;br /&gt;
c0   [Sat Oct 20 03:42:01 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D93D&lt;br /&gt;
c0   [Sat Oct 20 03:42:40 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D940&lt;br /&gt;
c0   [Sat Oct 20 03:42:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D942&lt;br /&gt;
c0   [Sat Oct 20 03:43:12 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D944&lt;br /&gt;
c0   [Sat Oct 20 03:43:26 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D948&lt;br /&gt;
c0   [Sat Oct 20 16:37:52 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879060&lt;br /&gt;
c0   [Sat Oct 20 16:38:15 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879062&lt;br /&gt;
c0   [Sat Oct 20 21:00:18 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Oct 20 23:49:01 2012]  WARNING   Sector repair completed: port=1, LBA=0x4473E908&lt;br /&gt;
c0   [Sun Oct 21 03:42:26 2012]  WARNING   Sector repair completed: port=1, LBA=0x175BADD5&lt;br /&gt;
c0   [Tue Oct 23 01:09:04 2012]  WARNING   Sector repair completed: port=1, LBA=0x6E524860&lt;br /&gt;
c0   [Fri Oct 26 03:21:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x802C61&lt;br /&gt;
c0   [Fri Oct 26 04:22:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x176353CD&lt;br /&gt;
c0   [Sat Oct 27 00:01:03 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 27 00:49:35 2012]  WARNING   Sector repair completed: port=0, LBA=0x802C65&lt;br /&gt;
c0   [Sat Oct 27 17:02:24 2012]  WARNING   Sector repair completed: port=1, LBA=0xC1FF26D&lt;br /&gt;
c0   [Sat Oct 27 17:09:06 2012]  WARNING   Sector repair completed: port=0, LBA=0xDF621AD&lt;br /&gt;
c0   [Sat Oct 27 21:30:57 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Oct 30 00:20:46 2012]  WARNING   Sector repair completed: port=0, LBA=0xE9FE2AB&lt;br /&gt;
c0   [Wed Oct 31 02:02:03 2012]  WARNING   Sector repair completed: port=0, LBA=0x1460C25&lt;br /&gt;
c0   [Wed Oct 31 02:04:05 2012]  WARNING   Sector repair completed: port=0, LBA=0x1460C28&lt;br /&gt;
c0   [Thu Nov  1 00:48:34 2012]  WARNING   Sector repair completed: port=1, LBA=0xA7C92BE&lt;br /&gt;
c0   [Thu Nov  1 05:04:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C252C2&lt;br /&gt;
&lt;br /&gt;
[root@backup1 /data/deprecated]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for failed drives and errors. From the output above, the drives on ports 0 and 1 both need replacing; port 1 is already reporting DEVICE-ERROR and a drive timeout, yet the RAID array is, remarkably, still healthy. The scheduled automatic verifies are also visible in the alarm log.&lt;br /&gt;
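Spotting the bad drives in the port table can also be scripted. A minimal sketch, using sample lines copied from the backup1 listing above (on a live box you would pipe in the tw_cli show output instead):&lt;br /&gt;

```shell
# Print every port whose status is neither OK nor an empty slot
# (NOT-PRESENT). Sample lines are canned from the listing above.
printf '%s\n' \
  'p0     OK               u0     931.51 GB   1953525168    9QJ1Y017' \
  'p1     DEVICE-ERROR     u0     931.51 GB   1953525168    9QJ1ZN07' \
  'p6     NOT-PRESENT      -      -           -             -' |
awk '$1 ~ /^p[0-9]+$/ { if ($2 != "OK") if ($2 != "NOT-PRESENT") print $1, $2 }'
```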
&lt;br /&gt;
Note: when rebuilding a degraded unit, the CLI shows no rebuild progress (%RCmpl stays at 0):&lt;br /&gt;
&amp;lt;pre&amp;gt;Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u0    RAID-5    REBUILDING     0       -       64K     4656.56   OFF    ON&amp;lt;/pre&amp;gt;&lt;br /&gt;
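Since the percentage never moves, a workable check is simply to poll the unit line and wait for the status to flip back to OK. A hedged sketch; the demo parses a canned copy of the REBUILDING line above, where on a live box you would substitute the tw_cli show output:&lt;br /&gt;

```shell
# Pull the Status column ($3) for unit u0 and report whether the
# rebuild has finished. Controller/unit names are from the example above.
status=$(printf '%s\n' \
  'u0    RAID-5    REBUILDING     0       -       64K     4656.56   OFF    ON' |
  awk '$1 == "u0" { print $3 }')
if [ "$status" = "OK" ]; then
    echo 'rebuild finished'
else
    echo "still $status"
fi
```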
&lt;br /&gt;
&lt;br /&gt;
=== areca ===&lt;br /&gt;
&lt;br /&gt;
We are using an Areca controller on backup3 (note the shell prompt below shows newbackup3).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@newbackup3 ~]# sh /root/verify.sh&lt;br /&gt;
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State&lt;br /&gt;
===============================================================================&lt;br /&gt;
  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(19.7%)&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
 #  Name             Disks TotalCap  FreeCap DiskChannels       State&lt;br /&gt;
===============================================================================&lt;br /&gt;
 1  Raid Set # 00        6 6000.0GB    0.0GB 123456             Checking&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
Date-Time            Device           Event Type            Elapsed Time Errors&lt;br /&gt;
===============================================================================&lt;br /&gt;
2012-12-05 20:40:58  ARC-1160-VOL#00  Start Checking&lt;br /&gt;
2012-12-01 05:06:04  ARC-1160-VOL#00  Complete Init         027:30:45&lt;br /&gt;
2012-11-30 01:35:19  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-06 01:34:52  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-30 01:33:36  ARC-1160-VOL#00  Stop Initialization   000:31:48&lt;br /&gt;
2012-11-30 01:01:47  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-06 00:58:13  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-30 00:57:26  ARC-1160-VOL#00  Stop Initialization   000:57:07&lt;br /&gt;
2012-11-30 00:00:19  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 23:56:48  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:52:58  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:50:14  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:43:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 23:10:07  ARC-1160-VOL#00  Stop Initialization   000:00:56&lt;br /&gt;
2012-11-29 23:09:11  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 23:08:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 23:08:10  ARC-1160-VOL#00  Stop Initialization   000:20:41&lt;br /&gt;
2012-11-29 22:47:29  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 22:46:59  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:45:55  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:44:53  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:42:06  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:40:50  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 22:40:04  ARC-1160-VOL#00  Stop Initialization   000:24:25&lt;br /&gt;
2012-11-29 22:15:38  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 22:15:11  000:000001215B00 Restart Init LBA Point&lt;br /&gt;
2026-08-05 22:15:10  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 21:56:38  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 21:56:12  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 21:56:04  IDE Channel #03  Device Inserted&lt;br /&gt;
2012-11-29 21:55:13  IDE Channel #04  Device Inserted&lt;br /&gt;
2012-11-29 21:55:03  IDE Channel #02  Device Inserted&lt;br /&gt;
2026-08-05 21:53:09  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:51:46  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:49:56  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:48:29  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:46:29  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:44:49  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:43:01  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:36:25  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:31:18  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:30:08  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:08:40  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:06:11  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:05:14  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:03:58  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:00:56  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:57:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:56:15  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:55:05  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 17:24:36  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 17:22:43  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:50:42  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:47:33  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:43:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:18:52  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:17:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:13:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:10:26  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:09:23  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:08:09  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:07:12  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:05:51  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:04:27  H/W MONITOR      Raid Powered On&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for failed drives and errors. &lt;br /&gt;
&lt;br /&gt;
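To scan an event log like the one above for trouble without reading every line, a small filter helps (a hypothetical helper, not an existing script; the keyword list is an assumption):&lt;br /&gt;

```shell
# Hypothetical helper: filter RAID event-log output for lines that
# indicate trouble. The keyword list is an assumption, not exhaustive.
raid_errors() { grep -Ei 'fail|degrad|error|timeout'; }
# usage: cli64 event info | raid_errors
```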
When it proceeds to verifying, you can confirm with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@newbackup3 ~]# cli64 vsf info&lt;br /&gt;
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State&lt;br /&gt;
===============================================================================&lt;br /&gt;
  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(22.5%)&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
[root@newbackup3 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Update OS list ==&lt;br /&gt;
&lt;br /&gt;
# check for any new VZ templates we want to offer: &amp;lt;tt&amp;gt;vzup2date -z&amp;lt;/tt&amp;gt;&lt;br /&gt;
# see if there are any OSes we want to include in our colo install list. Update 2 places: &amp;lt;tt&amp;gt;signup/html/colo_quote.html&amp;lt;/tt&amp;gt; &amp;amp; &amp;lt;tt&amp;gt;signup/html/step1.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
# update the mgmt database (ref_templates table, ref_systems table).&lt;br /&gt;
&lt;br /&gt;
= Infrequent tasks =&lt;br /&gt;
&lt;br /&gt;
== Free up space on gateway ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;newgateway /var/spool# cd clientmqueue/&lt;br /&gt;
newgateway /var/spool/clientmqueue# sh&lt;br /&gt;
# for f in `ls`; do rm $f; done&lt;br /&gt;
exit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Free up space on mail ==&lt;br /&gt;
&lt;br /&gt;
You can clear out root mail:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /var/log# ll -h /var/mail/root&lt;br /&gt;
-rw-------  1 root  mail    543K Dec 19 13:05 /var/mail/root&lt;br /&gt;
mail /var/log# rm /var/mail/root&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Or you can archive mail logs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /var/log# ls -l htt*&lt;br /&gt;
-rw-r--r--  1 root  wheel  297436931 Dec 19 13:26 httpd-access.log&lt;br /&gt;
-rw-r--r--  1 root  wheel    9824324 Jul  4 11:34 httpd-access.log.old.0.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    6884137 Mar 17  2012 httpd-access.log.old.1.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   18557444 Dec  3  2009 httpd-access.log.old.10.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14740263 Jan  9  2007 httpd-access.log.old.11.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14209465 Nov 28  2007 httpd-access.log.old.12.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   16874396 Feb 19  2012 httpd-access.log.old.3.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14554859 Jul 22  2011 httpd-access.log.old.4.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10513227 Feb 18  2011 httpd-access.log.old.5.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    7201946 Oct 29  2010 httpd-access.log.old.6.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10062537 May  6  2010 httpd-access.log.old.7.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10157042 Aug 12  2010 httpd-access.log.old.8.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   11909534 Mar  4  2010 httpd-access.log.old.9.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   59030930 Dec 19 13:01 httpd-error.log&lt;br /&gt;
-rw-r--r--  1 root  wheel    3413134 Mar  4  2010 httpd-error.log.0.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel     795515 May  1  2007 httpd-error.log.1.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    1142153 Nov 30  2007 httpd-error.log.2.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    2325801 Feb 18  2011 httpd-error.log.gz&lt;br /&gt;
&lt;br /&gt;
mail /var/log# sh&lt;br /&gt;
# for f in 12 11 10 9 8 7 6 5 4 3 2 1 0; do g=`echo $f+1|bc`; mv httpd-access.log.old.$f.gz httpd-access.log.old.$g.gz; done&lt;br /&gt;
# mv httpd-access.log httpd-access.log.old.0&lt;br /&gt;
# touch httpd-access.log&lt;br /&gt;
# apachectl restart&lt;br /&gt;
# gzip httpd-access.log.old.0&lt;br /&gt;
&lt;br /&gt;
# for f in 2 1 0; do g=`echo $f+1|bc`; mv httpd-error.log.$f.gz httpd-error.log.$g.gz; done&lt;br /&gt;
# mv httpd-error.log httpd-error.log.0&lt;br /&gt;
# touch httpd-error.log&lt;br /&gt;
# apachectl restart&lt;br /&gt;
# gzip httpd-error.log.0&lt;br /&gt;
# exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Free up space on bwdb2 ==&lt;br /&gt;
&lt;br /&gt;
You can either remove items from &amp;lt;tt&amp;gt;/usr/home/archive&amp;lt;/tt&amp;gt; or you can scp them to &amp;lt;tt&amp;gt;backup3:/data/bwdb2/archive&amp;lt;/tt&amp;gt;.&lt;br /&gt;
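A sketch of how that cleanup might be scripted (a hypothetical helper, not an existing script; the 180-day cutoff is an assumption):&lt;br /&gt;

```shell
# Hypothetical: list archive files older than a cutoff so they can be
# reviewed before scp'ing to backup3 and removing.
list_old_archives() {
    dir="$1"; days="${2:-180}"      # ~6 months by default (assumption)
    find "$dir" -type f -mtime +"$days"
}
# usage: list_old_archives /usr/home/archive
# then, per file f: scp "$f" backup3:/data/bwdb2/archive/ && rm "$f"
```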
&lt;br /&gt;
== Free up space on backup1 ==&lt;br /&gt;
&lt;br /&gt;
backup1 is our primary customer backup system. As usage grows over time, it needs to be regularly purged of old files. The easiest way to do this is by removing deprecated files, which mostly consist of cancelled customers or temporary dump/storage files (created during dumps/restores). Our standard policy is to hang onto cancelled customers&#039; files for six months, after which we remove them (as far as customers know their data is purged immediately, but we hang onto it just in case; in some cases we cancel a server due to non-payment, so this makes it easy to restore their system). To find files to remove:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 ~]# cd /data/deprecated/&lt;br /&gt;
[root@backup1 /data/deprecated]# ls&lt;br /&gt;
2101-migrated-20120317.tgz                old-683-cxld-20121021.tgz&lt;br /&gt;
69.55.230.2-wwwbackup                     old-744-cxld-20120708.tgz&lt;br /&gt;
991-DONTDELETE.tgz                        old-809-cxld-20120609.tgz&lt;br /&gt;
archive-col02050-mdfile-cxld-20120409.gz  old-854-cxld-20120621.tgz&lt;br /&gt;
col01371.tgz                              old-931-cxld-20060513.tgz&lt;br /&gt;
deleteme_ubuntu-10.10-x86_20111205        old-col00123-mdfile-noarchive-20120417.gz&lt;br /&gt;
jail10_old                                old-col00147-vnfile-cxld-20120828.gz&lt;br /&gt;
jail14_rsync_old                          old-col00419-dump-cxld-20120224.gz&lt;br /&gt;
jail15_old                                old-col01098-vnfile-cxld-20120827.gz&lt;br /&gt;
jail3_old                                 old-col01278-dump-cxld-20120822&lt;br /&gt;
jail4_old                                 old-col01517-dump-cxld-20120828&lt;br /&gt;
jail5_old                                 old-col01669-dump-cxld-20120203.gz&lt;br /&gt;
old-1009-cxld-20120608.tgz                old-col01687-dump-cxld-20120909&lt;br /&gt;
old-1012-cxld-20120411.tgz                old-col01790-dump-cxld-20120828&lt;br /&gt;
old-1052-cxld-20120721.tgz                old-col01812-dump-cxld-20120820&lt;br /&gt;
old-10631-cxld-20120622.tgz               old-col01938-mdfile-cxld-20120619.gz&lt;br /&gt;
old-10632-cxld-20120622.tgz               old-col02095-mdfile-noarchive-20120523.gz&lt;br /&gt;
old-10633-cxld-20120622.tgz               olddebian-3.0-v15-20110610.tgz&lt;br /&gt;
old-1236-cxld-20120621.tgz                oldmod_frontpage-deb30-v15-20110610.tgz&lt;br /&gt;
old-1381-cxld-20120404.tgz                oldmod_perl-deb30-v15-20110610.tgz&lt;br /&gt;
old-1422-cxld-20120721.tgz                oldmod_ssl-deb30-v15-20110610.tgz&lt;br /&gt;
old-14681-cxld-20120619.tgz               oldmysql-deb30-v15-20110610.tgz&lt;br /&gt;
old-1544-cxld-20120626.tgz                oldproftpd-deb30-v15-20110610.tgz&lt;br /&gt;
old-18351-cxld-20120605.tgz               old_virt14&lt;br /&gt;
old-1853-cxld-20120910.tgz                old_virt18&lt;br /&gt;
old-1963-cxld-20120206.tgz                oldwebmin-deb30-v15-20110610.tgz&lt;br /&gt;
old-1967-cxld-20120605.tgz                suse.virt11.20120421.tgz&lt;br /&gt;
old-1981-noarchive-20120729.tgz           virt11&lt;br /&gt;
old-2030-migrated-noarchive-20120727.tgz  virt12_old&lt;br /&gt;
old-2037-cxld-20120716.tgz                virt13_old&lt;br /&gt;
old-2065-cxld-20120727.tgz                virt16_old&lt;br /&gt;
old-2068-cxld-20120424.tgz                virt4_old&lt;br /&gt;
old-2085-cxld-20120531.tgz                virt5_old&lt;br /&gt;
old-364-cxld-20120904.tgz                 virt6_old&lt;br /&gt;
old-446-cxld-20120512.tgz                 virt7_old&lt;br /&gt;
old-613-cxld-20120601.tgz                 virt8_old&lt;br /&gt;
[root@backup1 /data/deprecated]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
virtX_old and jailX_old are permanently archived, so ignore those, as well as anything else marked not to delete or otherwise suspicious. Likewise, it&#039;s a good idea to hang onto the oldTEMPLATE.gz files as long as we can.&lt;br /&gt;
Most of the stuff we want to delete is dated with when it was deprecated, which makes this easy. So to remove files from six months ago (running this in October):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 /data/deprecated]# ls old*201204*&lt;br /&gt;
old-1012-cxld-20120411.tgz  old-2068-cxld-20120424.tgz&lt;br /&gt;
old-1381-cxld-20120404.tgz  old-col00123-mdfile-noarchive-20120417.gz&lt;br /&gt;
[root@backup1 /data/deprecated]# rm old*201204*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Every few months you will also want to remove some of the snapshot archives for mail. We typically save the 1st, 10th, and 20th of each month. To do this, set aside the dates you want to keep, remove a month at a time, then restore the set-aside dates. Here&#039;s how that works:&lt;br /&gt;
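The set-aside/remove/restore procedure can be generalized into a small helper (hypothetical, not an existing script; it assumes the YY-MM-DD directory layout used for the snapshots and is run inside /data/www/daily):&lt;br /&gt;

```shell
# Hypothetical helper: keep the 1st/10th/20th snapshots of a month,
# remove the rest. Expects directories named YY-MM-DD.
prune_month() {
    ym="$1"                          # e.g. 12-07
    mkdir -p hold
    for d in "$ym"-01 "$ym"-10 "$ym"-20; do
        [ -e "$d" ] && mv "$d" hold/ # set aside the keepers
    done
    rm -rf "$ym"-*                   # drop the rest of the month
    mv hold/* .                      # restore the keepers
}
```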
&amp;lt;pre&amp;gt;[root@backup1 /data/www/daily]# ls&lt;br /&gt;
05                     08-10-11  10-04-10  11-10-10  12-07-29  12-09-21  12-11-14&lt;br /&gt;
06                     08-10-21  10-04-20  11-10-20  12-07-30  12-09-22  12-11-15&lt;br /&gt;
06-06-01-usr-home.tgz  08-11-01  10-05-01  11-11-01  12-07-31  12-09-23  12-11-16&lt;br /&gt;
06-07-01-usr-home.tgz  08-11-10  10-05-11  11-11-10  12-08-01  12-09-24  12-11-17&lt;br /&gt;
06-08-01-usr-home.tgz  08-11-20  10-05-20  11-11-20  12-08-02  12-09-25  12-11-18&lt;br /&gt;
06-09-01-usr-home.tgz  08-12-01  10-06-01  11-12-01  12-08-03  12-09-26  12-11-19&lt;br /&gt;
06-11-10               08-12-10  10-06-10  11-12-10  12-08-04  12-09-27  12-11-20&lt;br /&gt;
06-12-21               08-12-20  10-06-20  11-12-20  12-08-05  12-09-28  12-11-21&lt;br /&gt;
07-01-10               09-01-01  10-07-01  12-01-01  12-08-06  12-09-29  12-11-22&lt;br /&gt;
07-01-20               09-01-10  10-07-10  12-01-10  12-08-07  12-09-30  12-11-23&lt;br /&gt;
07-02-10               09-01-20  10-07-20  12-01-20  12-08-08  12-10-01  12-11-24&lt;br /&gt;
07-02-20               09-02-01  10-08-01  12-02-01  12-08-09  12-10-02  12-11-25&lt;br /&gt;
07-03-01               09-02-10  10-08-10  12-02-10  12-08-10  12-10-03  12-11-26&lt;br /&gt;
07-03-20               09-02-20  10-08-20  12-02-20  12-08-11  12-10-04  12-11-27&lt;br /&gt;
07-04-01               09-03-01  10-09-01  12-03-01  12-08-12  12-10-05  12-11-28&lt;br /&gt;
07-04-10               09-03-10  10-09-10  12-03-10  12-08-13  12-10-06  12-11-29&lt;br /&gt;
07-04-20               09-03-20  10-09-20  12-03-20  12-08-14  12-10-07  12-11-30&lt;br /&gt;
07-05-01               09-04-01  10-10-01  12-04-01  12-08-15  12-10-08  12-12-01&lt;br /&gt;
07-05-10               09-04-10  10-10-10  12-04-10  12-08-16  12-10-09  12-12-02&lt;br /&gt;
07-05-20               09-04-20  10-10-20  12-04-20  12-08-17  12-10-10  12-12-03&lt;br /&gt;
07-06-01               09-05-01  10-11-01  12-05-01  12-08-18  12-10-11  12-12-04&lt;br /&gt;
07-06-10               09-05-10  10-11-10  12-05-10  12-08-19  12-10-12  12-12-05&lt;br /&gt;
07-06-20               09-05-20  10-11-20  12-05-20  12-08-20  12-10-13  12-12-06&lt;br /&gt;
07-07-20               09-06-01  10-12-01  12-06-01  12-08-21  12-10-14  12-12-07&lt;br /&gt;
07-08-10               09-06-10  10-12-10  12-06-10  12-08-22  12-10-15  12-12-08&lt;br /&gt;
07-08-20               09-06-20  10-12-20  12-06-20  12-08-23  12-10-16  12-12-09&lt;br /&gt;
07-09-01               09-07-01  11-01-01  12-07-01  12-08-24  12-10-17  12-12-10&lt;br /&gt;
07-10-01               09-07-10  11-01-10  12-07-02  12-08-25  12-10-18  12-12-11&lt;br /&gt;
07-10-10               09-07-20  11-01-21  12-07-03  12-08-26  12-10-19  12-12-12&lt;br /&gt;
07-10-20               09-08-01  11-02-01  12-07-04  12-08-27  12-10-20  12-12-13&lt;br /&gt;
07-12-01               09-08-10  11-02-10  12-07-05  12-08-28  12-10-21  12-12-14&lt;br /&gt;
07-12-10               09-08-20  11-02-20  12-07-06  12-08-29  12-10-22  12-12-15&lt;br /&gt;
08-01-01               09-09-01  11-03-01  12-07-07  12-08-30  12-10-23  12-12-16&lt;br /&gt;
08-01-20               09-09-10  11-03-10  12-07-08  12-08-31  12-10-24  12-12-17&lt;br /&gt;
08-02-20               09-09-20  11-03-20  12-07-09  12-09-01  12-10-25  12-12-18&lt;br /&gt;
08-03-01               09-10-01  11-04-01  12-07-10  12-09-02  12-10-26  12-12-19&lt;br /&gt;
08-03-10               09-10-10  11-04-10  12-07-11  12-09-03  12-10-27  12-12-20&lt;br /&gt;
08-03-20               09-10-20  11-04-20  12-07-12  12-09-04  12-10-28  12-12-21&lt;br /&gt;
08-04-01               09-11-01  11-05-01  12-07-13  12-09-05  12-10-29  12-12-22&lt;br /&gt;
08-04-20               09-11-10  11-05-10  12-07-14  12-09-06  12-10-30  12-12-23&lt;br /&gt;
08-05-01               09-11-20  11-05-20  12-07-15  12-09-07  12-10-31  12-12-24&lt;br /&gt;
08-05-10               09-12-01  11-06-01  12-07-16  12-09-08  12-11-01  12-12-25&lt;br /&gt;
08-06-10               09-12-10  11-06-10  12-07-17  12-09-09  12-11-02  12-12-26&lt;br /&gt;
08-06-20               09-12-20  11-06-20  12-07-18  12-09-10  12-11-03  12-12-27&lt;br /&gt;
08-07-02               10-01-01  11-07-01  12-07-19  12-09-11  12-11-04  12-12-28&lt;br /&gt;
08-07-10               10-01-10  11-07-10  12-07-20  12-09-12  12-11-05  2008-10-23&lt;br /&gt;
08-07-20               10-01-20  11-07-20  12-07-21  12-09-13  12-11-06  bb.tgz&lt;br /&gt;
08-08-01               10-02-01  11-08-01  12-07-22  12-09-14  12-11-07  boot&lt;br /&gt;
08-08-10               10-02-10  11-08-10  12-07-23  12-09-15  12-11-08  current&lt;br /&gt;
08-08-21               10-02-20  11-08-20  12-07-24  12-09-16  12-11-09  hold&lt;br /&gt;
08-09-01               10-03-01  11-09-01  12-07-25  12-09-17  12-11-10&lt;br /&gt;
08-09-10               10-03-10  11-09-10  12-07-26  12-09-18  12-11-11&lt;br /&gt;
08-09-21               10-03-20  11-09-20  12-07-27  12-09-19  12-11-12&lt;br /&gt;
08-10-01               10-04-01  11-10-01  12-07-28  12-09-20  12-11-13&lt;br /&gt;
[root@backup1 /data/www/daily]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So we see that everything up to July 2012 has been pruned. To prune July 2012 we do the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mv 12-07-01 hold&lt;br /&gt;
mv 12-07-10 hold&lt;br /&gt;
mv 12-07-20 hold&lt;br /&gt;
rm -fr 12-07*&lt;br /&gt;
mv hold/* .&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=Routine_Maintenance&amp;diff=2202</id>
		<title>Routine Maintenance</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=Routine_Maintenance&amp;diff=2202"/>
		<updated>2020-02-25T17:53:27Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* Nagios */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Daily Tasks =&lt;br /&gt;
&lt;br /&gt;
== check load graphs == &lt;br /&gt;
&lt;br /&gt;
Click on the Load link in mgmt &lt;br /&gt;
&lt;br /&gt;
This screen shows you load levels on our servers and network traffic for critical machines (firewalls, backup servers).&lt;br /&gt;
&lt;br /&gt;
If you see load high or increasing &lt;br /&gt;
&lt;br /&gt;
FreeBSD: &lt;br /&gt;
run [[VPS_Management#jtop|jtop]] (or [[VPS_Management#jt|jt]] on FreeBSD &amp;gt; 7.x) and see if there are any runaway processes. Here are some examples of entries in top that are definitely runaway processes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;79481 root      64   0  2256K  1056K CPU1   1  58:16 87.40% 87.40% nano&lt;br /&gt;
50650   1000    64   0  1852K  1112K RUN    0 207.9H 84.08% 84.08% screen&lt;br /&gt;
14829 www        2   0 39100K 31736K accept 0  104:24  46.54%  6.54% httpd&lt;br /&gt;
42065 root      61   0  1300K   844K RUN    1  47.8H 91.36% 91.36% ee&lt;br /&gt;
1328 www       56   0 18440K 10796K CPU1   0  64.4H 97.71% 97.71% httpd&lt;br /&gt;
26251 user      57   0  6124K  1160K CPU1   1  82.9H 98.44% 98.44% screen&lt;br /&gt;
89874 root      60   0  1352K   892K RUN    1  33.8H 65.82% 65.82% dialog&lt;br /&gt;
38656   1000    64   0  3088K  2136K CPU0   0 806:13 97.95% 97.95% StutBot&lt;br /&gt;
27630 root      64   0  1396K   972K RUN    1  76.8H 86.47% 86.47% ee&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Linux:&lt;br /&gt;
run [[VPS_Management#vwe|vwe]] to see which VPSes have high loads. From there run &amp;lt;tt&amp;gt;[[VPS_Management#vp|vp]] &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt; and/or &amp;lt;tt&amp;gt;[[VPS_Management#vt|vt]] &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt; to see what&#039;s going on in that system.&lt;br /&gt;
[[VPS_Management#vzstat|vzstat]] will also give you a nice picture of what&#039;s going on; systems with high numbers in the mlat column are likely culprits.&lt;br /&gt;
&lt;br /&gt;
examples of out of control procs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;12183 nobody    16   0  4916 1348  1340 R    45.5  0.0  4249m httpd&lt;br /&gt;
29266 #502      16   0  1852  796   792 R    22.5  0.0  1104m vim&lt;br /&gt;
23860 #41       16   0  5472 5472  2076 R    98.9  0.2  31:41 python&lt;br /&gt;
19227 bin       19   0  1688  716   652 R    99.9  0.0 321:08 wtrs_ui&lt;br /&gt;
 7762 apache    16   0   268  236   224 R    85.7  0.0  1010m ptrace&lt;br /&gt;
 4624 #501      20   0  4304 2400  2044 R    53.6  0.1 284:32 YoSucker&lt;br /&gt;
20451 #506      20   0  1876  820   816 R    17.2  0.0 169:35 vim&lt;br /&gt;
 8834 #514      20   0   900  724   672 R    77.6  0.0 382:30 neostats&lt;br /&gt;
31815 apache    14   0  3176 3176  1696 R    74.4  0.1   6:15 counter&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Just kill -9 them and be done with it.&lt;br /&gt;
&lt;br /&gt;
Also, anytime you see `kmod` or `ptrace`, kill those immediately no matter how much they are using; they are attempts to exploit the Linux ptrace bug. They won&#039;t work, but they suck a lot of CPU.&lt;br /&gt;
&lt;br /&gt;
Also, any other processes that are at 90-100% CPU usage and have been running for a long period of time should be killed.&lt;br /&gt;
&lt;br /&gt;
However, there is an exception:&lt;br /&gt;
&lt;br /&gt;
if it is a mysqld, we don&#039;t want to kill their database.  What you want to do is &amp;lt;tt&amp;gt;[[VPS_Management#jpid|jpid]] &amp;lt;pid&amp;gt;&amp;lt;/tt&amp;gt; to see who owns it, and then email them the paste containing the instructions for the nanny. Or you can simply do a &amp;lt;tt&amp;gt;kill -1 PID&amp;lt;/tt&amp;gt; on the process to restart it.&lt;br /&gt;
&lt;br /&gt;
=== Load averages jump at night ===&lt;br /&gt;
&lt;br /&gt;
The load averages on the FreeBSD systems may jump up at night between 1 and 4 am because the backups are running. If that is the cause, you will see processes like `rsync` in top eating a lot of CPU time.&lt;br /&gt;
&lt;br /&gt;
== check backups ==&lt;br /&gt;
&lt;br /&gt;
mgmt -&amp;gt; Monitoring -&amp;gt; Backups and make sure every machine was backed up the previous night.&lt;br /&gt;
Also look at df on backup1 and backup2 to make sure no disk is approaching full, though bb should warn us in advance. Please note: errors encountered when a backup script runs on any of the systems will generate an email to support@johncompanies.com, so you will know immediately the day after if a directory to be backed up has been moved or no longer exists. A paste exists to notify the customer of a non-existent file/dir.&lt;br /&gt;
&lt;br /&gt;
== check bb for warnings  ==&lt;br /&gt;
mgmt -&amp;gt; BigBrother&lt;br /&gt;
&lt;br /&gt;
Some events don&#039;t generate pages (on purpose). You will only see them by going to the bb main page.&lt;br /&gt;
&lt;br /&gt;
== check jail5 for crashed VPSs  ==&lt;br /&gt;
&lt;br /&gt;
On jail5&lt;br /&gt;
  &lt;br /&gt;
  notrunning&lt;br /&gt;
&lt;br /&gt;
To restart a VPS &lt;br /&gt;
&lt;br /&gt;
  vm restart col0XXXX&lt;br /&gt;
&lt;br /&gt;
== check NetHere  ==&lt;br /&gt;
&lt;br /&gt;
Check the NetHere servers&lt;br /&gt;
&lt;br /&gt;
=== Mail systems ===&lt;br /&gt;
Check for possible SPAMMERs.&lt;br /&gt;
&lt;br /&gt;
==== Incoming ====&lt;br /&gt;
Check the count of customer logins on mta-1 and mta-2 for possible&lt;br /&gt;
customer SPAM compromises.&lt;br /&gt;
&lt;br /&gt;
  login_count /logs/maillog | tail -30&lt;br /&gt;
  &lt;br /&gt;
==== Outgoing ====&lt;br /&gt;
&lt;br /&gt;
Check outgoing queues on relay-1 and relay-2&lt;br /&gt;
  &lt;br /&gt;
  mail_count | tail -30&lt;br /&gt;
&lt;br /&gt;
=== Nagios ===&lt;br /&gt;
&lt;br /&gt;
Check for other problems on NetHere servers&lt;br /&gt;
&lt;br /&gt;
  https://nagios.nethere.net&lt;br /&gt;
&lt;br /&gt;
=== Cacti ===&lt;br /&gt;
&lt;br /&gt;
Check bandwidth usage on servers&lt;br /&gt;
&lt;br /&gt;
  https://cacti.nethere.net&lt;br /&gt;
&lt;br /&gt;
= Monthly Tasks =&lt;br /&gt;
== rotate pine sent mail (1st of month) ==&lt;br /&gt;
On the 1st of the month, before any emails are sent out, quit out of pine, then log back in. Sent mail from last month will be archived.&lt;br /&gt;
If you mess up and do it on the 3rd (for example), you can go into the previous month&#039;s saved email and save emails from the current month into the &amp;lt;tt&amp;gt;sent-mail&amp;lt;/tt&amp;gt; (current month) mailbox.&lt;br /&gt;
&lt;br /&gt;
== b/w caps ==&lt;br /&gt;
On the 1st: remove any bwcaps put into the firewall (this only really applies if a bwcap was added because someone went over on b/w):&lt;br /&gt;
&lt;br /&gt;
 ipfw list|grep pipe&lt;br /&gt;
 ipfw del [each rule listed]&lt;br /&gt;
&lt;br /&gt;
NOTE: this cronjob on newgateway will do some of that for you, provided you used one of the following pipe #s:&lt;br /&gt;
 0 0 1 * * /sbin/ipfw del 3  4 5 17331&lt;br /&gt;
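The list-then-delete steps can be combined into a one-liner (hypothetical, not an existing script; it assumes `ipfw list` prints the rule number as the first field, as FreeBSD does):&lt;br /&gt;

```shell
# Hypothetical: extract rule numbers of pipe (bandwidth-cap) rules
# from `ipfw list` output, so each can be deleted.
pipe_rules() { grep pipe | awk '{print $1}'; }
# usage: for n in $(ipfw list | pipe_rules); do ipfw del "$n"; done
```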
&lt;br /&gt;
-------------------&lt;br /&gt;
&lt;br /&gt;
We really don’t do this anymore since we have centralized traffic accounting with netflow, but for posterity:&lt;br /&gt;
&lt;br /&gt;
Make sure all machines reset counters to 0 after midnight on the 1st&lt;br /&gt;
Make sure they dumped a counter&lt;br /&gt;
&lt;br /&gt;
On each jail run:&lt;br /&gt;
 trafficgather.pl&lt;br /&gt;
&lt;br /&gt;
And on each virt:&lt;br /&gt;
 linuxtrafficgather.pl&lt;br /&gt;
&lt;br /&gt;
== Monthly RAID checks ==&lt;br /&gt;
&lt;br /&gt;
Every month we check the health of and verify the parity on all our RAID-based systems.&lt;br /&gt;
To facilitate this, we&#039;ve created a simple script to start the process:&lt;br /&gt;
&lt;br /&gt;
 sh /root/verify.sh&lt;br /&gt;
&lt;br /&gt;
=== Adaptec controllers ===&lt;br /&gt;
Here&#039;s some sample output:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /usr/local/www/scripts# sh /root/verify.sh&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; container list /f&lt;br /&gt;
Executing: container list /full=TRUE&lt;br /&gt;
Num          Total  Oth Chunk          Scsi   Partition&lt;br /&gt;
Creation        System&lt;br /&gt;
Label Type   Size   Ctr Size   Usage   B:ID:L Offset:Size   State   RO Lk Task    Done%  Ent&lt;br /&gt;
Date   Time      Files&lt;br /&gt;
----- ------ ------ --- ------ ------- ------ ------------- ------- -- -- ------- ------ ---&lt;br /&gt;
------ -------- ------&lt;br /&gt;
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
 /dev/aacd0           mirror0          0:00:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
&lt;br /&gt;
 1    Mirror 33.9GB            Open    0:02:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
 /dev/aacd1           mirror1          0:03:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; disk list /f&lt;br /&gt;
Executing: disk list /full=TRUE&lt;br /&gt;
&lt;br /&gt;
B:ID:L  Device Type     Removable media  Vendor-ID Product-ID        Rev   Blocks    Bytes/Bl&lt;br /&gt;
ock Usage            Shared Rate&lt;br /&gt;
------  --------------  ---------------  --------- ----------------  ----- --------- --------&lt;br /&gt;
--- ---------------- ------ ----&lt;br /&gt;
0:00:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:01:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:02:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:03:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; disk show smart&lt;br /&gt;
Executing: disk show smart&lt;br /&gt;
&lt;br /&gt;
        Smart    Method of         Enable&lt;br /&gt;
        Capable  Informational     Exception  Performance  Error&lt;br /&gt;
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count&lt;br /&gt;
------  -------  ----------------  ---------  -----------  ------&lt;br /&gt;
0:00:0     Y            6             Y           N             0&lt;br /&gt;
0:01:0     Y            6             Y           N             0&lt;br /&gt;
0:02:0     Y            6             Y           N             0&lt;br /&gt;
0:03:0     Y            6             Y           N             0&lt;br /&gt;
0:06:0     N&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; task list&lt;br /&gt;
Executing: task list&lt;br /&gt;
&lt;br /&gt;
Controller Tasks&lt;br /&gt;
&lt;br /&gt;
TaskId Function  Done%  Container State Specific1 Specific2&lt;br /&gt;
------ -------- ------- --------- ----- --------- ---------&lt;br /&gt;
&lt;br /&gt;
No tasks currently running on controller&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; dia sh hi&lt;br /&gt;
Executing: diagnostic show history&lt;br /&gt;
No switches specified, defaulting to &amp;quot;/current&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 *** HISTORY BUFFER FROM CURRENT CONTROLLER RUN ***&lt;br /&gt;
&lt;br /&gt;
[00]: GetDiskLogEntry: container - 1, entry return 0&lt;br /&gt;
[01]: Container 1 started SCRUB task&lt;br /&gt;
[02]: Starting Mirror:1 scrub&lt;br /&gt;
[03]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[04]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[05]: UpdateDiskLogIndex - Set   - container 0, index 1&lt;br /&gt;
[06]: GetDiskLogEntry: container - 0, entry return 1&lt;br /&gt;
[07]: Container 0 started SCRUB task&lt;br /&gt;
[08]: Starting Mirror:0 scrub&lt;br /&gt;
[09]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[10]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[11]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[12]: Clear disk log: sector - 80, driveno 2&lt;br /&gt;
[13]: Clear disk log: sector - 80, driveno 3&lt;br /&gt;
[14]: Container 1 completed SCRUB task:&lt;br /&gt;
[15]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[16]: Clear disk log: sector - 81, driveno 1&lt;br /&gt;
[17]: Clear disk log: sector - 81, driveno 0&lt;br /&gt;
[18]: Container 0 completed SCRUB task:&lt;br /&gt;
[19]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[20]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[21]: Container 0 started SCRUB task&lt;br /&gt;
[22]: Starting Mirror:0 scrub&lt;br /&gt;
[23]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[24]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[25]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[26]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[27]: Container 1 started SCRUB task&lt;br /&gt;
[28]: Starting Mirror:1 scrub&lt;br /&gt;
[29]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[30]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[31]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[32]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[33]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[34]: Container 1 completed SCRUB task:&lt;br /&gt;
[35]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[36]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[37]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[38]: Container 0 completed SCRUB task:&lt;br /&gt;
[39]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[40]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[41]: Container 0 started SCRUB task&lt;br /&gt;
[42]: Starting Mirror:0 scrub&lt;br /&gt;
[43]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[44]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[45]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[46]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[47]: Container 1 started SCRUB task&lt;br /&gt;
[48]: Starting Mirror:1 scrub&lt;br /&gt;
[49]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[50]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[51]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[52]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[53]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[54]: Container 1 completed SCRUB task:&lt;br /&gt;
[55]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[56]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[57]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[58]: Container 0 completed SCRUB task:&lt;br /&gt;
[59]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[60]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[61]: Container 0 started SCRUB task&lt;br /&gt;
[62]: Starting Mirror:0 scrub&lt;br /&gt;
[63]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[64]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[65]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[66]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[67]: Container 1 started SCRUB task&lt;br /&gt;
[68]: Starting Mirror:1 scrub&lt;br /&gt;
[69]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[70]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[71]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[72]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[73]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[74]: Container 1 completed SCRUB task:&lt;br /&gt;
[75]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[76]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[77]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[78]: Container 0 completed SCRUB task:&lt;br /&gt;
[79]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[80]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[81]: Container 0 started SCRUB task&lt;br /&gt;
[82]: Starting Mirror:0 scrub&lt;br /&gt;
[83]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[84]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[85]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[86]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[87]: Container 1 started SCRUB task&lt;br /&gt;
[88]: Starting Mirror:1 scrub&lt;br /&gt;
[89]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[90]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[91]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[92]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[93]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[94]: Container 1 completed SCRUB task:&lt;br /&gt;
[95]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[96]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[97]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[98]: Container 0 completed SCRUB task:&lt;br /&gt;
[99]:&lt;br /&gt;
&lt;br /&gt;
========================&lt;br /&gt;
History Output Complete.&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt;&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
press enter when ready to run verify                                                 &amp;lt;INS&amp;gt;&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 0&lt;br /&gt;
Executing: container scrub 0&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 1&lt;br /&gt;
Executing: container scrub 1&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
when done run:                                                                       &lt;br /&gt;
&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nov  1 10:32:46 mail /kernel: aac0: **Monitor** Container 0 started SCRUB task&lt;br /&gt;
Nov  1 10:32:47 mail /kernel: aac0: **Monitor** Container 1 started SCRUB task&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here&#039;s an analysis of what we&#039;re seeing and what we&#039;re looking for:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; container list /f&lt;br /&gt;
Executing: container list /full=TRUE&lt;br /&gt;
Num          Total  Oth Chunk          Scsi   Partition&lt;br /&gt;
Creation        System&lt;br /&gt;
Label Type   Size   Ctr Size   Usage   B:ID:L Offset:Size   State   RO Lk Task    Done%  Ent&lt;br /&gt;
Date   Time      Files&lt;br /&gt;
----- ------ ------ --- ------ ------- ------ ------------- ------- -- -- ------- ------ ---&lt;br /&gt;
------ -------- ------&lt;br /&gt;
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
 /dev/aacd0           mirror0          0:00:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
&lt;br /&gt;
 1    Mirror 33.9GB            Open    0:02:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
 /dev/aacd1           mirror1          0:03:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows the health of the arrays. You&#039;re looking for &#039;&#039;Normal&#039;&#039; in the State column and the absence of a ! in the Offset:Size column - sometimes you&#039;ll see this:&lt;br /&gt;
 64.0KB!33.9GB &lt;br /&gt;
&lt;br /&gt;
That indicates a problem.&lt;br /&gt;
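If you capture the container listing to a file, a quick grep flags that condition. This is a hypothetical helper, not part of verify.sh; clist.txt here is sample data mimicking the listing above, with one member showing the bad 64.0KB!33.9GB form:

```shell
# Hypothetical helper: flag the "!" condition in a saved container list.
# clist.txt is sample data; on a real server, save the CLI output instead.
cat > clist.txt <<'EOF'
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal
 /dev/aacd0           mirror0          0:00:0 64.0KB!33.9GB Normal
EOF
grep '!' clist.txt
```

An empty result means no member is flagged.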
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; disk show smart&lt;br /&gt;
Executing: disk show smart&lt;br /&gt;
&lt;br /&gt;
        Smart    Method of         Enable&lt;br /&gt;
        Capable  Informational     Exception  Performance  Error&lt;br /&gt;
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count&lt;br /&gt;
------  -------  ----------------  ---------  -----------  ------&lt;br /&gt;
0:00:0     Y            6             Y           N             0&lt;br /&gt;
0:01:0     Y            6             Y           N             0&lt;br /&gt;
0:02:0     Y            6             Y           N             0&lt;br /&gt;
0:03:0     Y            6             Y           N             0&lt;br /&gt;
0:06:0     N&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the SMART report. Look for nonzero values in the Error Count column; anything above zero means the drive has been logging errors.&lt;br /&gt;
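The Error Count check can be scripted if the report is saved to a file. This is a hypothetical helper (not part of verify.sh); smart.txt is sample data in the same shape as the &quot;disk show smart&quot; output above:

```shell
# Hypothetical helper: flag any SMART-capable drive (Device column "Y")
# whose Error Count column is nonzero. smart.txt is sample data; on a
# real server, save the CLI output to a file instead.
cat > smart.txt <<'EOF'
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count
------  -------  ----------------  ---------  -----------  ------
0:00:0     Y            6             Y           N             0
0:02:0     Y            6             Y           N             5
0:06:0     N
EOF
awk '$1 ~ /^[0-9]+:[0-9]+:[0-9]+$/ && $2 == "Y" && $NF+0 > 0 {
    print "errors on " $1 ": " $NF
}' smart.txt
```

With the sample data this prints only the 0:02:0 row; silence means every drive is clean.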
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; task list&lt;br /&gt;
Executing: task list&lt;br /&gt;
&lt;br /&gt;
Controller Tasks&lt;br /&gt;
&lt;br /&gt;
TaskId Function  Done%  Container State Specific1 Specific2&lt;br /&gt;
------ -------- ------- --------- ----- --------- ---------&lt;br /&gt;
&lt;br /&gt;
No tasks currently running on controller&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for the absence of running tasks; seeing a rebuild or verify running that you didn&#039;t initiate would be a bad sign.&lt;br /&gt;
&lt;br /&gt;
With the history output, you&#039;re looking for any anomalies or events since the last time a verify was run. If you see a drive with lots of problems, you may want to take backups before allowing the verify to run since it could replicate errors onto the good drive.&lt;br /&gt;
&lt;br /&gt;
After displaying the history, the script prompts you to press enter to run the verify. If you&#039;re happy with all the output you&#039;re seeing (the mirror is healthy and the history looks good), it&#039;s safe to proceed; otherwise ^C to exit. After you hit enter, it starts the verify and begins tailing the messages log file (so you can easily see when the verify is complete). Here&#039;s what that&#039;ll look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Nov  1 14:38:08 mail /kernel: aac0: **Monitor** Container 1 completed SCRUB task:&lt;br /&gt;
Nov  1 14:46:45 mail /kernel: aac0: **Monitor** Container 0 completed SCRUB task:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, putting it all together, after hitting enter to start the verify, you&#039;ll see:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 0&lt;br /&gt;
Executing: container scrub 0&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 1&lt;br /&gt;
Executing: container scrub 1&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
when done run:                                                                       &lt;br /&gt;
&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nov  1 10:32:46 mail /kernel: aac0: **Monitor** Container 0 started SCRUB task&lt;br /&gt;
Nov  1 10:32:47 mail /kernel: aac0: **Monitor** Container 1 started SCRUB task&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the scrub(s) (verify) are complete - if the server has multiple logical drives, the scrubs run in parallel - exit the tail of the log file (^C) and run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows the diagnostic history; you&#039;re looking for the results of the most recent scrub:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[100]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[101]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[102]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[103]: Container 1 completed SCRUB task:&lt;br /&gt;
[104]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[105]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[106]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[107]: Container 0 completed SCRUB task:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
^C to exit the RAID CLI.&lt;br /&gt;
&lt;br /&gt;
If you see:&lt;br /&gt;
&lt;br /&gt;
 [104]: Mirror Scrub Container:0   ErrorsFound:5&lt;br /&gt;
&lt;br /&gt;
You&#039;ll want to rerun the verify on that container until it reports 0 errors, or perhaps replace the drive; you should be able to see from the output which drive had the problem.&lt;br /&gt;
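If the diagnostic history is captured to a file, scanning it for dirty scrubs is a one-liner. This is a hypothetical helper; history.txt is sample data shaped like the entries above:

```shell
# Hypothetical one-liner: scan a saved "dia sh hi" capture for scrubs
# that found errors. history.txt is sample data; on a real server,
# save the CLI output to a file first.
cat > history.txt <<'EOF'
[103]: Container 1 completed SCRUB task:
[104]: Mirror Scrub Container:0   ErrorsFound:5
[105]: Mirror Scrub Container:1   ErrorsFound:0
EOF
grep 'ErrorsFound' history.txt | grep -v 'ErrorsFound:0'
```

An empty result means every recorded scrub was clean.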
&lt;br /&gt;
Depending on the size and how busy the drive is, the verify can take anywhere from an hour to the better part of a day.&lt;br /&gt;
&lt;br /&gt;
You will notice that the diagnostic history is not shown on our modern Adaptec cards (i.e. any Adaptec card not in a Dell 2450). The history on those cards is never cleared, so there&#039;s simply too much data to show and requesting it crashes the CLI, so don&#039;t bother trying. That does make it harder to see whether there are problems, so instead watch the scrub until it reaches 100%. You will also notice that on some servers there&#039;s no tail of messages; again, that&#039;s because nothing is logged there about the completion of the scrub. In that case, go into the CLI and run &amp;lt;tt&amp;gt;show tasks&amp;lt;/tt&amp;gt; periodically to monitor scrub progress.&lt;br /&gt;
&lt;br /&gt;
See [[RAIC_CLI#Adaptec|Adaptec RAID CLI Reference]] for more details on how to use the CLI.&lt;br /&gt;
&lt;br /&gt;
=== DELL (LSI-based) SAS controllers ===&lt;br /&gt;
&lt;br /&gt;
Here&#039;s what the output looks like when running verify.sh on an LSI-based card:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jail2 /mnt/data2# sh /root/verify.sh&lt;br /&gt;
&lt;br /&gt;
Adapter #0&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 0&lt;br /&gt;
Device Id: 0&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018396142&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 0(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DESN&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 1&lt;br /&gt;
Device Id: 1&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018395db2&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 1(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DERV&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 2&lt;br /&gt;
Device Id: 2&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50006eece89&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 2(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300555SS     T2113LM4BFBZ&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 3&lt;br /&gt;
Device Id: 3&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50006eee035&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 3(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300555SS     T2113LM4BGF7&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 4&lt;br /&gt;
Device Id: 4&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50004bd7ea5&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 4(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300656SS     HS093QP0G8SW&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 5&lt;br /&gt;
Device Id: 5&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e01f1c4112&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 5(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MBA3300RC       D306BJ15P9201W06&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 0 (Target Id: 0)&lt;br /&gt;
Name:&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:139392MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 1 (Target Id: 1)&lt;br /&gt;
Name:MIRROR1&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 2 (Target Id: 2)&lt;br /&gt;
Name:MIRROR2&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
Battery FRU     : N/A&lt;br /&gt;
Battery Warning                  : Enabled&lt;br /&gt;
Memory Correctable Errors   : 0&lt;br /&gt;
Memory Uncorrectable Errors : 0&lt;br /&gt;
BBU             : Present&lt;br /&gt;
BBU                             : Yes&lt;br /&gt;
Cache When BBU Bad               : Disabled&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before pressing enter, here&#039;s what we&#039;re looking for:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 0&lt;br /&gt;
Device Id: 0&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018396142&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 0(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DESN&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the output shown for each physical drive in the system. We&#039;re looking to confirm that its Firmware state is &#039;&#039;Online&#039;&#039; and that Media Error Count, Other Error Count, and Predictive Failure Count are all zero (or near zero).&lt;br /&gt;
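Those per-drive checks can be automated against a saved copy of the listing. This is a hypothetical helper, not part of verify.sh; pdlist.txt is trimmed sample data in the shape of the output above:

```shell
# Hypothetical sanity check over a saved physical-drive listing:
# print any nonzero error counter and any firmware state other than
# Online, tagged with the slot it belongs to. pdlist.txt is sample data.
cat > pdlist.txt <<'EOF'
Slot Number: 0
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Firmware state: Online
Slot Number: 1
Media Error Count: 3
Other Error Count: 0
Predictive Failure Count: 0
Firmware state: Failed
EOF
awk -F': ' '
    /^Slot Number/                      { slot = $2 }
    /Count/ && $2+0 > 0                 { print "slot " slot ": " $0 }
    /^Firmware state/ && $2 != "Online" { print "slot " slot ": " $0 }
' pdlist.txt
```

With the sample data this reports only slot 1; no output means every drive passes.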
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 1 (Target Id: 1)&lt;br /&gt;
Name:MIRROR1&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the output for each logical drive. We&#039;re looking for a State of &#039;&#039;Optimal&#039;&#039;. Also confirm &amp;lt;tt&amp;gt;Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&amp;lt;/tt&amp;gt;.&lt;br /&gt;
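The State check can likewise be scripted against a saved listing. This is a hypothetical helper; ldinfo.txt is sample data shaped like the virtual-drive output above:

```shell
# Hypothetical check of a saved logical-drive listing: report any
# virtual disk whose State is not Optimal. ldinfo.txt is sample data.
cat > ldinfo.txt <<'EOF'
Virtual Disk: 0 (Target Id: 0)
State: Optimal
Virtual Disk: 1 (Target Id: 1)
State: Degraded
EOF
awk -F': ' '
    /^Virtual Disk/             { vd = $2; sub(/ \(.*/, "", vd) }
    /^State/ && $2 != "Optimal" { print "Virtual Disk " vd ": " $2 }
' ldinfo.txt
```

With the sample data this reports only Virtual Disk 1; silence means all arrays are Optimal.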
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
Battery FRU     : N/A&lt;br /&gt;
Battery Warning                  : Enabled&lt;br /&gt;
Memory Correctable Errors   : 0&lt;br /&gt;
Memory Uncorrectable Errors : 0&lt;br /&gt;
BBU             : Present&lt;br /&gt;
BBU                             : Yes&lt;br /&gt;
Cache When BBU Bad               : Disabled&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the battery is present and error-free.&lt;br /&gt;
&lt;br /&gt;
If all that checks out, you&#039;re ready to proceed with the verify. After pressing enter, the verify is started and here&#039;s what you see:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Start Check Consistency on Virtual Drive 0 (target id: 0) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
Start Check Consistency on Virtual Drive 1 (target id: 1) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
Start Check Consistency on Virtual Drive 2 (target id: 2) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
  Check Consistency&lt;br /&gt;
&lt;br /&gt;
 Progress of Virtual Drives...&lt;br /&gt;
&lt;br /&gt;
  Virtual Drive #              Percent Complete                       Time Elps&lt;br /&gt;
          0         °°°°°°°°°°°°°°°°°°°°°°°00 %°°°°°°°°°°°°°°°°°°°°°°° 00:00:03&lt;br /&gt;
          1         °°°°°°°°°°°°°°°°°°°°°°°00 %°°°°°°°°°°°°°°°°°°°°°°° 00:00:02&lt;br /&gt;
          2         °°°°°°°°°°°°°°°°°°°°°°°00 %°°°°°°°°°°°°°°°°°°°°°°° 00:00:01&lt;br /&gt;
&lt;br /&gt;
    Press &amp;lt;ESC&amp;gt; key to quit...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Progress for each drive is displayed until all drives have completed the verify. We just want to make sure each drive runs to completion. No followup is needed, though there is probably a log or history where more detail could be found.&lt;br /&gt;
&lt;br /&gt;
You will notice that jail7 does not run a verify; that&#039;s on purpose, since the last time we tried it the verify crashed the system. On jail7 the verify must be run from the BIOS instead (which means taking the system offline for a couple of hours).&lt;br /&gt;
&lt;br /&gt;
See [[RAIC_CLI#LSI|LSI RAID CLI Reference]] for more details on how to use the CLI.&lt;br /&gt;
&lt;br /&gt;
=== LSI-based controllers (megaraid) ===&lt;br /&gt;
&lt;br /&gt;
There is a CLI for these cards, but it&#039;s easier to use the curses GUI app, megamgr.&lt;br /&gt;
&lt;br /&gt;
Currently only on these servers: virt15, virt16, and firewall2&lt;br /&gt;
&lt;br /&gt;
To run:&lt;br /&gt;
&lt;br /&gt;
 # cd /usr/local/sbin/; megamgr&lt;br /&gt;
&lt;br /&gt;
Main menu:&lt;br /&gt;
&amp;lt;pre&amp;gt;▓┌──Management Menu──┐▓&lt;br /&gt;
▓│ Configure         │▓&lt;br /&gt;
▓│ Initialize        │▓&lt;br /&gt;
▓│ Objects           │▓&lt;br /&gt;
▓│ Rebuild           │▓&lt;br /&gt;
▓│ Check Consistency │▓&lt;br /&gt;
▓│ Advanced Menu     │▓&lt;br /&gt;
▓└───────────────────┘▓&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before you check consistency, make sure the arrays are healthy.&lt;br /&gt;
&lt;br /&gt;
Objects -&amp;gt; Physical Drive&lt;br /&gt;
&lt;br /&gt;
Then look to make sure they&#039;re all &amp;lt;tt&amp;gt;ONLIN&amp;lt;/tt&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓┌────────────Objects - PHYSICAL DRIVE SELECTION MENU─────────────┐▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                                                                │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                         Channel-1                              │▓▓▓▓▓▓&lt;br /&gt;
▓┌──Mana│                     ID ╔══════════════╗x                       │▓▓▓▓▓▓&lt;br /&gt;
▓│ Confi│                       0║* ONLIN A01-01║                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Initi│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Objec│                       1║* ONLIN A01-02║                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Rebui│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Check│                       2║* ONLIN A02-01║                        │▓▓▓▓▓▓&lt;br /&gt;
▓│ Advan│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓└──────│                       3║* ONLIN A02-02║                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                       4║* ONLIN A03-01║                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                       5║* ONLIN A03-02║■                       │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╠══════════════╣                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                       6║*             ║                        │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│                        ╚══════════════╝x                       │▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓└────────────────────────────────────────────────────────────────┘▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓┌──────────────────────────────────────────────────────────────┐▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓│Ch-1 ID-5  DISK      140013MB  SEAGATE  ST3146707LC      0003 │▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓└──────────────────────────────────────────────────────────────┘▓▓▓▓▓▓▓▓&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once that&#039;s done, hit escape once, then the back arrow to move back to the Objects menu.&lt;br /&gt;
Then select Objects -&amp;gt; Logical Drive -&amp;gt; Logical Drive 1 -&amp;gt; Check Consistency -&amp;gt; YES&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┌─Logical Drives(02)─┐▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│ Logical Drive 1    │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓┌────Objects───│ Logical Drive 2    │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓┌──Managemen│ Adapter      └────────────────────┘▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Configure │ Logical Drive  │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Initialize│ Physical Drive │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Objects   │ Channel        │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Rebuild   └────────────────┘▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Check Consistency │▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓│ Advanced Menu     │▓▓▓▓▓▓▓┌────Logical Drive 1─────┐▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓└───────────────────┘▓▓▓▓▓▓▓│ Initialize    ┌─Check Consistency-1  ?─┐▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│ Check Consiste│   YES                  │▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│ View/Update Pa│   NO                   │▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓└───────────────└────────────────────────┘▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┌────────────────┐▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│Select YES Or NO│▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓└────────────────┘▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then watch the progress. When done, escape back to Logical Drive and repeat for Logical Drive 2. If you ^C or accidentally escape out, you can get back in by running the same commands and watch the progress again (the check won&#039;t restart).&lt;br /&gt;
&lt;br /&gt;
You can exit megamgr by escaping out or with ^C.&lt;br /&gt;
&lt;br /&gt;
=== 3ware ===&lt;br /&gt;
&lt;br /&gt;
We are using 3ware controllers on backup1 &amp;amp; backup2. Running the verify script will give you different output based on the type of controller:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
backup2 /d2# sh /root/verify.sh&lt;br /&gt;
Controller: c0&lt;br /&gt;
-------------&lt;br /&gt;
Driver:   1.50.01.002&lt;br /&gt;
Model:    7500-8&lt;br /&gt;
FW:       FE7X 1.05.00.068&lt;br /&gt;
BIOS:     BE7X 1.08.00.048&lt;br /&gt;
Monitor:  ME7X 1.01.00.040&lt;br /&gt;
Serial #: F11605A3180172&lt;br /&gt;
PCB:      Rev3&lt;br /&gt;
PCHIP:    1.30-33&lt;br /&gt;
ACHIP:    3.20&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# of units: 3&lt;br /&gt;
        Unit 0: JBOD 186.31 GB ( 390721968 blocks): OK&lt;br /&gt;
        Unit 1: RAID 5 465.77 GB ( 976790016 blocks): DEGRADED&lt;br /&gt;
        Unit 5: RAID 5 698.65 GB ( 1465185024 blocks): DEGRADED&lt;br /&gt;
&lt;br /&gt;
# of ports: 8&lt;br /&gt;
        Port 0: WDC WD2000JB-00KFA0 WD-WCAMT1451690 186.31 GB (390721968 blocks): OK(unit 0)&lt;br /&gt;
        Port 1: WDC WD2500JB-00GVC0 WD-WCAL78219488 232.88 GB (488397168 blocks): OK(unit 1)&lt;br /&gt;
        Port 2: WDC WD2000  0.00 MB (0 blocks): OK(NO UNIT)&lt;br /&gt;
        Port 3: WDC WD2500JB-00GVC0 WD-WMAL73882417 232.88 GB (488397168 blocks): OK(unit 1)&lt;br /&gt;
        Port 4: WDC WD2000  0.00 MB (0 blocks): OK(NO UNIT)&lt;br /&gt;
        Port 5: WDC WD2500JB-00GVA0 WD-WMAL71338097 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
        Port 6: WDC WD2500JB-32EVA0 WD-WMAEH1301595 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
        Port 7: WDC WD2500JB-00GVC0 WD-WCAL78165566 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
Controller: c1&lt;br /&gt;
-------------&lt;br /&gt;
Driver:   1.50.01.002&lt;br /&gt;
Model:    7500-8&lt;br /&gt;
FW:       FE7X 1.05.00.068&lt;br /&gt;
BIOS:     BE7X 1.08.00.048&lt;br /&gt;
Monitor:  ME7X 1.01.00.040&lt;br /&gt;
Serial #: F11605A3180167&lt;br /&gt;
PCB:      Rev3&lt;br /&gt;
PCHIP:    1.30-33&lt;br /&gt;
ACHIP:    3.20&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# of units: 2&lt;br /&gt;
        Unit 0: RAID 5 698.65 GB ( 1465185024 blocks): OK&lt;br /&gt;
        Unit 4: RAID 5 698.65 GB ( 1465185024 blocks): OK&lt;br /&gt;
&lt;br /&gt;
# of ports: 8&lt;br /&gt;
        Port 0: WDC WD2500JB-00GVA0 WD-WMAL71301258 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 1: WDC WD2500JB-00GVA0 WD-WMAL71322705 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 2: WDC WD2500JB-00GVA0 WD-WMAL71945050 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 3: WDC WD2500JB-00GVA0 WD-WMAL71316201 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 4: WDC WD2500JB-00GVC0 WD-WCAL78323749 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
        Port 5: WDC WD3200AAJB-00J3A0 WD-WCAV2V689068 298.09 GB (625142448 blocks): OK(unit 4)&lt;br /&gt;
        Port 6: WDC WD2500JB-00GVC0 WD-WCAL78234420 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
        Port 7: WDC WD2500JB-00GVC0 WD-WCAL78592213 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
backup2 /d2#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On backup2, look for all units and drives reporting OK; the script here only reports status and does not start a verify.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@backup3 ~]# sh /root/verify.sh&lt;br /&gt;
/c2 Driver Version = 1.26.02.002&lt;br /&gt;
/c2 Model = 8006-2LP&lt;br /&gt;
/c2 Available Memory = 512KB&lt;br /&gt;
/c2 Firmware Version = FE8S 1.05.00.068&lt;br /&gt;
/c2 Bios Version = BE7X 1.08.00.048&lt;br /&gt;
/c2 Boot Loader Version = ME7X 1.01.00.040&lt;br /&gt;
/c2 Serial Number = L018501C6481395&lt;br /&gt;
/c2 PCB Version = Rev5&lt;br /&gt;
/c2 PCHIP Version = 1.30-66&lt;br /&gt;
/c2 ACHIP Version = 3.20&lt;br /&gt;
/c2 Total Optimal Units = 1&lt;br /&gt;
/c2 Not Optimal Units = 0&lt;br /&gt;
&lt;br /&gt;
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u1    RAID-1    OK             -       -       -       931.512   ON     -&lt;br /&gt;
&lt;br /&gt;
Port   Status           Unit   Size        Blocks        Serial&lt;br /&gt;
---------------------------------------------------------------&lt;br /&gt;
p0     OK               u1     931.51 GB   1953525168    WD-WMAW31148820&lt;br /&gt;
p1     OK               u1     931.51 GB   1953525168    WD-WCATR0277515&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ctl  Date                        Severity  Alarm Message&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Sending start verify message to /c2/u1 ... Done.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when done run:&lt;br /&gt;
tw_cli /c2 show alarms&lt;br /&gt;
&lt;br /&gt;
[root@backup3 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On backup3 the script automatically starts the verify; when it completes, run &amp;lt;tt&amp;gt;tw_cli /c2 show alarms&amp;lt;/tt&amp;gt; as instructed to see the results of the verify.&lt;br /&gt;
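To spot a failing drive in that alarm output, one rough approach is to tally the sector-repair warnings per port: a port that accumulates repairs much faster than its peers is a replacement candidate. A minimal sketch, assuming the alarm output has been saved to a file (the log path and helper name are illustrative, not part of verify.sh):&lt;br /&gt;

```shell
# repair_counts: tally "port=N" occurrences in a saved tw_cli alarm log,
# busiest port first. The function name and log path are illustrative only.
repair_counts() {
    grep -o 'port=[0-9]*' "$1" | sort | uniq -c | sort -rn
}

# Example: repair_counts /root/alarms.txt
```

This also counts drive-timeout errors, since those alarm lines carry a port= field too.&lt;br /&gt;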
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@backup1 /data/deprecated]# sh /root/verify.sh&lt;br /&gt;
/c0 Driver Version = 2.26.02.010&lt;br /&gt;
/c0 Model = 9650SE-8LPML&lt;br /&gt;
/c0 Available Memory = 224MB&lt;br /&gt;
/c0 Firmware Version = FE9X 4.06.00.004&lt;br /&gt;
/c0 Bios Version = BE9X 4.05.00.015&lt;br /&gt;
/c0 Boot Loader Version = BL9X 3.08.00.001&lt;br /&gt;
/c0 Serial Number = L326025A8270177&lt;br /&gt;
/c0 PCB Version = Rev 032&lt;br /&gt;
/c0 PCHIP Version = 2.00&lt;br /&gt;
/c0 ACHIP Version = 1.90&lt;br /&gt;
/c0 Number of Ports = 8&lt;br /&gt;
/c0 Number of Drives = 6&lt;br /&gt;
/c0 Number of Units = 1&lt;br /&gt;
/c0 Total Optimal Units = 1&lt;br /&gt;
/c0 Not Optimal Units = 0&lt;br /&gt;
/c0 JBOD Export Policy = off&lt;br /&gt;
/c0 Disk Spinup Policy = 1&lt;br /&gt;
/c0 Spinup Stagger Time Policy (sec) = 1&lt;br /&gt;
/c0 Auto-Carving Policy = off&lt;br /&gt;
/c0 Auto-Carving Size = 2048 GB&lt;br /&gt;
/c0 Auto-Rebuild Policy = on&lt;br /&gt;
/c0 Controller Bus Type = PCIe&lt;br /&gt;
/c0 Controller Bus Width = 1 lane&lt;br /&gt;
/c0 Controller Bus Speed = 2.5 Gbps/lane&lt;br /&gt;
&lt;br /&gt;
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u0    RAID-5    OK             -       -       64K     4656.56   ON     ON&lt;br /&gt;
&lt;br /&gt;
Port   Status           Unit   Size        Blocks        Serial&lt;br /&gt;
---------------------------------------------------------------&lt;br /&gt;
p0     OK               u0     931.51 GB   1953525168    9QJ1Y017&lt;br /&gt;
p1     DEVICE-ERROR     u0     931.51 GB   1953525168    9QJ1ZN07&lt;br /&gt;
p2     OK               u0     931.51 GB   1953525168    9QJ2XK1R&lt;br /&gt;
p3     OK               u0     931.51 GB   1953525168    9QJ2010B&lt;br /&gt;
p4     OK               u0     1.36 TB     2930277168    6XW0L36T&lt;br /&gt;
p5     OK               u0     931.51 GB   1953525168    WD-WMATV2444836&lt;br /&gt;
p6     NOT-PRESENT      -      -           -             -&lt;br /&gt;
p7     NOT-PRESENT      -      -           -             -&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ctl  Date                        Severity  Alarm Message&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
c0   [Sat May 12 11:27:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x6AE571C&lt;br /&gt;
c0   [Sat May 12 19:16:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x40E62A23&lt;br /&gt;
c0   [Sat May 12 21:40:56 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon May 14 00:53:53 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFA7&lt;br /&gt;
c0   [Mon May 14 00:58:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAA&lt;br /&gt;
c0   [Mon May 14 04:35:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x8FEF2CF&lt;br /&gt;
c0   [Mon May 14 04:38:22 2012]  WARNING   Sector repair completed: port=0, LBA=0x8FEF2D1&lt;br /&gt;
c0   [Tue May 15 22:53:46 2012]  WARNING   Sector repair completed: port=0, LBA=0x13C2622&lt;br /&gt;
c0   [Wed May 16 00:39:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A67F&lt;br /&gt;
c0   [Wed May 16 00:39:37 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A685&lt;br /&gt;
c0   [Wed May 16 00:47:18 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A687&lt;br /&gt;
c0   [Sat May 19 00:01:44 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat May 19 04:46:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A68E&lt;br /&gt;
c0   [Sat May 19 13:37:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAC&lt;br /&gt;
c0   [Sat May 19 13:37:28 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAE&lt;br /&gt;
c0   [Sat May 19 13:37:47 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFB1&lt;br /&gt;
c0   [Sat May 19 13:38:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFB3&lt;br /&gt;
c0   [Sat May 19 21:47:45 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed May 23 12:21:41 2012]  INFO      Cache synchronization completed: unit=0&lt;br /&gt;
c0   [Fri May 25 00:08:19 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA76C&lt;br /&gt;
c0   [Fri May 25 00:08:34 2012]  WARNING   Sector repair completed: port=0, LBA=0x12E4901&lt;br /&gt;
c0   [Fri May 25 00:09:33 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA773&lt;br /&gt;
c0   [Fri May 25 00:39:12 2012]  WARNING   Sector repair completed: port=0, LBA=0x42C597B&lt;br /&gt;
c0   [Sat May 26 00:01:45 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat May 26 00:42:05 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1AC&lt;br /&gt;
c0   [Sat May 26 00:51:43 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1AE&lt;br /&gt;
c0   [Sat May 26 01:54:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x2F0D302&lt;br /&gt;
c0   [Sat May 26 02:06:38 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA777&lt;br /&gt;
c0   [Sat May 26 02:07:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x12E48FE&lt;br /&gt;
c0   [Sat May 26 04:20:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x2F0D306&lt;br /&gt;
c0   [Sat May 26 04:32:58 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1B1&lt;br /&gt;
c0   [Sat May 26 04:33:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1B3&lt;br /&gt;
c0   [Sat May 26 04:33:44 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1BA&lt;br /&gt;
c0   [Sat May 26 05:24:07 2012]  WARNING   Sector repair completed: port=1, LBA=0x3F83862&lt;br /&gt;
c0   [Sat May 26 05:25:09 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 06:08:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x4CDC6A2&lt;br /&gt;
c0   [Sat May 26 09:49:35 2012]  WARNING   Sector repair completed: port=1, LBA=0x6CACD4A&lt;br /&gt;
c0   [Sat May 26 18:10:44 2012]  WARNING   Sector repair completed: port=1, LBA=0x18F425EA&lt;br /&gt;
c0   [Sat May 26 19:45:40 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:22:52 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:15 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:22 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:35 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:41 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:49 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:57 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:24:02 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:54:41 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 22:00:30 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Jun  2 00:01:43 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun  2 00:30:17 2012]  WARNING   Sector repair completed: port=0, LBA=0x2B911E4&lt;br /&gt;
c0   [Sat Jun  2 00:50:57 2012]  WARNING   Sector repair completed: port=0, LBA=0x5A807CA6&lt;br /&gt;
c0   [Sat Jun  2 04:13:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x2D18291&lt;br /&gt;
c0   [Sat Jun  2 04:13:35 2012]  WARNING   Sector repair completed: port=0, LBA=0x2D1829F&lt;br /&gt;
c0   [Sat Jun  2 21:48:02 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Jun  4 04:40:34 2012]  WARNING   Sector repair completed: port=1, LBA=0x4AF8098F&lt;br /&gt;
c0   [Tue Jun  5 00:28:19 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5CD&lt;br /&gt;
c0   [Tue Jun  5 00:33:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5CF&lt;br /&gt;
c0   [Thu Jun  7 00:34:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B5F&lt;br /&gt;
c0   [Thu Jun  7 00:38:50 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B61&lt;br /&gt;
c0   [Fri Jun  8 00:07:13 2012]  WARNING   Sector repair completed: port=0, LBA=0xC131F6B&lt;br /&gt;
c0   [Sat Jun  9 00:01:41 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun  9 00:29:11 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C7614D&lt;br /&gt;
c0   [Sat Jun  9 00:38:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C76152&lt;br /&gt;
c0   [Sat Jun  9 04:02:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5D1&lt;br /&gt;
c0   [Sat Jun  9 04:02:52 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5D3&lt;br /&gt;
c0   [Sat Jun  9 04:07:32 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E12&lt;br /&gt;
c0   [Sat Jun  9 04:07:57 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E15&lt;br /&gt;
c0   [Sat Jun  9 04:08:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E17&lt;br /&gt;
c0   [Sat Jun  9 04:08:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E19&lt;br /&gt;
c0   [Sat Jun  9 04:15:04 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B64&lt;br /&gt;
c0   [Sat Jun  9 04:15:26 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B66&lt;br /&gt;
c0   [Sat Jun  9 04:15:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B68&lt;br /&gt;
c0   [Sat Jun  9 04:15:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B6C&lt;br /&gt;
c0   [Sat Jun  9 04:16:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B6E&lt;br /&gt;
c0   [Sat Jun  9 21:48:52 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Jun 14 00:40:10 2012]  WARNING   Sector repair completed: port=0, LBA=0x334F14B&lt;br /&gt;
c0   [Sat Jun 16 00:01:38 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 16 21:16:19 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Jun 19 02:03:43 2012]  WARNING   Sector repair completed: port=1, LBA=0xFE41EAD&lt;br /&gt;
c0   [Wed Jun 20 02:30:02 2012]  WARNING   Sector repair completed: port=1, LBA=0xD99145C&lt;br /&gt;
c0   [Sat Jun 23 00:01:36 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 23 04:27:04 2012]  WARNING   Sector repair completed: port=1, LBA=0x2FAD311&lt;br /&gt;
c0   [Sat Jun 23 06:52:38 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC8D&lt;br /&gt;
c0   [Sat Jun 23 06:53:03 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC91&lt;br /&gt;
c0   [Sat Jun 23 06:53:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC94&lt;br /&gt;
c0   [Sat Jun 23 17:00:22 2012]  WARNING   Sector repair completed: port=1, LBA=0xF9AC7C9&lt;br /&gt;
c0   [Sat Jun 23 21:15:19 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Jun 30 00:01:34 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 30 05:24:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FAA9E7&lt;br /&gt;
c0   [Sat Jun 30 14:49:39 2012]  WARNING   Sector repair completed: port=1, LBA=0x869931C&lt;br /&gt;
c0   [Sat Jun 30 21:31:05 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Jul  3 03:40:25 2012]  WARNING   Sector repair completed: port=1, LBA=0xD36C7F7&lt;br /&gt;
c0   [Fri Jul  6 02:50:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x3562470&lt;br /&gt;
c0   [Fri Jul  6 22:18:26 2012]  WARNING   Sector repair completed: port=1, LBA=0x3563173&lt;br /&gt;
c0   [Sat Jul  7 00:01:31 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul  7 00:50:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x76EE88&lt;br /&gt;
c0   [Sat Jul  7 00:50:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x76EE8F&lt;br /&gt;
c0   [Sat Jul  7 21:39:36 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sun Jul  8 02:51:05 2012]  WARNING   Sector repair completed: port=0, LBA=0x67759D&lt;br /&gt;
c0   [Sun Jul  8 02:53:55 2012]  WARNING   Sector repair completed: port=0, LBA=0x67759B&lt;br /&gt;
c0   [Tue Jul 10 16:17:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x15C8C695&lt;br /&gt;
c0   [Wed Jul 11 22:51:22 2012]  WARNING   Sector repair completed: port=1, LBA=0x355BBD0&lt;br /&gt;
c0   [Sat Jul 14 00:01:28 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 14 01:33:40 2012]  WARNING   Sector repair completed: port=1, LBA=0x1333BCF4&lt;br /&gt;
c0   [Sat Jul 14 03:36:23 2012]  WARNING   Sector repair completed: port=1, LBA=0x2174773&lt;br /&gt;
c0   [Sat Jul 14 11:26:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x7429AB7&lt;br /&gt;
c0   [Sat Jul 14 16:53:50 2012]  WARNING   Sector repair completed: port=1, LBA=0xA17EB3F&lt;br /&gt;
c0   [Sat Jul 14 21:19:25 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Jul 18 05:08:47 2012]  WARNING   Sector repair completed: port=1, LBA=0x17D62EDC&lt;br /&gt;
c0   [Wed Jul 18 05:14:15 2012]  WARNING   Sector repair completed: port=1, LBA=0x17D62EE1&lt;br /&gt;
c0   [Thu Jul 19 03:24:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x7733C3D&lt;br /&gt;
c0   [Thu Jul 19 03:25:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x773CEA5&lt;br /&gt;
c0   [Thu Jul 19 03:28:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x7733C42&lt;br /&gt;
c0   [Thu Jul 19 03:28:41 2012]  WARNING   Sector repair completed: port=0, LBA=0x773CEAF&lt;br /&gt;
c0   [Sat Jul 21 00:01:26 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 21 03:07:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC6936&lt;br /&gt;
c0   [Sat Jul 21 03:07:52 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC6938&lt;br /&gt;
c0   [Sat Jul 21 03:08:11 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC693A&lt;br /&gt;
c0   [Sat Jul 21 16:43:56 2012]  WARNING   Sector repair completed: port=0, LBA=0xD04C914&lt;br /&gt;
c0   [Sat Jul 21 16:45:31 2012]  WARNING   Sector repair completed: port=1, LBA=0xD456973&lt;br /&gt;
c0   [Sat Jul 21 21:14:29 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Jul 25 03:37:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x1F8E6C43&lt;br /&gt;
c0   [Sat Jul 28 00:01:24 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 28 01:45:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x11584AD&lt;br /&gt;
c0   [Sat Jul 28 18:54:25 2012]  WARNING   Sector repair completed: port=1, LBA=0x447C3E6C&lt;br /&gt;
c0   [Sat Jul 28 21:13:46 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug  1 03:20:11 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FEF&lt;br /&gt;
c0   [Fri Aug  3 00:50:03 2012]  WARNING   Sector repair completed: port=0, LBA=0xCED0ACA&lt;br /&gt;
c0   [Sat Aug  4 00:01:22 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug  4 00:52:51 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF3&lt;br /&gt;
c0   [Sat Aug  4 00:53:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF5&lt;br /&gt;
c0   [Sat Aug  4 00:53:33 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF7&lt;br /&gt;
c0   [Sat Aug  4 00:53:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF9&lt;br /&gt;
c0   [Sat Aug  4 00:54:00 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFB&lt;br /&gt;
c0   [Sat Aug  4 00:54:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFD&lt;br /&gt;
c0   [Sat Aug  4 00:54:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFF&lt;br /&gt;
c0   [Sat Aug  4 04:43:12 2012]  WARNING   Sector repair completed: port=1, LBA=0x16974289&lt;br /&gt;
c0   [Sat Aug  4 04:58:17 2012]  WARNING   Sector repair completed: port=1, LBA=0x1697428E&lt;br /&gt;
c0   [Sat Aug  4 20:54:53 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug  8 03:21:55 2012]  ERROR     Drive timeout detected: port=1&lt;br /&gt;
c0   [Wed Aug  8 15:31:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x1A366CD3&lt;br /&gt;
c0   [Sat Aug 11 00:01:21 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 11 20:40:51 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Aug 16 05:10:55 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C22593&lt;br /&gt;
c0   [Sat Aug 18 00:01:18 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 18 03:00:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C225A5&lt;br /&gt;
c0   [Sat Aug 18 03:43:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x23EE91E&lt;br /&gt;
c0   [Sat Aug 18 03:43:23 2012]  WARNING   Sector repair completed: port=1, LBA=0x23EE920&lt;br /&gt;
c0   [Sat Aug 18 17:00:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x137D066A&lt;br /&gt;
c0   [Sat Aug 18 17:00:29 2012]  WARNING   Sector repair completed: port=1, LBA=0x137D066D&lt;br /&gt;
c0   [Sat Aug 18 21:13:01 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug 22 01:36:08 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A0F&lt;br /&gt;
c0   [Wed Aug 22 01:37:42 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A13&lt;br /&gt;
c0   [Fri Aug 24 04:01:36 2012]  WARNING   Sector repair completed: port=1, LBA=0x55C1A5DF&lt;br /&gt;
c0   [Fri Aug 24 05:02:06 2012]  WARNING   Sector repair completed: port=1, LBA=0xCE3378A&lt;br /&gt;
c0   [Sat Aug 25 00:01:17 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 25 00:31:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x50F65D&lt;br /&gt;
c0   [Sat Aug 25 00:39:52 2012]  WARNING   Sector repair completed: port=0, LBA=0x678FF4&lt;br /&gt;
c0   [Sat Aug 25 03:43:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A15&lt;br /&gt;
c0   [Sat Aug 25 03:43:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A19&lt;br /&gt;
c0   [Sat Aug 25 03:43:58 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A1B&lt;br /&gt;
c0   [Sat Aug 25 03:44:30 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A21&lt;br /&gt;
c0   [Sat Aug 25 20:58:14 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug 29 04:57:15 2012]  WARNING   Sector repair completed: port=1, LBA=0xF3957EB&lt;br /&gt;
c0   [Sat Sep  1 00:01:15 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep  1 03:21:52 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC86&lt;br /&gt;
c0   [Sat Sep  1 03:22:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC88&lt;br /&gt;
c0   [Sat Sep  1 03:22:34 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC8A&lt;br /&gt;
c0   [Sat Sep  1 03:22:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC8C&lt;br /&gt;
c0   [Sat Sep  1 17:17:22 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD1&lt;br /&gt;
c0   [Sat Sep  1 17:17:45 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD3&lt;br /&gt;
c0   [Sat Sep  1 17:18:04 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD5&lt;br /&gt;
c0   [Sat Sep  1 21:36:56 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Sep  6 00:07:30 2012]  WARNING   Sector repair completed: port=0, LBA=0xDA3C64B&lt;br /&gt;
c0   [Thu Sep  6 00:32:56 2012]  WARNING   Sector repair completed: port=1, LBA=0x6BBA816&lt;br /&gt;
c0   [Sat Sep  8 00:01:13 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep  8 00:09:56 2012]  WARNING   Sector repair completed: port=0, LBA=0xDEBC958&lt;br /&gt;
c0   [Sat Sep  8 04:38:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x38D254F&lt;br /&gt;
c0   [Sat Sep  8 20:44:50 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Sep 10 01:26:34 2012]  WARNING   Sector repair completed: port=1, LBA=0xFFD8D5E&lt;br /&gt;
c0   [Wed Sep 12 00:33:48 2012]  WARNING   Sector repair completed: port=1, LBA=0xE8DB928&lt;br /&gt;
c0   [Wed Sep 12 00:36:33 2012]  WARNING   Sector repair completed: port=1, LBA=0x6D49411&lt;br /&gt;
c0   [Fri Sep 14 01:59:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x1467F1C&lt;br /&gt;
c0   [Fri Sep 14 02:08:27 2012]  WARNING   Sector repair completed: port=1, LBA=0x14C8ABD&lt;br /&gt;
c0   [Fri Sep 14 03:54:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x1580C915&lt;br /&gt;
c0   [Sat Sep 15 00:01:11 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 15 02:38:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178973&lt;br /&gt;
c0   [Sat Sep 15 02:59:02 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178975&lt;br /&gt;
c0   [Sat Sep 15 04:47:08 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA0356&lt;br /&gt;
c0   [Sat Sep 15 04:47:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA0359&lt;br /&gt;
c0   [Sat Sep 15 10:41:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x6DFD1EC&lt;br /&gt;
c0   [Sat Sep 15 13:25:23 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD100&lt;br /&gt;
c0   [Sat Sep 15 13:25:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD104&lt;br /&gt;
c0   [Sat Sep 15 13:25:54 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD106&lt;br /&gt;
c0   [Sat Sep 15 17:10:50 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178977&lt;br /&gt;
c0   [Sat Sep 15 20:59:57 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Sep 18 01:17:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x803B05B&lt;br /&gt;
c0   [Sat Sep 22 00:01:10 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 22 20:54:31 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Sep 25 01:56:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x26E3909&lt;br /&gt;
c0   [Sat Sep 29 00:01:08 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 29 02:04:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x146AC03&lt;br /&gt;
c0   [Sat Sep 29 10:58:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x6D4EB0E&lt;br /&gt;
c0   [Sat Sep 29 10:59:02 2012]  WARNING   Sector repair completed: port=0, LBA=0x6D4EB14&lt;br /&gt;
c0   [Sat Sep 29 11:22:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x6F79623&lt;br /&gt;
c0   [Sat Sep 29 13:50:48 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D65E&lt;br /&gt;
c0   [Sat Sep 29 13:51:11 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D661&lt;br /&gt;
c0   [Sat Sep 29 13:51:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D663&lt;br /&gt;
c0   [Sat Sep 29 20:57:34 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Oct  1 04:47:24 2012]  WARNING   Sector repair completed: port=0, LBA=0xC5BC6F2&lt;br /&gt;
c0   [Tue Oct  2 02:00:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x1547667&lt;br /&gt;
c0   [Tue Oct  2 02:01:56 2012]  WARNING   Sector repair completed: port=0, LBA=0x154766F&lt;br /&gt;
c0   [Tue Oct  2 05:02:31 2012]  WARNING   Sector repair completed: port=1, LBA=0xD67D054&lt;br /&gt;
c0   [Tue Oct  2 05:04:14 2012]  WARNING   Sector repair completed: port=1, LBA=0xD67D056&lt;br /&gt;
c0   [Wed Oct  3 01:22:12 2012]  WARNING   Sector repair completed: port=1, LBA=0x12AAF8CA&lt;br /&gt;
c0   [Thu Oct  4 04:29:22 2012]  WARNING   Sector repair completed: port=0, LBA=0x13E6F992&lt;br /&gt;
c0   [Thu Oct  4 05:10:51 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C252A4&lt;br /&gt;
c0   [Sat Oct  6 00:01:07 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct  6 19:41:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x5A5C3AE8&lt;br /&gt;
c0   [Sat Oct  6 21:01:05 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Oct  8 00:32:06 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C60D3E&lt;br /&gt;
c0   [Tue Oct  9 03:51:03 2012]  WARNING   Sector repair completed: port=1, LBA=0x89B5EC9&lt;br /&gt;
c0   [Thu Oct 11 04:21:17 2012]  WARNING   Sector repair completed: port=1, LBA=0x13F85833&lt;br /&gt;
c0   [Sat Oct 13 00:01:05 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 13 05:12:40 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA5134&lt;br /&gt;
c0   [Sat Oct 13 21:08:35 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Oct 16 03:53:50 2012]  WARNING   Sector repair completed: port=1, LBA=0x148AA1BD&lt;br /&gt;
c0   [Thu Oct 18 03:20:30 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C8DABCB&lt;br /&gt;
c0   [Thu Oct 18 04:52:50 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879057&lt;br /&gt;
c0   [Sat Oct 20 00:01:04 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 20 02:19:25 2012]  WARNING   Sector repair completed: port=1, LBA=0x174B012&lt;br /&gt;
c0   [Sat Oct 20 03:41:38 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D93B&lt;br /&gt;
c0   [Sat Oct 20 03:42:01 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D93D&lt;br /&gt;
c0   [Sat Oct 20 03:42:40 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D940&lt;br /&gt;
c0   [Sat Oct 20 03:42:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D942&lt;br /&gt;
c0   [Sat Oct 20 03:43:12 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D944&lt;br /&gt;
c0   [Sat Oct 20 03:43:26 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D948&lt;br /&gt;
c0   [Sat Oct 20 16:37:52 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879060&lt;br /&gt;
c0   [Sat Oct 20 16:38:15 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879062&lt;br /&gt;
c0   [Sat Oct 20 21:00:18 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Oct 20 23:49:01 2012]  WARNING   Sector repair completed: port=1, LBA=0x4473E908&lt;br /&gt;
c0   [Sun Oct 21 03:42:26 2012]  WARNING   Sector repair completed: port=1, LBA=0x175BADD5&lt;br /&gt;
c0   [Tue Oct 23 01:09:04 2012]  WARNING   Sector repair completed: port=1, LBA=0x6E524860&lt;br /&gt;
c0   [Fri Oct 26 03:21:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x802C61&lt;br /&gt;
c0   [Fri Oct 26 04:22:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x176353CD&lt;br /&gt;
c0   [Sat Oct 27 00:01:03 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 27 00:49:35 2012]  WARNING   Sector repair completed: port=0, LBA=0x802C65&lt;br /&gt;
c0   [Sat Oct 27 17:02:24 2012]  WARNING   Sector repair completed: port=1, LBA=0xC1FF26D&lt;br /&gt;
c0   [Sat Oct 27 17:09:06 2012]  WARNING   Sector repair completed: port=0, LBA=0xDF621AD&lt;br /&gt;
c0   [Sat Oct 27 21:30:57 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Oct 30 00:20:46 2012]  WARNING   Sector repair completed: port=0, LBA=0xE9FE2AB&lt;br /&gt;
c0   [Wed Oct 31 02:02:03 2012]  WARNING   Sector repair completed: port=0, LBA=0x1460C25&lt;br /&gt;
c0   [Wed Oct 31 02:04:05 2012]  WARNING   Sector repair completed: port=0, LBA=0x1460C28&lt;br /&gt;
c0   [Thu Nov  1 00:48:34 2012]  WARNING   Sector repair completed: port=1, LBA=0xA7C92BE&lt;br /&gt;
c0   [Thu Nov  1 05:04:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C252C2&lt;br /&gt;
&lt;br /&gt;
[root@backup1 /data/deprecated]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for failed drives and errors. From the output above we should probably replace the drives on ports 0 and 1; port 1 is even showing DEVICE-ERROR, yet amazingly the RAID array is still healthy. The alarm log also shows the automatic verifies, which run every Saturday.&lt;br /&gt;
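The eyeballing above can be partially automated by grepping saved verify output for the states that need attention. A sketch, assuming the output has been saved to a file (the helper name and state list are assumptions; DEVICE-ERROR, DEGRADED and REBUILDING are states that appear in real tw_cli output):&lt;br /&gt;

```shell
# check_raid_health: scan saved 3ware verify output for problem states.
# Helper name and state list are assumptions for illustration.
check_raid_health() {
    if grep -E 'DEGRADED|DEVICE-ERROR|REBUILDING' "$1" > /dev/null; then
        echo "ATTENTION NEEDED"
    else
        echo "all OK"
    fi
}

# Example: sh /root/verify.sh > /tmp/verify.out; check_raid_health /tmp/verify.out
```

Note that NOT-PRESENT is deliberately excluded: empty ports are normal on these chassis.&lt;br /&gt;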
&lt;br /&gt;
Note: when rebuilding a degraded mirror, the CLI will show no rebuild progress while it runs:&lt;br /&gt;
&amp;lt;pre&amp;gt;Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u0    RAID-5    REBUILDING     0       -       64K     4656.56   OFF    ON&amp;lt;/pre&amp;gt;&lt;br /&gt;
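Since there is no progress indicator, a simple way to know when the rebuild finishes is to poll the unit status until REBUILDING disappears. A sketch, assuming tw_cli is on the PATH (the controller/unit IDs and the 60-second interval are placeholders):&lt;br /&gt;

```shell
# wait_for_rebuild: run the given status command until its output no
# longer contains REBUILDING. Pass the real command as one argument, e.g.:
#   wait_for_rebuild "tw_cli /c0/u0 show"
# (controller and unit numbers are placeholders)
wait_for_rebuild() {
    while $1 | grep -q REBUILDING; do
        sleep 60
    done
    echo "rebuild finished"
}
```

Run it in a screen session or background it, since a large unit can take many hours.&lt;br /&gt;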
&lt;br /&gt;
&lt;br /&gt;
=== areca ===&lt;br /&gt;
&lt;br /&gt;
We are using an areca controller on backup3. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@newbackup3 ~]# sh /root/verify.sh&lt;br /&gt;
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State&lt;br /&gt;
===============================================================================&lt;br /&gt;
  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(19.7%)&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
 #  Name             Disks TotalCap  FreeCap DiskChannels       State&lt;br /&gt;
===============================================================================&lt;br /&gt;
 1  Raid Set # 00        6 6000.0GB    0.0GB 123456             Checking&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
Date-Time            Device           Event Type            Elapsed Time Errors&lt;br /&gt;
===============================================================================&lt;br /&gt;
2012-12-05 20:40:58  ARC-1160-VOL#00  Start Checking&lt;br /&gt;
2012-12-01 05:06:04  ARC-1160-VOL#00  Complete Init         027:30:45&lt;br /&gt;
2012-11-30 01:35:19  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-06 01:34:52  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-30 01:33:36  ARC-1160-VOL#00  Stop Initialization   000:31:48&lt;br /&gt;
2012-11-30 01:01:47  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-06 00:58:13  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-30 00:57:26  ARC-1160-VOL#00  Stop Initialization   000:57:07&lt;br /&gt;
2012-11-30 00:00:19  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 23:56:48  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:52:58  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:50:14  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:43:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 23:10:07  ARC-1160-VOL#00  Stop Initialization   000:00:56&lt;br /&gt;
2012-11-29 23:09:11  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 23:08:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 23:08:10  ARC-1160-VOL#00  Stop Initialization   000:20:41&lt;br /&gt;
2012-11-29 22:47:29  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 22:46:59  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:45:55  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:44:53  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:42:06  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:40:50  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 22:40:04  ARC-1160-VOL#00  Stop Initialization   000:24:25&lt;br /&gt;
2012-11-29 22:15:38  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 22:15:11  000:000001215B00 Restart Init LBA Point&lt;br /&gt;
2026-08-05 22:15:10  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 21:56:38  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 21:56:12  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 21:56:04  IDE Channel #03  Device Inserted&lt;br /&gt;
2012-11-29 21:55:13  IDE Channel #04  Device Inserted&lt;br /&gt;
2012-11-29 21:55:03  IDE Channel #02  Device Inserted&lt;br /&gt;
2026-08-05 21:53:09  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:51:46  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:49:56  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:48:29  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:46:29  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:44:49  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:43:01  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:36:25  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:31:18  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:30:08  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:08:40  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:06:11  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:05:14  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:03:58  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:00:56  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:57:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:56:15  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:55:05  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 17:24:36  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 17:22:43  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:50:42  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:47:33  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:43:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:18:52  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:17:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:13:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:10:26  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:09:23  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:08:09  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:07:12  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:05:51  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:04:27  H/W MONITOR      Raid Powered On&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for failed drives and errors. &lt;br /&gt;
&lt;br /&gt;
When it proceeds to verifying, you can confirm with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@newbackup3 ~]# cli64 vsf info&lt;br /&gt;
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State&lt;br /&gt;
===============================================================================&lt;br /&gt;
  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(22.5%)&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
[root@newbackup3 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Update OS list ==&lt;br /&gt;
&lt;br /&gt;
# check for any new VZ templates we want to offer: &amp;lt;tt&amp;gt;vzup2date -z&amp;lt;/tt&amp;gt;&lt;br /&gt;
# see if there are any OSes we want to include in our colo install list. Update 2 places: &amp;lt;tt&amp;gt;signup/html/colo_quote.html&amp;lt;/tt&amp;gt; &amp;amp; &amp;lt;tt&amp;gt;signup/html/step1.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
# update the mgmt database (ref_templates table, ref_systems table).&lt;br /&gt;
&lt;br /&gt;
= Infrequent tasks =&lt;br /&gt;
&lt;br /&gt;
== Free up space on gateway ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;newgateway /var/spool# cd clientmqueue/&lt;br /&gt;
newgateway /var/spool/clientmqueue# sh&lt;br /&gt;
# for f in `ls`; do rm $f; done&lt;br /&gt;
exit&amp;lt;/pre&amp;gt;&lt;br /&gt;
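&lt;br /&gt;
The loop above spawns one rm per file and can be slow when clientmqueue holds tens of thousands of files. Here is a minimal sketch of an alternative using find&#039;s -delete; the /tmp/clientmqueue-demo scratch path is ours, just so the sketch is safe to run anywhere. Point QUEUE at /var/spool/clientmqueue on the real gateway.&lt;br /&gt;

```shell
# Sketch: clear all regular files from a spool directory in one pass.
# QUEUE defaults to a scratch directory here so the sketch is safe to
# run anywhere; on newgateway it would be /var/spool/clientmqueue.
QUEUE=${QUEUE:-/tmp/clientmqueue-demo}
mkdir -p "$QUEUE"
touch "$QUEUE/qf001" "$QUEUE/df001"

# find -delete removes the files without spawning one rm per file and
# without expanding the whole listing on a command line.
find "$QUEUE" -type f -delete

# Verify the directory is now empty.
find "$QUEUE" -type f | wc -l
```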
&lt;br /&gt;
== Free up space on mail ==&lt;br /&gt;
&lt;br /&gt;
You can clear out root mail:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /var/log# ll -h /var/mail/root&lt;br /&gt;
-rw-------  1 root  mail    543K Dec 19 13:05 /var/mail/root&lt;br /&gt;
mail /var/log# rm /var/mail/root&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Or you can archive mail logs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /var/log# ls -l htt*&lt;br /&gt;
-rw-r--r--  1 root  wheel  297436931 Dec 19 13:26 httpd-access.log&lt;br /&gt;
-rw-r--r--  1 root  wheel    9824324 Jul  4 11:34 httpd-access.log.old.0.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    6884137 Mar 17  2012 httpd-access.log.old.1.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   18557444 Dec  3  2009 httpd-access.log.old.10.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14740263 Jan  9  2007 httpd-access.log.old.11.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14209465 Nov 28  2007 httpd-access.log.old.12.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   16874396 Feb 19  2012 httpd-access.log.old.3.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14554859 Jul 22  2011 httpd-access.log.old.4.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10513227 Feb 18  2011 httpd-access.log.old.5.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    7201946 Oct 29  2010 httpd-access.log.old.6.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10062537 May  6  2010 httpd-access.log.old.7.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10157042 Aug 12  2010 httpd-access.log.old.8.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   11909534 Mar  4  2010 httpd-access.log.old.9.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   59030930 Dec 19 13:01 httpd-error.log&lt;br /&gt;
-rw-r--r--  1 root  wheel    3413134 Mar  4  2010 httpd-error.log.0.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel     795515 May  1  2007 httpd-error.log.1.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    1142153 Nov 30  2007 httpd-error.log.2.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    2325801 Feb 18  2011 httpd-error.log.gz&lt;br /&gt;
&lt;br /&gt;
mail /var/log# sh&lt;br /&gt;
# for f in 12 11 10 9 8 7 6 5 4 3 2 1 0; do g=`echo $f+1|bc`; mv httpd-access.log.old.$f.gz httpd-access.log.old.$g.gz; done&lt;br /&gt;
# mv httpd-access.log httpd-access.log.old.0&lt;br /&gt;
# touch httpd-access.log&lt;br /&gt;
# apachectl restart&lt;br /&gt;
# gzip httpd-access.log.old.0&lt;br /&gt;
&lt;br /&gt;
# for f in 2 1 0; do g=`echo $f+1|bc`; mv httpd-error.log.$f.gz httpd-error.log.$g.gz; done&lt;br /&gt;
# mv httpd-error.log httpd-error.log.0&lt;br /&gt;
# touch httpd-error.log&lt;br /&gt;
# apachectl restart&lt;br /&gt;
# gzip httpd-error.log.0&lt;br /&gt;
# exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
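&lt;br /&gt;
The shift-rename dance above can be wrapped in a small function. This is a sketch, not the script actually used on mail: the function name and the index cap of 30 are ours, and the demo runs against scratch files in /tmp. On mail you would run it on the real logs and follow with &amp;lt;tt&amp;gt;apachectl restart&amp;lt;/tt&amp;gt; as shown above.&lt;br /&gt;

```shell
# Sketch: shift every numbered httpd-access.log.old.N.gz archive up one
# slot (highest index first so nothing is overwritten), then archive the
# live log into slot 0. The index cap of 30 is arbitrary.
rotate_log() {
  log=$1
  for n in $(seq 30 -1 0); do
    if [ -f "$log.old.$n.gz" ]; then
      mv "$log.old.$n.gz" "$log.old.$((n+1)).gz"
    fi
  done
  mv "$log" "$log.old.0"
  touch "$log"
  gzip "$log.old.0"
}

# Demo against scratch files; on mail you would run this on the real
# logs and follow with apachectl restart as shown above.
rm -rf /tmp/rotdemo
mkdir -p /tmp/rotdemo
cd /tmp/rotdemo
touch httpd-access.log
echo old | gzip -c > httpd-access.log.old.0.gz
rotate_log httpd-access.log
```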
&lt;br /&gt;
== Free up space on bwdb2 ==&lt;br /&gt;
&lt;br /&gt;
You can either remove items from &amp;lt;tt&amp;gt;/usr/home/archive&amp;lt;/tt&amp;gt; or scp them to &amp;lt;tt&amp;gt;backup3:/data/bwdb2/archive&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Free up space on backup1 ==&lt;br /&gt;
&lt;br /&gt;
backup1 is our primary customer backup system. As usage grows, it needs to be purged of old files regularly. The easiest way to do this is by removing deprecated files, which mostly consist of cancelled customers or temporary dump/storage files (created during dumps/restores). Our standard policy is to keep a cancelled customer&#039;s files for 6 months, after which we remove them (as far as customers know their data is purged immediately, but we hang onto it just in case; and when we cancel a server for non-payment, this makes it easy to restore the system). To find files to remove:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 ~]# cd /data/deprecated/&lt;br /&gt;
[root@backup1 /data/deprecated]# ls&lt;br /&gt;
2101-migrated-20120317.tgz                old-683-cxld-20121021.tgz&lt;br /&gt;
69.55.230.2-wwwbackup                     old-744-cxld-20120708.tgz&lt;br /&gt;
991-DONTDELETE.tgz                        old-809-cxld-20120609.tgz&lt;br /&gt;
archive-col02050-mdfile-cxld-20120409.gz  old-854-cxld-20120621.tgz&lt;br /&gt;
col01371.tgz                              old-931-cxld-20060513.tgz&lt;br /&gt;
deleteme_ubuntu-10.10-x86_20111205        old-col00123-mdfile-noarchive-20120417.gz&lt;br /&gt;
jail10_old                                old-col00147-vnfile-cxld-20120828.gz&lt;br /&gt;
jail14_rsync_old                          old-col00419-dump-cxld-20120224.gz&lt;br /&gt;
jail15_old                                old-col01098-vnfile-cxld-20120827.gz&lt;br /&gt;
jail3_old                                 old-col01278-dump-cxld-20120822&lt;br /&gt;
jail4_old                                 old-col01517-dump-cxld-20120828&lt;br /&gt;
jail5_old                                 old-col01669-dump-cxld-20120203.gz&lt;br /&gt;
old-1009-cxld-20120608.tgz                old-col01687-dump-cxld-20120909&lt;br /&gt;
old-1012-cxld-20120411.tgz                old-col01790-dump-cxld-20120828&lt;br /&gt;
old-1052-cxld-20120721.tgz                old-col01812-dump-cxld-20120820&lt;br /&gt;
old-10631-cxld-20120622.tgz               old-col01938-mdfile-cxld-20120619.gz&lt;br /&gt;
old-10632-cxld-20120622.tgz               old-col02095-mdfile-noarchive-20120523.gz&lt;br /&gt;
old-10633-cxld-20120622.tgz               olddebian-3.0-v15-20110610.tgz&lt;br /&gt;
old-1236-cxld-20120621.tgz                oldmod_frontpage-deb30-v15-20110610.tgz&lt;br /&gt;
old-1381-cxld-20120404.tgz                oldmod_perl-deb30-v15-20110610.tgz&lt;br /&gt;
old-1422-cxld-20120721.tgz                oldmod_ssl-deb30-v15-20110610.tgz&lt;br /&gt;
old-14681-cxld-20120619.tgz               oldmysql-deb30-v15-20110610.tgz&lt;br /&gt;
old-1544-cxld-20120626.tgz                oldproftpd-deb30-v15-20110610.tgz&lt;br /&gt;
old-18351-cxld-20120605.tgz               old_virt14&lt;br /&gt;
old-1853-cxld-20120910.tgz                old_virt18&lt;br /&gt;
old-1963-cxld-20120206.tgz                oldwebmin-deb30-v15-20110610.tgz&lt;br /&gt;
old-1967-cxld-20120605.tgz                suse.virt11.20120421.tgz&lt;br /&gt;
old-1981-noarchive-20120729.tgz           virt11&lt;br /&gt;
old-2030-migrated-noarchive-20120727.tgz  virt12_old&lt;br /&gt;
old-2037-cxld-20120716.tgz                virt13_old&lt;br /&gt;
old-2065-cxld-20120727.tgz                virt16_old&lt;br /&gt;
old-2068-cxld-20120424.tgz                virt4_old&lt;br /&gt;
old-2085-cxld-20120531.tgz                virt5_old&lt;br /&gt;
old-364-cxld-20120904.tgz                 virt6_old&lt;br /&gt;
old-446-cxld-20120512.tgz                 virt7_old&lt;br /&gt;
old-613-cxld-20120601.tgz                 virt8_old&lt;br /&gt;
[root@backup1 /data/deprecated]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
virtX_old and jailX_old are permanently archived, so ignore those, as well as anything else marked not to delete or otherwise suspicious. Likewise, it&#039;s probably a good idea to hang onto the oldTEMPLATE.gz files as long as we can.&lt;br /&gt;
Most of what we want to delete is stamped with the date it was deprecated, which makes this easy. So to remove files from 6 months ago (running this in October):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 /data/deprecated]# ls old*201204*&lt;br /&gt;
old-1012-cxld-20120411.tgz  old-2068-cxld-20120424.tgz&lt;br /&gt;
old-1381-cxld-20120404.tgz  old-col00123-mdfile-noarchive-20120417.gz&lt;br /&gt;
[root@backup1 /data/deprecated]# rm old*201204*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
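&lt;br /&gt;
Rather than working out the six-months-ago pattern by hand, you can compute it. This is a sketch assuming GNU date on backup1 (the -d flag is a GNU-ism), and it only prints the glob rather than deleting anything:&lt;br /&gt;

```shell
# Compute the YYYYMM stamp for six months ago and build the glob used
# above (e.g. old*201204* when run in October 2012).
cutoff=$(date -d '6 months ago' +%Y%m)
echo "old*${cutoff}*"

# On backup1 you would then dry-run with ls before deleting:
#   ls old*${cutoff}*
#   rm old*${cutoff}*
```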
&lt;br /&gt;
Every few months you will also want to remove some of the snapshot archives for mail.  We typically save the 1st, 10th, and 20th of each month.  To do this, set aside the dates you want to keep, remove a month at a time, then restore the set-aside dates. Here&#039;s how that works:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 /data/www/daily]# ls&lt;br /&gt;
05                     08-10-11  10-04-10  11-10-10  12-07-29  12-09-21  12-11-14&lt;br /&gt;
06                     08-10-21  10-04-20  11-10-20  12-07-30  12-09-22  12-11-15&lt;br /&gt;
06-06-01-usr-home.tgz  08-11-01  10-05-01  11-11-01  12-07-31  12-09-23  12-11-16&lt;br /&gt;
06-07-01-usr-home.tgz  08-11-10  10-05-11  11-11-10  12-08-01  12-09-24  12-11-17&lt;br /&gt;
06-08-01-usr-home.tgz  08-11-20  10-05-20  11-11-20  12-08-02  12-09-25  12-11-18&lt;br /&gt;
06-09-01-usr-home.tgz  08-12-01  10-06-01  11-12-01  12-08-03  12-09-26  12-11-19&lt;br /&gt;
06-11-10               08-12-10  10-06-10  11-12-10  12-08-04  12-09-27  12-11-20&lt;br /&gt;
06-12-21               08-12-20  10-06-20  11-12-20  12-08-05  12-09-28  12-11-21&lt;br /&gt;
07-01-10               09-01-01  10-07-01  12-01-01  12-08-06  12-09-29  12-11-22&lt;br /&gt;
07-01-20               09-01-10  10-07-10  12-01-10  12-08-07  12-09-30  12-11-23&lt;br /&gt;
07-02-10               09-01-20  10-07-20  12-01-20  12-08-08  12-10-01  12-11-24&lt;br /&gt;
07-02-20               09-02-01  10-08-01  12-02-01  12-08-09  12-10-02  12-11-25&lt;br /&gt;
07-03-01               09-02-10  10-08-10  12-02-10  12-08-10  12-10-03  12-11-26&lt;br /&gt;
07-03-20               09-02-20  10-08-20  12-02-20  12-08-11  12-10-04  12-11-27&lt;br /&gt;
07-04-01               09-03-01  10-09-01  12-03-01  12-08-12  12-10-05  12-11-28&lt;br /&gt;
07-04-10               09-03-10  10-09-10  12-03-10  12-08-13  12-10-06  12-11-29&lt;br /&gt;
07-04-20               09-03-20  10-09-20  12-03-20  12-08-14  12-10-07  12-11-30&lt;br /&gt;
07-05-01               09-04-01  10-10-01  12-04-01  12-08-15  12-10-08  12-12-01&lt;br /&gt;
07-05-10               09-04-10  10-10-10  12-04-10  12-08-16  12-10-09  12-12-02&lt;br /&gt;
07-05-20               09-04-20  10-10-20  12-04-20  12-08-17  12-10-10  12-12-03&lt;br /&gt;
07-06-01               09-05-01  10-11-01  12-05-01  12-08-18  12-10-11  12-12-04&lt;br /&gt;
07-06-10               09-05-10  10-11-10  12-05-10  12-08-19  12-10-12  12-12-05&lt;br /&gt;
07-06-20               09-05-20  10-11-20  12-05-20  12-08-20  12-10-13  12-12-06&lt;br /&gt;
07-07-20               09-06-01  10-12-01  12-06-01  12-08-21  12-10-14  12-12-07&lt;br /&gt;
07-08-10               09-06-10  10-12-10  12-06-10  12-08-22  12-10-15  12-12-08&lt;br /&gt;
07-08-20               09-06-20  10-12-20  12-06-20  12-08-23  12-10-16  12-12-09&lt;br /&gt;
07-09-01               09-07-01  11-01-01  12-07-01  12-08-24  12-10-17  12-12-10&lt;br /&gt;
07-10-01               09-07-10  11-01-10  12-07-02  12-08-25  12-10-18  12-12-11&lt;br /&gt;
07-10-10               09-07-20  11-01-21  12-07-03  12-08-26  12-10-19  12-12-12&lt;br /&gt;
07-10-20               09-08-01  11-02-01  12-07-04  12-08-27  12-10-20  12-12-13&lt;br /&gt;
07-12-01               09-08-10  11-02-10  12-07-05  12-08-28  12-10-21  12-12-14&lt;br /&gt;
07-12-10               09-08-20  11-02-20  12-07-06  12-08-29  12-10-22  12-12-15&lt;br /&gt;
08-01-01               09-09-01  11-03-01  12-07-07  12-08-30  12-10-23  12-12-16&lt;br /&gt;
08-01-20               09-09-10  11-03-10  12-07-08  12-08-31  12-10-24  12-12-17&lt;br /&gt;
08-02-20               09-09-20  11-03-20  12-07-09  12-09-01  12-10-25  12-12-18&lt;br /&gt;
08-03-01               09-10-01  11-04-01  12-07-10  12-09-02  12-10-26  12-12-19&lt;br /&gt;
08-03-10               09-10-10  11-04-10  12-07-11  12-09-03  12-10-27  12-12-20&lt;br /&gt;
08-03-20               09-10-20  11-04-20  12-07-12  12-09-04  12-10-28  12-12-21&lt;br /&gt;
08-04-01               09-11-01  11-05-01  12-07-13  12-09-05  12-10-29  12-12-22&lt;br /&gt;
08-04-20               09-11-10  11-05-10  12-07-14  12-09-06  12-10-30  12-12-23&lt;br /&gt;
08-05-01               09-11-20  11-05-20  12-07-15  12-09-07  12-10-31  12-12-24&lt;br /&gt;
08-05-10               09-12-01  11-06-01  12-07-16  12-09-08  12-11-01  12-12-25&lt;br /&gt;
08-06-10               09-12-10  11-06-10  12-07-17  12-09-09  12-11-02  12-12-26&lt;br /&gt;
08-06-20               09-12-20  11-06-20  12-07-18  12-09-10  12-11-03  12-12-27&lt;br /&gt;
08-07-02               10-01-01  11-07-01  12-07-19  12-09-11  12-11-04  12-12-28&lt;br /&gt;
08-07-10               10-01-10  11-07-10  12-07-20  12-09-12  12-11-05  2008-10-23&lt;br /&gt;
08-07-20               10-01-20  11-07-20  12-07-21  12-09-13  12-11-06  bb.tgz&lt;br /&gt;
08-08-01               10-02-01  11-08-01  12-07-22  12-09-14  12-11-07  boot&lt;br /&gt;
08-08-10               10-02-10  11-08-10  12-07-23  12-09-15  12-11-08  current&lt;br /&gt;
08-08-21               10-02-20  11-08-20  12-07-24  12-09-16  12-11-09  hold&lt;br /&gt;
08-09-01               10-03-01  11-09-01  12-07-25  12-09-17  12-11-10&lt;br /&gt;
08-09-10               10-03-10  11-09-10  12-07-26  12-09-18  12-11-11&lt;br /&gt;
08-09-21               10-03-20  11-09-20  12-07-27  12-09-19  12-11-12&lt;br /&gt;
08-10-01               10-04-01  11-10-01  12-07-28  12-09-20  12-11-13&lt;br /&gt;
[root@backup1 /data/www/daily]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So we see that everything up to July 2012 has been pruned. To prune July 2012 we do the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mv 12-07-01 hold&lt;br /&gt;
mv 12-07-10 hold&lt;br /&gt;
mv 12-07-20 hold&lt;br /&gt;
rm -fr 12-07*&lt;br /&gt;
mv hold/* .&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=Routine_Maintenance&amp;diff=2201</id>
		<title>Routine Maintenance</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=Routine_Maintenance&amp;diff=2201"/>
		<updated>2020-02-25T17:50:43Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* check jail5 for crashed VPSs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Daily Tasks =&lt;br /&gt;
&lt;br /&gt;
== check load graphs == &lt;br /&gt;
&lt;br /&gt;
Click on the Load link in mgmt &lt;br /&gt;
&lt;br /&gt;
This screen shows you load levels on our servers and network traffic for critical machines (firewalls, backup servers).&lt;br /&gt;
&lt;br /&gt;
If you see load high or increasing &lt;br /&gt;
&lt;br /&gt;
FreeBSD: &lt;br /&gt;
run [[VPS_Management#jtop|jtop]] (or [[VPS_Management#jt|jt]] on FreeBSD &amp;gt; 7.x) and see if there are any runaway processes.  Here are some examples of top entries that are definitely runaway processes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;79481 root      64   0  2256K  1056K CPU1   1  58:16 87.40% 87.40% nano&lt;br /&gt;
50650   1000    64   0  1852K  1112K RUN    0 207.9H 84.08% 84.08% screen&lt;br /&gt;
14829 www        2   0 39100K 31736K accept 0  104:24  46.54%  6.54% httpd&lt;br /&gt;
42065 root      61   0  1300K   844K RUN    1  47.8H 91.36% 91.36% ee&lt;br /&gt;
1328 www       56   0 18440K 10796K CPU1   0  64.4H 97.71% 97.71% httpd&lt;br /&gt;
26251 user      57   0  6124K  1160K CPU1   1  82.9H 98.44% 98.44% screen&lt;br /&gt;
89874 root      60   0  1352K   892K RUN    1  33.8H 65.82% 65.82% dialog&lt;br /&gt;
38656   1000    64   0  3088K  2136K CPU0   0 806:13 97.95% 97.95% StutBot&lt;br /&gt;
27630 root      64   0  1396K   972K RUN    1  76.8H 86.47% 86.47% ee&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Linux:&lt;br /&gt;
run [[VPS_Management#vwe|vwe]] to see which VPSes have high loads. From there run &amp;lt;tt&amp;gt;[[VPS_Management#vp|vp]] &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt; and/or &amp;lt;tt&amp;gt;[[VPS_Management#vt|vt]] &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt; to see what&#039;s going on in that system.&lt;br /&gt;
[[VPS_Management#vzstat|vzstat]] will also give you a good picture of what&#039;s going on; systems with high numbers in the mlat column are likely culprits.&lt;br /&gt;
&lt;br /&gt;
examples of out of control procs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;12183 nobody    16   0  4916 1348  1340 R    45.5  0.0  4249m httpd&lt;br /&gt;
29266 #502      16   0  1852  796   792 R    22.5  0.0  1104m vim&lt;br /&gt;
23860 #41       16   0  5472 5472  2076 R    98.9  0.2  31:41 python&lt;br /&gt;
19227 bin       19   0  1688  716   652 R    99.9  0.0 321:08 wtrs_ui&lt;br /&gt;
 7762 apache    16   0   268  236   224 R    85.7  0.0  1010m ptrace&lt;br /&gt;
 4624 #501      20   0  4304 2400  2044 R    53.6  0.1 284:32 YoSucker&lt;br /&gt;
20451 #506      20   0  1876  820   816 R    17.2  0.0 169:35 vim&lt;br /&gt;
 8834 #514      20   0   900  724   672 R    77.6  0.0 382:30 neostats&lt;br /&gt;
31815 apache    14   0  3176 3176  1696 R    74.4  0.1   6:15 counter&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Just kill -9 them and be done with it.&lt;br /&gt;
&lt;br /&gt;
Also, any time you see `kmod` or `ptrace`, kill those immediately no matter how much they are using; they are attempts to exploit the Linux ptrace bug.  They won&#039;t work, but they suck a lot of CPU.&lt;br /&gt;
&lt;br /&gt;
Also, any other process that has been running at 90-100% CPU for a long period should be killed, with one exception: mysqld.&lt;br /&gt;
&lt;br /&gt;
If it is a mysqld, we don&#039;t want to kill the customer&#039;s database. Instead, run &amp;lt;tt&amp;gt;[[VPS_Management#jpid|jpid]] &amp;lt;pid&amp;gt;&amp;lt;/tt&amp;gt; to see who owns it, then email them the paste containing the instructions for the nanny. Or you can simply do a &amp;lt;tt&amp;gt;kill -1 PID&amp;lt;/tt&amp;gt; on the process to restart it.&lt;br /&gt;
&lt;br /&gt;
=== Load averages jump at night ===&lt;br /&gt;
&lt;br /&gt;
The load averages on the FreeBSD systems may jump at night between 1 and 4 am because the backups are running. If that is the cause, you will see processes like `rsync` in top eating a lot of CPU time.&lt;br /&gt;
&lt;br /&gt;
== check backups ==&lt;br /&gt;
&lt;br /&gt;
mgmt -&amp;gt; Monitoring -&amp;gt; Backups and make sure every machine was backed up the previous night. &lt;br /&gt;
Also look at df on backup1 and backup2 to make sure no disk is approaching full, though bb should warn us in advance. Please note: errors encountered when a backup script runs on any of the systems will generate an email to support@johncompanies.com, so you will know the next day if a directory to be backed up has been moved or no longer exists. A paste exists for this, to notify the customer of a non-existent file/dir.&lt;br /&gt;
&lt;br /&gt;
== check bb for warnings  ==&lt;br /&gt;
mgmt -&amp;gt; BigBrother&lt;br /&gt;
&lt;br /&gt;
Some events don&#039;t generate pages (on purpose). You will only see them by going to the bb main page.&lt;br /&gt;
&lt;br /&gt;
== check jail5 for crashed VPSs  ==&lt;br /&gt;
&lt;br /&gt;
On jail5&lt;br /&gt;
  &lt;br /&gt;
  notrunning&lt;br /&gt;
&lt;br /&gt;
To restart a VPS &lt;br /&gt;
&lt;br /&gt;
  vm restart col0XXXX&lt;br /&gt;
&lt;br /&gt;
== check NetHere  ==&lt;br /&gt;
&lt;br /&gt;
Check the NetHere servers&lt;br /&gt;
&lt;br /&gt;
=== Mail systems ===&lt;br /&gt;
Check for possible SPAMMERs.&lt;br /&gt;
&lt;br /&gt;
==== Incoming ====&lt;br /&gt;
Check mta-1 and mta-2 count of customer logins for possible&lt;br /&gt;
customer SPAM compromises.&lt;br /&gt;
&lt;br /&gt;
  login_count /logs/maillog | tail -30&lt;br /&gt;
  &lt;br /&gt;
==== Outgoing ====&lt;br /&gt;
&lt;br /&gt;
Check outgoing queues on relay-1 and relay-2&lt;br /&gt;
  &lt;br /&gt;
  mail_count | tail -30&lt;br /&gt;
&lt;br /&gt;
=== Nagios ===&lt;br /&gt;
&lt;br /&gt;
Check for other problems on NetHere servers&lt;br /&gt;
&lt;br /&gt;
  https://nagios.nethere.net&lt;br /&gt;
&lt;br /&gt;
= Monthly Tasks =&lt;br /&gt;
== rotate pine sent mail (1st of month) ==&lt;br /&gt;
On the 1st of the month, before any emails are sent out, quit out of pine, then log back in. Sent mail from last month will be archived.&lt;br /&gt;
If you mess up and do it on the 3rd (for example), you can go into the previous month&#039;s saved email and save emails from the current month into the &amp;lt;tt&amp;gt;sent-mail&amp;lt;/tt&amp;gt; (current month) mailbox.&lt;br /&gt;
&lt;br /&gt;
== b/w caps ==&lt;br /&gt;
On the 1st: remove any bwcaps put into the firewall (only really applies if a bwcap was added because someone went over on b/w):&lt;br /&gt;
&lt;br /&gt;
 ipfw list|grep pipe&lt;br /&gt;
 ipfw del [each rule listed]&lt;br /&gt;
&lt;br /&gt;
NOTE: this cronjob on newgateway will do some of that for you, provided you used one of the following pipe #s:&lt;br /&gt;
 0 0 1 * * /sbin/ipfw del 3  4 5 17331&lt;br /&gt;
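&lt;br /&gt;
Deleting each remaining pipe rule by hand can also be scripted. A sketch: the two sample &amp;lt;tt&amp;gt;ipfw list&amp;lt;/tt&amp;gt; lines below are fabricated for the demo and the loop only echoes; on the real firewall you would feed it live &amp;lt;tt&amp;gt;ipfw list&amp;lt;/tt&amp;gt; output and run the del for real.&lt;br /&gt;

```shell
# Pull the rule numbers out of `ipfw list | grep pipe` style output so
# each one can be deleted in a single pass. The sample lines below are
# made up for the demo; replace the printf with `ipfw list`.
rules=$(printf '00003 pipe 1 ip from any to 10.0.0.5\n00017 pipe 2 ip from 10.0.0.9 to any\n' | grep pipe | awk '{print $1}')

for r in $rules; do
  # On the firewall this would be: ipfw del $r
  echo "ipfw del $r"
done
```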
&lt;br /&gt;
-------------------&lt;br /&gt;
&lt;br /&gt;
We really don’t do this anymore since we have centralized traffic accounting with netflow, but for posterity:&lt;br /&gt;
&lt;br /&gt;
Make sure all machines reset counters to 0 after midnight on the 1st&lt;br /&gt;
Make sure they dumped a counter&lt;br /&gt;
&lt;br /&gt;
On each jail run:&lt;br /&gt;
 trafficgather.pl&lt;br /&gt;
&lt;br /&gt;
And on each virt:&lt;br /&gt;
 linuxtrafficgather.pl&lt;br /&gt;
&lt;br /&gt;
== Monthly RAID checks ==&lt;br /&gt;
&lt;br /&gt;
Every month we check the health of, and verify the parity on, all our RAID-based systems.&lt;br /&gt;
To facilitate this, we&#039;ve created a simple script to start the process:&lt;br /&gt;
&lt;br /&gt;
 sh /root/verify.sh&lt;br /&gt;
&lt;br /&gt;
=== Adaptec controllers ===&lt;br /&gt;
Here&#039;s some sample output:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /usr/local/www/scripts# sh /root/verify.sh&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; container list /f&lt;br /&gt;
Executing: container list /full=TRUE&lt;br /&gt;
Num          Total  Oth Chunk          Scsi   Partition&lt;br /&gt;
Creation        System&lt;br /&gt;
Label Type   Size   Ctr Size   Usage   B:ID:L Offset:Size   State   RO Lk Task    Done%  Ent&lt;br /&gt;
Date   Time      Files&lt;br /&gt;
----- ------ ------ --- ------ ------- ------ ------------- ------- -- -- ------- ------ ---&lt;br /&gt;
------ -------- ------&lt;br /&gt;
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
 /dev/aacd0           mirror0          0:00:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
&lt;br /&gt;
 1    Mirror 33.9GB            Open    0:02:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
 /dev/aacd1           mirror1          0:03:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; disk list /f&lt;br /&gt;
Executing: disk list /full=TRUE&lt;br /&gt;
&lt;br /&gt;
B:ID:L  Device Type     Removable media  Vendor-ID Product-ID        Rev   Blocks    Bytes/Bl&lt;br /&gt;
ock Usage            Shared Rate&lt;br /&gt;
------  --------------  ---------------  --------- ----------------  ----- --------- --------&lt;br /&gt;
--- ---------------- ------ ----&lt;br /&gt;
0:00:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:01:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:02:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:03:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; disk show smart&lt;br /&gt;
Executing: disk show smart&lt;br /&gt;
&lt;br /&gt;
        Smart    Method of         Enable&lt;br /&gt;
        Capable  Informational     Exception  Performance  Error&lt;br /&gt;
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count&lt;br /&gt;
------  -------  ----------------  ---------  -----------  ------&lt;br /&gt;
0:00:0     Y            6             Y           N             0&lt;br /&gt;
0:01:0     Y            6             Y           N             0&lt;br /&gt;
0:02:0     Y            6             Y           N             0&lt;br /&gt;
0:03:0     Y            6             Y           N             0&lt;br /&gt;
0:06:0     N&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; task list&lt;br /&gt;
Executing: task list&lt;br /&gt;
&lt;br /&gt;
Controller Tasks&lt;br /&gt;
&lt;br /&gt;
TaskId Function  Done%  Container State Specific1 Specific2&lt;br /&gt;
------ -------- ------- --------- ----- --------- ---------&lt;br /&gt;
&lt;br /&gt;
No tasks currently running on controller&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; dia sh hi&lt;br /&gt;
Executing: diagnostic show history&lt;br /&gt;
No switches specified, defaulting to &amp;quot;/current&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 *** HISTORY BUFFER FROM CURRENT CONTROLLER RUN ***&lt;br /&gt;
&lt;br /&gt;
[00]: GetDiskLogEntry: container - 1, entry return 0&lt;br /&gt;
[01]: Container 1 started SCRUB task&lt;br /&gt;
[02]: Starting Mirror:1 scrub&lt;br /&gt;
[03]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[04]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[05]: UpdateDiskLogIndex - Set   - container 0, index 1&lt;br /&gt;
[06]: GetDiskLogEntry: container - 0, entry return 1&lt;br /&gt;
[07]: Container 0 started SCRUB task&lt;br /&gt;
[08]: Starting Mirror:0 scrub&lt;br /&gt;
[09]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[10]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[11]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[12]: Clear disk log: sector - 80, driveno 2&lt;br /&gt;
[13]: Clear disk log: sector - 80, driveno 3&lt;br /&gt;
[14]: Container 1 completed SCRUB task:&lt;br /&gt;
[15]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[16]: Clear disk log: sector - 81, driveno 1&lt;br /&gt;
[17]: Clear disk log: sector - 81, driveno 0&lt;br /&gt;
[18]: Container 0 completed SCRUB task:&lt;br /&gt;
[19]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[20]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[21]: Container 0 started SCRUB task&lt;br /&gt;
[22]: Starting Mirror:0 scrub&lt;br /&gt;
[23]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[24]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[25]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[26]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[27]: Container 1 started SCRUB task&lt;br /&gt;
[28]: Starting Mirror:1 scrub&lt;br /&gt;
[29]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[30]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[31]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[32]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[33]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[34]: Container 1 completed SCRUB task:&lt;br /&gt;
[35]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[36]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[37]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[38]: Container 0 completed SCRUB task:&lt;br /&gt;
[39]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[40]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[41]: Container 0 started SCRUB task&lt;br /&gt;
[42]: Starting Mirror:0 scrub&lt;br /&gt;
[43]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[44]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[45]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[46]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[47]: Container 1 started SCRUB task&lt;br /&gt;
[48]: Starting Mirror:1 scrub&lt;br /&gt;
[49]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[50]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[51]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[52]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[53]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[54]: Container 1 completed SCRUB task:&lt;br /&gt;
[55]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[56]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[57]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[58]: Container 0 completed SCRUB task:&lt;br /&gt;
[59]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[60]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[61]: Container 0 started SCRUB task&lt;br /&gt;
[62]: Starting Mirror:0 scrub&lt;br /&gt;
[63]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[64]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[65]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[66]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[67]: Container 1 started SCRUB task&lt;br /&gt;
[68]: Starting Mirror:1 scrub&lt;br /&gt;
[69]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[70]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[71]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[72]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[73]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[74]: Container 1 completed SCRUB task:&lt;br /&gt;
[75]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[76]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[77]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[78]: Container 0 completed SCRUB task:&lt;br /&gt;
[79]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[80]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[81]: Container 0 started SCRUB task&lt;br /&gt;
[82]: Starting Mirror:0 scrub&lt;br /&gt;
[83]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[84]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[85]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[86]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[87]: Container 1 started SCRUB task&lt;br /&gt;
[88]: Starting Mirror:1 scrub&lt;br /&gt;
[89]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[90]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[91]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[92]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[93]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[94]: Container 1 completed SCRUB task:&lt;br /&gt;
[95]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[96]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[97]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[98]: Container 0 completed SCRUB task:&lt;br /&gt;
[99]:&lt;br /&gt;
&lt;br /&gt;
========================&lt;br /&gt;
History Output Complete.&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt;&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
press enter when ready to run verify                                                 &amp;lt;INS&amp;gt;&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 0&lt;br /&gt;
Executing: container scrub 0&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 1&lt;br /&gt;
Executing: container scrub 1&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
when done run:                                                                       &lt;br /&gt;
&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nov  1 10:32:46 mail /kernel: aac0: **Monitor** Container 0 started SCRUB task&lt;br /&gt;
Nov  1 10:32:47 mail /kernel: aac0: **Monitor** Container 1 started SCRUB task&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here&#039;s an analysis of what we&#039;re seeing and what we&#039;re looking for:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; container list /f&lt;br /&gt;
Executing: container list /full=TRUE&lt;br /&gt;
Num          Total  Oth Chunk          Scsi   Partition&lt;br /&gt;
Creation        System&lt;br /&gt;
Label Type   Size   Ctr Size   Usage   B:ID:L Offset:Size   State   RO Lk Task    Done%  Ent&lt;br /&gt;
Date   Time      Files&lt;br /&gt;
----- ------ ------ --- ------ ------- ------ ------------- ------- -- -- ------- ------ ---&lt;br /&gt;
------ -------- ------&lt;br /&gt;
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
 /dev/aacd0           mirror0          0:00:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
&lt;br /&gt;
 1    Mirror 33.9GB            Open    0:02:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
 /dev/aacd1           mirror1          0:03:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows the health of the arrays. You&#039;re looking for &#039;&#039;Normal&#039;&#039; under the State column, and the absence of a ! in the Offset:Size column. Sometimes you&#039;ll see this:&lt;br /&gt;
 64.0KB!33.9GB &lt;br /&gt;
&lt;br /&gt;
That indicates a problem.&lt;br /&gt;
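If you want a quick mechanical check for that indicator, a grep for the ! separator works. This is only a sketch against a captured copy of the container list; the file name is an assumption and the sample lines below are illustrative, not from a live controller:

```shell
# Sketch: flag the '!' separator in captured "container list" output.
# /tmp/container_list.txt is a hypothetical capture file; sample data only.
cat > /tmp/container_list.txt <<'EOF'
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal
 1    Mirror 33.9GB            Open    0:02:0 64.0KB!33.9GB Normal
EOF
if grep -q '!' /tmp/container_list.txt; then
    echo "WARNING: '!' found in Offset:Size - possible container problem"
else
    echo "containers look clean"
fi
```

Here the second sample line trips the warning; on a healthy box the grep finds nothing.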
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; disk show smart&lt;br /&gt;
Executing: disk show smart&lt;br /&gt;
&lt;br /&gt;
        Smart    Method of         Enable&lt;br /&gt;
        Capable  Informational     Exception  Performance  Error&lt;br /&gt;
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count&lt;br /&gt;
------  -------  ----------------  ---------  -----------  ------&lt;br /&gt;
0:00:0     Y            6             Y           N             0&lt;br /&gt;
0:01:0     Y            6             Y           N             0&lt;br /&gt;
0:02:0     Y            6             Y           N             0&lt;br /&gt;
0:03:0     Y            6             Y           N             0&lt;br /&gt;
0:06:0     N&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the SMART report. Look for nonzero values in the Error Count column; any drive reporting errors deserves attention.&lt;br /&gt;
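A hedged sketch of scanning a captured copy of that table for nonzero error counts (the field positions assume the layout shown above, and the sample rows are illustrative, not live output):

```shell
# Sketch: scan a captured "disk show smart" table for nonzero error counts.
# Field positions assume the table layout above; sample data only.
cat > /tmp/smart.txt <<'EOF'
0:00:0     Y            6             Y           N             0
0:01:0     Y            6             Y           N             3
EOF
# $2 == "Y" limits the check to SMART-capable drives; $NF is Error Count.
awk '$2 == "Y" && $NF + 0 > 0 { print $1 " has " $NF " SMART errors" }' /tmp/smart.txt
```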
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; task list&lt;br /&gt;
Executing: task list&lt;br /&gt;
&lt;br /&gt;
Controller Tasks&lt;br /&gt;
&lt;br /&gt;
TaskId Function  Done%  Container State Specific1 Specific2&lt;br /&gt;
------ -------- ------- --------- ----- --------- ---------&lt;br /&gt;
&lt;br /&gt;
No tasks currently running on controller&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for the absence of running tasks. A rebuild or verify running that you didn&#039;t initiate is a bad sign.&lt;br /&gt;
&lt;br /&gt;
In the history output, you&#039;re looking for any anomalies or events since the last verify was run. If a drive shows lots of problems, you may want to take backups before allowing the verify to run, since it could replicate errors onto the good drive.&lt;br /&gt;
&lt;br /&gt;
After the history output, you&#039;ll be prompted to press enter to run the verify. If you&#039;re happy with everything you&#039;re seeing (mirror is healthy, history looks good), it&#039;s safe to proceed; otherwise ^C to exit. After you hit enter, the script starts the verify and begins tailing the messages log file so you can easily see when the verify completes. Here&#039;s what that looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Nov  1 14:38:08 mail /kernel: aac0: **Monitor** Container 1 completed SCRUB task:&lt;br /&gt;
Nov  1 14:46:45 mail /kernel: aac0: **Monitor** Container 0 completed SCRUB task:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, putting it all together, after hitting enter to start the verify, you&#039;ll see:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 0&lt;br /&gt;
Executing: container scrub 0&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 1&lt;br /&gt;
Executing: container scrub 1&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
when done run:                                                                       &lt;br /&gt;
&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nov  1 10:32:46 mail /kernel: aac0: **Monitor** Container 0 started SCRUB task&lt;br /&gt;
Nov  1 10:32:47 mail /kernel: aac0: **Monitor** Container 1 started SCRUB task&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the scrubs (verifies) are complete (if the server has multiple logical drives, they run in parallel), exit the tail of the log file (^C) and run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows the diagnostic history; you&#039;re looking for the results of the most recent scrub:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[100]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[101]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[102]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[103]: Container 1 completed SCRUB task:&lt;br /&gt;
[104]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[105]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[106]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[107]: Container 0 completed SCRUB task:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
^C to exit the RAID CLI.&lt;br /&gt;
&lt;br /&gt;
If you see:&lt;br /&gt;
&lt;br /&gt;
 [104]: Mirror Scrub Container:0   ErrorsFound:5&lt;br /&gt;
&lt;br /&gt;
You&#039;ll want to rerun the verify on that container until it shows 0, or perhaps replace the offending drive; the output should show which drive had the problem.&lt;br /&gt;
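A small sketch of how the ErrorsFound check could be done mechanically against a saved copy of the history (the file name is an assumption and the sample lines are illustrative, not a live dump):

```shell
# Sketch: flag nonzero ErrorsFound entries in a captured diagnostic
# history dump. /tmp/history.txt is a hypothetical capture; sample data.
cat > /tmp/history.txt <<'EOF'
[103]: Container 1 completed SCRUB task:
[104]: Mirror Scrub Container:0   ErrorsFound:5
[107]: Container 0 completed SCRUB task:
EOF
# Print any scrub result that is not ErrorsFound:0; fall through to a
# clean message if every scrub came back with zero errors.
grep 'ErrorsFound' /tmp/history.txt | grep -v 'ErrorsFound:0' \
    || echo "all scrubs clean"
```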
&lt;br /&gt;
Depending on the size and how busy the drive is, the verify can take anywhere from an hour to the better part of a day.&lt;br /&gt;
&lt;br /&gt;
You will notice that the diagnostic history is not shown on our modern Adaptec cards (i.e. any Adaptec card not in a Dell 2450). The reason is that the history is never cleared, so there&#039;s too much data to show and displaying it crashes the CLI; don&#039;t bother trying. That does make it hard to see whether problems are occurring, so you just need to watch the scrub and confirm it reaches 100%. You will also notice that on some servers there&#039;s no tail of messages; again, that&#039;s because no data about the completion of the scrub is logged there. In that case, go into the CLI and run &amp;lt;tt&amp;gt;task list&amp;lt;/tt&amp;gt; repeatedly to monitor scrub progress.&lt;br /&gt;
&lt;br /&gt;
See [[RAIC_CLI#Adaptec|Adaptec RAID CLI Reference]] for more details on how to use the CLI.&lt;br /&gt;
&lt;br /&gt;
=== DELL (LSI-based) SAS controllers ===&lt;br /&gt;
&lt;br /&gt;
Here&#039;s what the output looks like when running verify.sh on an LSI-based card:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jail2 /mnt/data2# sh /root/verify.sh&lt;br /&gt;
&lt;br /&gt;
Adapter #0&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 0&lt;br /&gt;
Device Id: 0&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018396142&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 0(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DESN&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 1&lt;br /&gt;
Device Id: 1&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018395db2&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 1(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DERV&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 2&lt;br /&gt;
Device Id: 2&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50006eece89&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 2(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300555SS     T2113LM4BFBZ&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 3&lt;br /&gt;
Device Id: 3&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50006eee035&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 3(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300555SS     T2113LM4BGF7&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 4&lt;br /&gt;
Device Id: 4&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50004bd7ea5&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 4(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300656SS     HS093QP0G8SW&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 5&lt;br /&gt;
Device Id: 5&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e01f1c4112&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 5(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MBA3300RC       D306BJ15P9201W06&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 0 (Target Id: 0)&lt;br /&gt;
Name:&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:139392MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 1 (Target Id: 1)&lt;br /&gt;
Name:MIRROR1&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 2 (Target Id: 2)&lt;br /&gt;
Name:MIRROR2&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
Battery FRU     : N/A&lt;br /&gt;
Battery Warning                  : Enabled&lt;br /&gt;
Memory Correctable Errors   : 0&lt;br /&gt;
Memory Uncorrectable Errors : 0&lt;br /&gt;
BBU             : Present&lt;br /&gt;
BBU                             : Yes&lt;br /&gt;
Cache When BBU Bad               : Disabled&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before pressing enter, here&#039;s what we&#039;re looking for:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 0&lt;br /&gt;
Device Id: 0&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018396142&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 0(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DESN&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the output shown for each physical drive in the system. We&#039;re looking to confirm that its Firmware state is &#039;&#039;Online&#039;&#039;, and that Media Error Count, Other Error Count, and Predictive Failure Count are all zero (or near zero).&lt;br /&gt;
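Those per-drive checks can be sketched as a scan over a captured copy of the drive listing. The file name is an assumption and the sample below is an abbreviated stand-in, not live controller output:

```shell
# Sketch: summarize per-drive health from a captured PDList-style dump.
# /tmp/pdlist.txt is a hypothetical capture file; abbreviated sample data.
cat > /tmp/pdlist.txt <<'EOF'
Slot Number: 0
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Firmware state: Online
Slot Number: 1
Media Error Count: 7
Other Error Count: 0
Predictive Failure Count: 0
Firmware state: Online
EOF
# Track the current slot, then flag nonzero error/failure counters and
# any drive whose firmware state is not Online.
awk -F': ' '
/^Slot Number/                   { slot = $2 }
/Error Count|Predictive Failure/ { if ($2 + 0 > 0) print "slot " slot ": " $0 }
/^Firmware state/                { if ($2 != "Online") print "slot " slot ": not Online" }
' /tmp/pdlist.txt
```

Here only the second sample drive is flagged (Media Error Count: 7); a clean system prints nothing.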
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 1 (Target Id: 1)&lt;br /&gt;
Name:MIRROR1&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the output for each logical drive. We&#039;re looking for a State of &#039;&#039;Optimal&#039;&#039;. Also confirm &amp;lt;tt&amp;gt;Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&amp;lt;/tt&amp;gt;.&lt;br /&gt;
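The logical-drive check can likewise be sketched against a captured dump (file name assumed, sample data illustrative, not live output):

```shell
# Sketch: confirm every logical drive in a captured virtual-drive dump
# reports State: Optimal. /tmp/vdlist.txt is a hypothetical capture.
cat > /tmp/vdlist.txt <<'EOF'
Virtual Disk: 0 (Target Id: 0)
State: Optimal
Virtual Disk: 1 (Target Id: 1)
State: Degraded
EOF
# Remember the current virtual disk header, then print it alongside any
# state line that is not Optimal.
awk '/^Virtual Disk/ { vd = $0 }
     /^State:/ && $2 != "Optimal" { print vd " -> " $0 }' /tmp/vdlist.txt
```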
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
Battery FRU     : N/A&lt;br /&gt;
Battery Warning                  : Enabled&lt;br /&gt;
Memory Correctable Errors   : 0&lt;br /&gt;
Memory Uncorrectable Errors : 0&lt;br /&gt;
BBU             : Present&lt;br /&gt;
BBU                             : Yes&lt;br /&gt;
Cache When BBU Bad               : Disabled&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the battery is present and error-free.&lt;br /&gt;
&lt;br /&gt;
If all that checks out, you&#039;re ready to proceed with the verify. After pressing enter, the verify starts; here&#039;s what you&#039;ll see:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Start Check Consistency on Virtual Drive 0 (target id: 0) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
Start Check Consistency on Virtual Drive 1 (target id: 1) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
Start Check Consistency on Virtual Drive 2 (target id: 2) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
  Check Consistency&lt;br /&gt;
&lt;br /&gt;
 Progress of Virtual Drives...&lt;br /&gt;
&lt;br /&gt;
  Virtual Drive #              Percent Complete                       Time Elps&lt;br /&gt;
          0         °°°°°°°°°°°°°°°°°°°°°°°00 %°°°°°°°°°°°°°°°°°°°°°°° 00:00:03&lt;br /&gt;
          1         °°°°°°°°°°°°°°°°°°°°°°°00 %°°°°°°°°°°°°°°°°°°°°°°° 00:00:02&lt;br /&gt;
          2         °°°°°°°°°°°°°°°°°°°°°°°00 %°°°°°°°°°°°°°°°°°°°°°°° 00:00:01&lt;br /&gt;
&lt;br /&gt;
    Press &amp;lt;ESC&amp;gt; key to quit...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The progress for each drive is displayed until all drives have completed the verify. We just want to make sure each drive runs to completion. No follow-up is needed, though there is probably a log or history where more info can be found.&lt;br /&gt;
&lt;br /&gt;
You will notice that jail7 does not run a verify; that&#039;s on purpose. The last time we tried, it crashed the system, so on that machine the verify must be run from the BIOS (taking the system offline for a couple of hours).&lt;br /&gt;
&lt;br /&gt;
See [[RAIC_CLI#LSI|LSI RAID CLI Reference]] for more details on how to use the CLI.&lt;br /&gt;
&lt;br /&gt;
=== LSI-based controllers (megaraid) ===&lt;br /&gt;
&lt;br /&gt;
There is a CLI for these cards, but it&#039;s easier to use the curses GUI app, megamgr.&lt;br /&gt;
&lt;br /&gt;
Currently only on these servers: virt15, virt16, and firewall2&lt;br /&gt;
&lt;br /&gt;
To run:&lt;br /&gt;
&lt;br /&gt;
 # cd /usr/local/sbin/; megamgr&lt;br /&gt;
&lt;br /&gt;
Main menu:&lt;br /&gt;
&amp;lt;pre&amp;gt;²ÚÄÄManagement MenuÄÄ¿²&lt;br /&gt;
²³ Configure         ³²&lt;br /&gt;
²³ Initialize        ³²&lt;br /&gt;
²³ Objects           ³²&lt;br /&gt;
²³ Rebuild           ³²&lt;br /&gt;
²³ Check Consistency ³²&lt;br /&gt;
²³ Advanced Menu     ³²&lt;br /&gt;
²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before you check consistency, make sure the arrays are healthy.&lt;br /&gt;
&lt;br /&gt;
Objects -&amp;gt; Physical Drive&lt;br /&gt;
&lt;br /&gt;
Then look to make sure they&#039;re all &amp;lt;tt&amp;gt;ONLIN&amp;lt;/tt&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²ÚÄÄÄÄÄÄÄÄÄÄÄÄObjects - PHYSICAL DRIVE SELECTION MENUÄÄÄÄÄÄÄÄÄÄÄÄÄ¿²²²²²²&lt;br /&gt;
²²²²²²²²³                                                                ³²²²²²²&lt;br /&gt;
²²²²²²²²³                         Channel-1                              ³²²²²²²&lt;br /&gt;
²ÚÄÄMana³                     ID ÉÍÍÍÍÍÍÍÍÍÍÍÍÍÍ»x                       ³²²²²²²&lt;br /&gt;
²³ Confi³                       0º* ONLIN A01-01º                        ³²²²²²²&lt;br /&gt;
²³ Initi³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²³ Objec³                       1º* ONLIN A01-02º                        ³²²²²²²&lt;br /&gt;
²³ Rebui³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²³ Check³                       2º* ONLIN A02-01º                        ³²²²²²²&lt;br /&gt;
²³ Advan³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²ÀÄÄÄÄÄÄ³                       3º* ONLIN A02-02º                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                       4º* ONLIN A03-01º                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                       5º* ONLIN A03-02ºþ                       ³²²²²²²&lt;br /&gt;
²²²²²²²²³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                       6º*             º                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                        ÈÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¼x                       ³²²²²²²&lt;br /&gt;
²²²²²²²²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²&lt;br /&gt;
²²²²²²²²ÚÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄ¿²²²²²²²²&lt;br /&gt;
²²²²²²²²³Ch-1 ID-5  DISK      140013MB  SEAGATE  ST3146707LC      0003 ³²²²²²²²²&lt;br /&gt;
²²²²²²²²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once that&#039;s done, hit escape once, then the back arrow, to move back to the Objects menu.&lt;br /&gt;
Then select Objects -&amp;gt; Logical Drive -&amp;gt; Logical Drive 1 -&amp;gt; Check Consistency -&amp;gt; YES.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²ÚÄLogical Drives(02)Ä¿²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²³ Logical Drive 1    ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²ÚÄÄÄÄObjectsÄÄÄ³ Logical Drive 2    ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²ÚÄÄManagemen³ Adapter      ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Configure ³ Logical Drive  ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Initialize³ Physical Drive ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Objects   ³ Channel        ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Rebuild   ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Check Consistency ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Advanced Menu     ³²²²²²²²ÚÄÄÄÄLogical Drive 1ÄÄÄÄÄ¿²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²³ Initialize    ÚÄCheck Consistency-1  ?Ä¿²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²³ Check Consiste³   YES                  ³²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²³ View/Update Pa³   NO                   ³²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²ÚÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄ¿²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²³Select YES Or NO³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then watch the progress. When done, escape back to Logical Drive, then repeat for Logical Drive 2. If you ^C or accidentally escape out, you can come back in by running the same commands and watch the progress again (the check won&#039;t restart).&lt;br /&gt;
&lt;br /&gt;
You can exit megamgr by escaping out or pressing ^C.&lt;br /&gt;
&lt;br /&gt;
=== 3ware ===&lt;br /&gt;
&lt;br /&gt;
We are using 3ware controllers on backup1 &amp;amp; backup2. Running the verify script gives different output depending on the controller model:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
backup2 /d2# sh /root/verify.sh&lt;br /&gt;
Controller: c0&lt;br /&gt;
-------------&lt;br /&gt;
Driver:   1.50.01.002&lt;br /&gt;
Model:    7500-8&lt;br /&gt;
FW:       FE7X 1.05.00.068&lt;br /&gt;
BIOS:     BE7X 1.08.00.048&lt;br /&gt;
Monitor:  ME7X 1.01.00.040&lt;br /&gt;
Serial #: F11605A3180172&lt;br /&gt;
PCB:      Rev3&lt;br /&gt;
PCHIP:    1.30-33&lt;br /&gt;
ACHIP:    3.20&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# of units: 3&lt;br /&gt;
        Unit 0: JBOD 186.31 GB ( 390721968 blocks): OK&lt;br /&gt;
        Unit 1: RAID 5 465.77 GB ( 976790016 blocks): DEGRADED&lt;br /&gt;
        Unit 5: RAID 5 698.65 GB ( 1465185024 blocks): DEGRADED&lt;br /&gt;
&lt;br /&gt;
# of ports: 8&lt;br /&gt;
        Port 0: WDC WD2000JB-00KFA0 WD-WCAMT1451690 186.31 GB (390721968 blocks): OK(unit 0)&lt;br /&gt;
        Port 1: WDC WD2500JB-00GVC0 WD-WCAL78219488 232.88 GB (488397168 blocks): OK(unit 1)&lt;br /&gt;
        Port 2: WDC WD2000  0.00 MB (0 blocks): OK(NO UNIT)&lt;br /&gt;
        Port 3: WDC WD2500JB-00GVC0 WD-WMAL73882417 232.88 GB (488397168 blocks): OK(unit 1)&lt;br /&gt;
        Port 4: WDC WD2000  0.00 MB (0 blocks): OK(NO UNIT)&lt;br /&gt;
        Port 5: WDC WD2500JB-00GVA0 WD-WMAL71338097 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
        Port 6: WDC WD2500JB-32EVA0 WD-WMAEH1301595 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
        Port 7: WDC WD2500JB-00GVC0 WD-WCAL78165566 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
Controller: c1&lt;br /&gt;
-------------&lt;br /&gt;
Driver:   1.50.01.002&lt;br /&gt;
Model:    7500-8&lt;br /&gt;
FW:       FE7X 1.05.00.068&lt;br /&gt;
BIOS:     BE7X 1.08.00.048&lt;br /&gt;
Monitor:  ME7X 1.01.00.040&lt;br /&gt;
Serial #: F11605A3180167&lt;br /&gt;
PCB:      Rev3&lt;br /&gt;
PCHIP:    1.30-33&lt;br /&gt;
ACHIP:    3.20&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# of units: 2&lt;br /&gt;
        Unit 0: RAID 5 698.65 GB ( 1465185024 blocks): OK&lt;br /&gt;
        Unit 4: RAID 5 698.65 GB ( 1465185024 blocks): OK&lt;br /&gt;
&lt;br /&gt;
# of ports: 8&lt;br /&gt;
        Port 0: WDC WD2500JB-00GVA0 WD-WMAL71301258 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 1: WDC WD2500JB-00GVA0 WD-WMAL71322705 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 2: WDC WD2500JB-00GVA0 WD-WMAL71945050 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 3: WDC WD2500JB-00GVA0 WD-WMAL71316201 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 4: WDC WD2500JB-00GVC0 WD-WCAL78323749 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
        Port 5: WDC WD3200AAJB-00J3A0 WD-WCAV2V689068 298.09 GB (625142448 blocks): OK(unit 4)&lt;br /&gt;
        Port 6: WDC WD2500JB-00GVC0 WD-WCAL78234420 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
        Port 7: WDC WD2500JB-00GVC0 WD-WCAL78592213 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
backup2 /d2#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On backup2, just check that every unit and port reports OK; the script only shows status and does not start a verify.&lt;br /&gt;
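As a minimal sketch of that check, a hypothetical helper (not part of verify.sh) could flag any unit whose trailing status field is not OK, assuming the "Unit N: ... : STATUS" line format shown above:

```shell
# Hypothetical helper, not part of verify.sh: print only the unit lines
# from captured verify.sh output whose trailing status field is not OK.
check_units() {
    grep -E '^[[:space:]]*Unit [0-9]+:' "$1" | grep -v ': OK$'
}

# Example against a captured snippet (paths/filenames are illustrative):
cat > /tmp/units.txt <<'EOF'
        Unit 0: JBOD 186.31 GB ( 390721968 blocks): OK
        Unit 1: RAID 5 465.77 GB ( 976790016 blocks): DEGRADED
EOF
check_units /tmp/units.txt
```

Empty output means all units are OK; any line printed (here the DEGRADED unit 1) needs attention.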
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@backup3 ~]# sh /root/verify.sh&lt;br /&gt;
/c2 Driver Version = 1.26.02.002&lt;br /&gt;
/c2 Model = 8006-2LP&lt;br /&gt;
/c2 Available Memory = 512KB&lt;br /&gt;
/c2 Firmware Version = FE8S 1.05.00.068&lt;br /&gt;
/c2 Bios Version = BE7X 1.08.00.048&lt;br /&gt;
/c2 Boot Loader Version = ME7X 1.01.00.040&lt;br /&gt;
/c2 Serial Number = L018501C6481395&lt;br /&gt;
/c2 PCB Version = Rev5&lt;br /&gt;
/c2 PCHIP Version = 1.30-66&lt;br /&gt;
/c2 ACHIP Version = 3.20&lt;br /&gt;
/c2 Total Optimal Units = 1&lt;br /&gt;
/c2 Not Optimal Units = 0&lt;br /&gt;
&lt;br /&gt;
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u1    RAID-1    OK             -       -       -       931.512   ON     -&lt;br /&gt;
&lt;br /&gt;
Port   Status           Unit   Size        Blocks        Serial&lt;br /&gt;
---------------------------------------------------------------&lt;br /&gt;
p0     OK               u1     931.51 GB   1953525168    WD-WMAW31148820&lt;br /&gt;
p1     OK               u1     931.51 GB   1953525168    WD-WCATR0277515&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ctl  Date                        Severity  Alarm Message&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Sending start verify message to /c2/u1 ... Done.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when done run:&lt;br /&gt;
tw_cli /c2 show alarms&lt;br /&gt;
&lt;br /&gt;
[root@backup3 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On backup3 the script automatically starts the verify; just run &amp;lt;tt&amp;gt;tw_cli /c2 show alarms&amp;lt;/tt&amp;gt; as instructed to see the results of the verify.&lt;br /&gt;
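When reviewing the alarm output, a hypothetical one-liner (not part of verify.sh) can tally entries by severity, assuming the "cN  [date]  SEVERITY  message" line format that tw_cli prints:

```shell
# Hypothetical summary, not part of verify.sh: count tw_cli alarm lines
# by severity (field 7 after whitespace-splitting, e.g. WARNING/INFO/ERROR).
cat > /tmp/alarms.txt <<'EOF'
c0   [Sat May 12 11:27:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x6AE571C
c0   [Sat May 12 21:40:56 2012]  INFO      Verify completed: unit=0
c0   [Wed Aug  8 03:21:55 2012]  ERROR     Drive timeout detected: port=1
EOF
awk '{ count[$7]++ } END { for (s in count) print s, count[s] }' /tmp/alarms.txt
```

A large WARNING count (sector repairs) is routine on these arrays; any ERROR line, such as a drive timeout, is worth investigating.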
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@backup1 /data/deprecated]# sh /root/verify.sh&lt;br /&gt;
/c0 Driver Version = 2.26.02.010&lt;br /&gt;
/c0 Model = 9650SE-8LPML&lt;br /&gt;
/c0 Available Memory = 224MB&lt;br /&gt;
/c0 Firmware Version = FE9X 4.06.00.004&lt;br /&gt;
/c0 Bios Version = BE9X 4.05.00.015&lt;br /&gt;
/c0 Boot Loader Version = BL9X 3.08.00.001&lt;br /&gt;
/c0 Serial Number = L326025A8270177&lt;br /&gt;
/c0 PCB Version = Rev 032&lt;br /&gt;
/c0 PCHIP Version = 2.00&lt;br /&gt;
/c0 ACHIP Version = 1.90&lt;br /&gt;
/c0 Number of Ports = 8&lt;br /&gt;
/c0 Number of Drives = 6&lt;br /&gt;
/c0 Number of Units = 1&lt;br /&gt;
/c0 Total Optimal Units = 1&lt;br /&gt;
/c0 Not Optimal Units = 0&lt;br /&gt;
/c0 JBOD Export Policy = off&lt;br /&gt;
/c0 Disk Spinup Policy = 1&lt;br /&gt;
/c0 Spinup Stagger Time Policy (sec) = 1&lt;br /&gt;
/c0 Auto-Carving Policy = off&lt;br /&gt;
/c0 Auto-Carving Size = 2048 GB&lt;br /&gt;
/c0 Auto-Rebuild Policy = on&lt;br /&gt;
/c0 Controller Bus Type = PCIe&lt;br /&gt;
/c0 Controller Bus Width = 1 lane&lt;br /&gt;
/c0 Controller Bus Speed = 2.5 Gbps/lane&lt;br /&gt;
&lt;br /&gt;
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u0    RAID-5    OK             -       -       64K     4656.56   ON     ON&lt;br /&gt;
&lt;br /&gt;
Port   Status           Unit   Size        Blocks        Serial&lt;br /&gt;
---------------------------------------------------------------&lt;br /&gt;
p0     OK               u0     931.51 GB   1953525168    9QJ1Y017&lt;br /&gt;
p1     DEVICE-ERROR     u0     931.51 GB   1953525168    9QJ1ZN07&lt;br /&gt;
p2     OK               u0     931.51 GB   1953525168    9QJ2XK1R&lt;br /&gt;
p3     OK               u0     931.51 GB   1953525168    9QJ2010B&lt;br /&gt;
p4     OK               u0     1.36 TB     2930277168    6XW0L36T&lt;br /&gt;
p5     OK               u0     931.51 GB   1953525168    WD-WMATV2444836&lt;br /&gt;
p6     NOT-PRESENT      -      -           -             -&lt;br /&gt;
p7     NOT-PRESENT      -      -           -             -&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ctl  Date                        Severity  Alarm Message&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
c0   [Sat May 12 11:27:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x6AE571C&lt;br /&gt;
c0   [Sat May 12 19:16:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x40E62A23&lt;br /&gt;
c0   [Sat May 12 21:40:56 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon May 14 00:53:53 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFA7&lt;br /&gt;
c0   [Mon May 14 00:58:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAA&lt;br /&gt;
c0   [Mon May 14 04:35:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x8FEF2CF&lt;br /&gt;
c0   [Mon May 14 04:38:22 2012]  WARNING   Sector repair completed: port=0, LBA=0x8FEF2D1&lt;br /&gt;
c0   [Tue May 15 22:53:46 2012]  WARNING   Sector repair completed: port=0, LBA=0x13C2622&lt;br /&gt;
c0   [Wed May 16 00:39:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A67F&lt;br /&gt;
c0   [Wed May 16 00:39:37 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A685&lt;br /&gt;
c0   [Wed May 16 00:47:18 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A687&lt;br /&gt;
c0   [Sat May 19 00:01:44 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat May 19 04:46:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A68E&lt;br /&gt;
c0   [Sat May 19 13:37:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAC&lt;br /&gt;
c0   [Sat May 19 13:37:28 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAE&lt;br /&gt;
c0   [Sat May 19 13:37:47 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFB1&lt;br /&gt;
c0   [Sat May 19 13:38:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFB3&lt;br /&gt;
c0   [Sat May 19 21:47:45 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed May 23 12:21:41 2012]  INFO      Cache synchronization completed: unit=0&lt;br /&gt;
c0   [Fri May 25 00:08:19 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA76C&lt;br /&gt;
c0   [Fri May 25 00:08:34 2012]  WARNING   Sector repair completed: port=0, LBA=0x12E4901&lt;br /&gt;
c0   [Fri May 25 00:09:33 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA773&lt;br /&gt;
c0   [Fri May 25 00:39:12 2012]  WARNING   Sector repair completed: port=0, LBA=0x42C597B&lt;br /&gt;
c0   [Sat May 26 00:01:45 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat May 26 00:42:05 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1AC&lt;br /&gt;
c0   [Sat May 26 00:51:43 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1AE&lt;br /&gt;
c0   [Sat May 26 01:54:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x2F0D302&lt;br /&gt;
c0   [Sat May 26 02:06:38 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA777&lt;br /&gt;
c0   [Sat May 26 02:07:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x12E48FE&lt;br /&gt;
c0   [Sat May 26 04:20:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x2F0D306&lt;br /&gt;
c0   [Sat May 26 04:32:58 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1B1&lt;br /&gt;
c0   [Sat May 26 04:33:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1B3&lt;br /&gt;
c0   [Sat May 26 04:33:44 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1BA&lt;br /&gt;
c0   [Sat May 26 05:24:07 2012]  WARNING   Sector repair completed: port=1, LBA=0x3F83862&lt;br /&gt;
c0   [Sat May 26 05:25:09 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 06:08:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x4CDC6A2&lt;br /&gt;
c0   [Sat May 26 09:49:35 2012]  WARNING   Sector repair completed: port=1, LBA=0x6CACD4A&lt;br /&gt;
c0   [Sat May 26 18:10:44 2012]  WARNING   Sector repair completed: port=1, LBA=0x18F425EA&lt;br /&gt;
c0   [Sat May 26 19:45:40 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:22:52 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:15 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:22 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:35 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:41 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:49 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:57 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:24:02 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:54:41 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 22:00:30 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Jun  2 00:01:43 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun  2 00:30:17 2012]  WARNING   Sector repair completed: port=0, LBA=0x2B911E4&lt;br /&gt;
c0   [Sat Jun  2 00:50:57 2012]  WARNING   Sector repair completed: port=0, LBA=0x5A807CA6&lt;br /&gt;
c0   [Sat Jun  2 04:13:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x2D18291&lt;br /&gt;
c0   [Sat Jun  2 04:13:35 2012]  WARNING   Sector repair completed: port=0, LBA=0x2D1829F&lt;br /&gt;
c0   [Sat Jun  2 21:48:02 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Jun  4 04:40:34 2012]  WARNING   Sector repair completed: port=1, LBA=0x4AF8098F&lt;br /&gt;
c0   [Tue Jun  5 00:28:19 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5CD&lt;br /&gt;
c0   [Tue Jun  5 00:33:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5CF&lt;br /&gt;
c0   [Thu Jun  7 00:34:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B5F&lt;br /&gt;
c0   [Thu Jun  7 00:38:50 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B61&lt;br /&gt;
c0   [Fri Jun  8 00:07:13 2012]  WARNING   Sector repair completed: port=0, LBA=0xC131F6B&lt;br /&gt;
c0   [Sat Jun  9 00:01:41 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun  9 00:29:11 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C7614D&lt;br /&gt;
c0   [Sat Jun  9 00:38:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C76152&lt;br /&gt;
c0   [Sat Jun  9 04:02:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5D1&lt;br /&gt;
c0   [Sat Jun  9 04:02:52 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5D3&lt;br /&gt;
c0   [Sat Jun  9 04:07:32 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E12&lt;br /&gt;
c0   [Sat Jun  9 04:07:57 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E15&lt;br /&gt;
c0   [Sat Jun  9 04:08:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E17&lt;br /&gt;
c0   [Sat Jun  9 04:08:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E19&lt;br /&gt;
c0   [Sat Jun  9 04:15:04 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B64&lt;br /&gt;
c0   [Sat Jun  9 04:15:26 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B66&lt;br /&gt;
c0   [Sat Jun  9 04:15:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B68&lt;br /&gt;
c0   [Sat Jun  9 04:15:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B6C&lt;br /&gt;
c0   [Sat Jun  9 04:16:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B6E&lt;br /&gt;
c0   [Sat Jun  9 21:48:52 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Jun 14 00:40:10 2012]  WARNING   Sector repair completed: port=0, LBA=0x334F14B&lt;br /&gt;
c0   [Sat Jun 16 00:01:38 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 16 21:16:19 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Jun 19 02:03:43 2012]  WARNING   Sector repair completed: port=1, LBA=0xFE41EAD&lt;br /&gt;
c0   [Wed Jun 20 02:30:02 2012]  WARNING   Sector repair completed: port=1, LBA=0xD99145C&lt;br /&gt;
c0   [Sat Jun 23 00:01:36 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 23 04:27:04 2012]  WARNING   Sector repair completed: port=1, LBA=0x2FAD311&lt;br /&gt;
c0   [Sat Jun 23 06:52:38 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC8D&lt;br /&gt;
c0   [Sat Jun 23 06:53:03 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC91&lt;br /&gt;
c0   [Sat Jun 23 06:53:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC94&lt;br /&gt;
c0   [Sat Jun 23 17:00:22 2012]  WARNING   Sector repair completed: port=1, LBA=0xF9AC7C9&lt;br /&gt;
c0   [Sat Jun 23 21:15:19 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Jun 30 00:01:34 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 30 05:24:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FAA9E7&lt;br /&gt;
c0   [Sat Jun 30 14:49:39 2012]  WARNING   Sector repair completed: port=1, LBA=0x869931C&lt;br /&gt;
c0   [Sat Jun 30 21:31:05 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Jul  3 03:40:25 2012]  WARNING   Sector repair completed: port=1, LBA=0xD36C7F7&lt;br /&gt;
c0   [Fri Jul  6 02:50:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x3562470&lt;br /&gt;
c0   [Fri Jul  6 22:18:26 2012]  WARNING   Sector repair completed: port=1, LBA=0x3563173&lt;br /&gt;
c0   [Sat Jul  7 00:01:31 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul  7 00:50:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x76EE88&lt;br /&gt;
c0   [Sat Jul  7 00:50:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x76EE8F&lt;br /&gt;
c0   [Sat Jul  7 21:39:36 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sun Jul  8 02:51:05 2012]  WARNING   Sector repair completed: port=0, LBA=0x67759D&lt;br /&gt;
c0   [Sun Jul  8 02:53:55 2012]  WARNING   Sector repair completed: port=0, LBA=0x67759B&lt;br /&gt;
c0   [Tue Jul 10 16:17:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x15C8C695&lt;br /&gt;
c0   [Wed Jul 11 22:51:22 2012]  WARNING   Sector repair completed: port=1, LBA=0x355BBD0&lt;br /&gt;
c0   [Sat Jul 14 00:01:28 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 14 01:33:40 2012]  WARNING   Sector repair completed: port=1, LBA=0x1333BCF4&lt;br /&gt;
c0   [Sat Jul 14 03:36:23 2012]  WARNING   Sector repair completed: port=1, LBA=0x2174773&lt;br /&gt;
c0   [Sat Jul 14 11:26:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x7429AB7&lt;br /&gt;
c0   [Sat Jul 14 16:53:50 2012]  WARNING   Sector repair completed: port=1, LBA=0xA17EB3F&lt;br /&gt;
c0   [Sat Jul 14 21:19:25 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Jul 18 05:08:47 2012]  WARNING   Sector repair completed: port=1, LBA=0x17D62EDC&lt;br /&gt;
c0   [Wed Jul 18 05:14:15 2012]  WARNING   Sector repair completed: port=1, LBA=0x17D62EE1&lt;br /&gt;
c0   [Thu Jul 19 03:24:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x7733C3D&lt;br /&gt;
c0   [Thu Jul 19 03:25:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x773CEA5&lt;br /&gt;
c0   [Thu Jul 19 03:28:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x7733C42&lt;br /&gt;
c0   [Thu Jul 19 03:28:41 2012]  WARNING   Sector repair completed: port=0, LBA=0x773CEAF&lt;br /&gt;
c0   [Sat Jul 21 00:01:26 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 21 03:07:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC6936&lt;br /&gt;
c0   [Sat Jul 21 03:07:52 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC6938&lt;br /&gt;
c0   [Sat Jul 21 03:08:11 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC693A&lt;br /&gt;
c0   [Sat Jul 21 16:43:56 2012]  WARNING   Sector repair completed: port=0, LBA=0xD04C914&lt;br /&gt;
c0   [Sat Jul 21 16:45:31 2012]  WARNING   Sector repair completed: port=1, LBA=0xD456973&lt;br /&gt;
c0   [Sat Jul 21 21:14:29 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Jul 25 03:37:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x1F8E6C43&lt;br /&gt;
c0   [Sat Jul 28 00:01:24 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 28 01:45:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x11584AD&lt;br /&gt;
c0   [Sat Jul 28 18:54:25 2012]  WARNING   Sector repair completed: port=1, LBA=0x447C3E6C&lt;br /&gt;
c0   [Sat Jul 28 21:13:46 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug  1 03:20:11 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FEF&lt;br /&gt;
c0   [Fri Aug  3 00:50:03 2012]  WARNING   Sector repair completed: port=0, LBA=0xCED0ACA&lt;br /&gt;
c0   [Sat Aug  4 00:01:22 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug  4 00:52:51 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF3&lt;br /&gt;
c0   [Sat Aug  4 00:53:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF5&lt;br /&gt;
c0   [Sat Aug  4 00:53:33 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF7&lt;br /&gt;
c0   [Sat Aug  4 00:53:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF9&lt;br /&gt;
c0   [Sat Aug  4 00:54:00 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFB&lt;br /&gt;
c0   [Sat Aug  4 00:54:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFD&lt;br /&gt;
c0   [Sat Aug  4 00:54:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFF&lt;br /&gt;
c0   [Sat Aug  4 04:43:12 2012]  WARNING   Sector repair completed: port=1, LBA=0x16974289&lt;br /&gt;
c0   [Sat Aug  4 04:58:17 2012]  WARNING   Sector repair completed: port=1, LBA=0x1697428E&lt;br /&gt;
c0   [Sat Aug  4 20:54:53 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug  8 03:21:55 2012]  ERROR     Drive timeout detected: port=1&lt;br /&gt;
c0   [Wed Aug  8 15:31:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x1A366CD3&lt;br /&gt;
c0   [Sat Aug 11 00:01:21 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 11 20:40:51 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Aug 16 05:10:55 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C22593&lt;br /&gt;
c0   [Sat Aug 18 00:01:18 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 18 03:00:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C225A5&lt;br /&gt;
c0   [Sat Aug 18 03:43:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x23EE91E&lt;br /&gt;
c0   [Sat Aug 18 03:43:23 2012]  WARNING   Sector repair completed: port=1, LBA=0x23EE920&lt;br /&gt;
c0   [Sat Aug 18 17:00:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x137D066A&lt;br /&gt;
c0   [Sat Aug 18 17:00:29 2012]  WARNING   Sector repair completed: port=1, LBA=0x137D066D&lt;br /&gt;
c0   [Sat Aug 18 21:13:01 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug 22 01:36:08 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A0F&lt;br /&gt;
c0   [Wed Aug 22 01:37:42 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A13&lt;br /&gt;
c0   [Fri Aug 24 04:01:36 2012]  WARNING   Sector repair completed: port=1, LBA=0x55C1A5DF&lt;br /&gt;
c0   [Fri Aug 24 05:02:06 2012]  WARNING   Sector repair completed: port=1, LBA=0xCE3378A&lt;br /&gt;
c0   [Sat Aug 25 00:01:17 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 25 00:31:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x50F65D&lt;br /&gt;
c0   [Sat Aug 25 00:39:52 2012]  WARNING   Sector repair completed: port=0, LBA=0x678FF4&lt;br /&gt;
c0   [Sat Aug 25 03:43:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A15&lt;br /&gt;
c0   [Sat Aug 25 03:43:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A19&lt;br /&gt;
c0   [Sat Aug 25 03:43:58 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A1B&lt;br /&gt;
c0   [Sat Aug 25 03:44:30 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A21&lt;br /&gt;
c0   [Sat Aug 25 20:58:14 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug 29 04:57:15 2012]  WARNING   Sector repair completed: port=1, LBA=0xF3957EB&lt;br /&gt;
c0   [Sat Sep  1 00:01:15 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep  1 03:21:52 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC86&lt;br /&gt;
c0   [Sat Sep  1 03:22:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC88&lt;br /&gt;
c0   [Sat Sep  1 03:22:34 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC8A&lt;br /&gt;
c0   [Sat Sep  1 03:22:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC8C&lt;br /&gt;
c0   [Sat Sep  1 17:17:22 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD1&lt;br /&gt;
c0   [Sat Sep  1 17:17:45 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD3&lt;br /&gt;
c0   [Sat Sep  1 17:18:04 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD5&lt;br /&gt;
c0   [Sat Sep  1 21:36:56 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Sep  6 00:07:30 2012]  WARNING   Sector repair completed: port=0, LBA=0xDA3C64B&lt;br /&gt;
c0   [Thu Sep  6 00:32:56 2012]  WARNING   Sector repair completed: port=1, LBA=0x6BBA816&lt;br /&gt;
c0   [Sat Sep  8 00:01:13 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep  8 00:09:56 2012]  WARNING   Sector repair completed: port=0, LBA=0xDEBC958&lt;br /&gt;
c0   [Sat Sep  8 04:38:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x38D254F&lt;br /&gt;
c0   [Sat Sep  8 20:44:50 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Sep 10 01:26:34 2012]  WARNING   Sector repair completed: port=1, LBA=0xFFD8D5E&lt;br /&gt;
c0   [Wed Sep 12 00:33:48 2012]  WARNING   Sector repair completed: port=1, LBA=0xE8DB928&lt;br /&gt;
c0   [Wed Sep 12 00:36:33 2012]  WARNING   Sector repair completed: port=1, LBA=0x6D49411&lt;br /&gt;
c0   [Fri Sep 14 01:59:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x1467F1C&lt;br /&gt;
c0   [Fri Sep 14 02:08:27 2012]  WARNING   Sector repair completed: port=1, LBA=0x14C8ABD&lt;br /&gt;
c0   [Fri Sep 14 03:54:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x1580C915&lt;br /&gt;
c0   [Sat Sep 15 00:01:11 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 15 02:38:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178973&lt;br /&gt;
c0   [Sat Sep 15 02:59:02 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178975&lt;br /&gt;
c0   [Sat Sep 15 04:47:08 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA0356&lt;br /&gt;
c0   [Sat Sep 15 04:47:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA0359&lt;br /&gt;
c0   [Sat Sep 15 10:41:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x6DFD1EC&lt;br /&gt;
c0   [Sat Sep 15 13:25:23 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD100&lt;br /&gt;
c0   [Sat Sep 15 13:25:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD104&lt;br /&gt;
c0   [Sat Sep 15 13:25:54 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD106&lt;br /&gt;
c0   [Sat Sep 15 17:10:50 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178977&lt;br /&gt;
c0   [Sat Sep 15 20:59:57 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Sep 18 01:17:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x803B05B&lt;br /&gt;
c0   [Sat Sep 22 00:01:10 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 22 20:54:31 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Sep 25 01:56:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x26E3909&lt;br /&gt;
c0   [Sat Sep 29 00:01:08 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 29 02:04:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x146AC03&lt;br /&gt;
c0   [Sat Sep 29 10:58:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x6D4EB0E&lt;br /&gt;
c0   [Sat Sep 29 10:59:02 2012]  WARNING   Sector repair completed: port=0, LBA=0x6D4EB14&lt;br /&gt;
c0   [Sat Sep 29 11:22:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x6F79623&lt;br /&gt;
c0   [Sat Sep 29 13:50:48 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D65E&lt;br /&gt;
c0   [Sat Sep 29 13:51:11 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D661&lt;br /&gt;
c0   [Sat Sep 29 13:51:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D663&lt;br /&gt;
c0   [Sat Sep 29 20:57:34 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Oct  1 04:47:24 2012]  WARNING   Sector repair completed: port=0, LBA=0xC5BC6F2&lt;br /&gt;
c0   [Tue Oct  2 02:00:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x1547667&lt;br /&gt;
c0   [Tue Oct  2 02:01:56 2012]  WARNING   Sector repair completed: port=0, LBA=0x154766F&lt;br /&gt;
c0   [Tue Oct  2 05:02:31 2012]  WARNING   Sector repair completed: port=1, LBA=0xD67D054&lt;br /&gt;
c0   [Tue Oct  2 05:04:14 2012]  WARNING   Sector repair completed: port=1, LBA=0xD67D056&lt;br /&gt;
c0   [Wed Oct  3 01:22:12 2012]  WARNING   Sector repair completed: port=1, LBA=0x12AAF8CA&lt;br /&gt;
c0   [Thu Oct  4 04:29:22 2012]  WARNING   Sector repair completed: port=0, LBA=0x13E6F992&lt;br /&gt;
c0   [Thu Oct  4 05:10:51 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C252A4&lt;br /&gt;
c0   [Sat Oct  6 00:01:07 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct  6 19:41:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x5A5C3AE8&lt;br /&gt;
c0   [Sat Oct  6 21:01:05 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Oct  8 00:32:06 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C60D3E&lt;br /&gt;
c0   [Tue Oct  9 03:51:03 2012]  WARNING   Sector repair completed: port=1, LBA=0x89B5EC9&lt;br /&gt;
c0   [Thu Oct 11 04:21:17 2012]  WARNING   Sector repair completed: port=1, LBA=0x13F85833&lt;br /&gt;
c0   [Sat Oct 13 00:01:05 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 13 05:12:40 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA5134&lt;br /&gt;
c0   [Sat Oct 13 21:08:35 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Oct 16 03:53:50 2012]  WARNING   Sector repair completed: port=1, LBA=0x148AA1BD&lt;br /&gt;
c0   [Thu Oct 18 03:20:30 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C8DABCB&lt;br /&gt;
c0   [Thu Oct 18 04:52:50 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879057&lt;br /&gt;
c0   [Sat Oct 20 00:01:04 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 20 02:19:25 2012]  WARNING   Sector repair completed: port=1, LBA=0x174B012&lt;br /&gt;
c0   [Sat Oct 20 03:41:38 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D93B&lt;br /&gt;
c0   [Sat Oct 20 03:42:01 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D93D&lt;br /&gt;
c0   [Sat Oct 20 03:42:40 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D940&lt;br /&gt;
c0   [Sat Oct 20 03:42:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D942&lt;br /&gt;
c0   [Sat Oct 20 03:43:12 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D944&lt;br /&gt;
c0   [Sat Oct 20 03:43:26 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D948&lt;br /&gt;
c0   [Sat Oct 20 16:37:52 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879060&lt;br /&gt;
c0   [Sat Oct 20 16:38:15 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879062&lt;br /&gt;
c0   [Sat Oct 20 21:00:18 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Oct 20 23:49:01 2012]  WARNING   Sector repair completed: port=1, LBA=0x4473E908&lt;br /&gt;
c0   [Sun Oct 21 03:42:26 2012]  WARNING   Sector repair completed: port=1, LBA=0x175BADD5&lt;br /&gt;
c0   [Tue Oct 23 01:09:04 2012]  WARNING   Sector repair completed: port=1, LBA=0x6E524860&lt;br /&gt;
c0   [Fri Oct 26 03:21:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x802C61&lt;br /&gt;
c0   [Fri Oct 26 04:22:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x176353CD&lt;br /&gt;
c0   [Sat Oct 27 00:01:03 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 27 00:49:35 2012]  WARNING   Sector repair completed: port=0, LBA=0x802C65&lt;br /&gt;
c0   [Sat Oct 27 17:02:24 2012]  WARNING   Sector repair completed: port=1, LBA=0xC1FF26D&lt;br /&gt;
c0   [Sat Oct 27 17:09:06 2012]  WARNING   Sector repair completed: port=0, LBA=0xDF621AD&lt;br /&gt;
c0   [Sat Oct 27 21:30:57 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Oct 30 00:20:46 2012]  WARNING   Sector repair completed: port=0, LBA=0xE9FE2AB&lt;br /&gt;
c0   [Wed Oct 31 02:02:03 2012]  WARNING   Sector repair completed: port=0, LBA=0x1460C25&lt;br /&gt;
c0   [Wed Oct 31 02:04:05 2012]  WARNING   Sector repair completed: port=0, LBA=0x1460C28&lt;br /&gt;
c0   [Thu Nov  1 00:48:34 2012]  WARNING   Sector repair completed: port=1, LBA=0xA7C92BE&lt;br /&gt;
c0   [Thu Nov  1 05:04:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C252C2&lt;br /&gt;
&lt;br /&gt;
[root@backup1 /data/deprecated]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for failed drives and errors. From the log above we should probably replace the drives on ports 0 and 1, both of which are logging repeated sector repairs, yet the RAID array still reports healthy. You can also see the automatic verifies starting and completing.&lt;br /&gt;
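As a quick triage sketch, you can tally the sector-repair events per port to see which drive is degrading fastest. This assumes the tw_cli alarm output has been captured somewhere (here it is inlined in a variable with a few abridged sample lines for illustration):&lt;br /&gt;

```shell
# Tally "Sector repair completed" events per port from captured tw_cli output.
# Sample lines are inlined for illustration; in practice feed in the full alarm log.
tw_log='c0   [Tue Oct  2 05:04:14 2012]  WARNING   Sector repair completed: port=1, LBA=0xD67D056
c0   [Thu Oct  4 04:29:22 2012]  WARNING   Sector repair completed: port=0, LBA=0x13E6F992
c0   [Sat Oct  6 00:01:07 2012]  INFO      Verify started: unit=0
c0   [Mon Oct  8 00:32:06 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C60D3E'

printf '%s\n' "$tw_log" \
  | grep 'Sector repair completed' \
  | sed 's/.*port=\([0-9]*\).*/port \1/' \
  | sort | uniq -c
```

A port whose count climbs week over week is the one to swap first, even while the unit still shows OK.&lt;br /&gt;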
&lt;br /&gt;
Note: when rebuilding a degraded mirror, the CLI will show no rebuild progress:&lt;br /&gt;
&amp;lt;pre&amp;gt;Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u0    RAID-5    REBUILDING     0       -       64K     4656.56   OFF    ON&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== areca ===&lt;br /&gt;
&lt;br /&gt;
We are using an areca controller on backup3. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@newbackup3 ~]# sh /root/verify.sh&lt;br /&gt;
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State&lt;br /&gt;
===============================================================================&lt;br /&gt;
  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(19.7%)&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
 #  Name             Disks TotalCap  FreeCap DiskChannels       State&lt;br /&gt;
===============================================================================&lt;br /&gt;
 1  Raid Set # 00        6 6000.0GB    0.0GB 123456             Checking&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
Date-Time            Device           Event Type            Elapsed Time Errors&lt;br /&gt;
===============================================================================&lt;br /&gt;
2012-12-05 20:40:58  ARC-1160-VOL#00  Start Checking&lt;br /&gt;
2012-12-01 05:06:04  ARC-1160-VOL#00  Complete Init         027:30:45&lt;br /&gt;
2012-11-30 01:35:19  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-06 01:34:52  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-30 01:33:36  ARC-1160-VOL#00  Stop Initialization   000:31:48&lt;br /&gt;
2012-11-30 01:01:47  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-06 00:58:13  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-30 00:57:26  ARC-1160-VOL#00  Stop Initialization   000:57:07&lt;br /&gt;
2012-11-30 00:00:19  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 23:56:48  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:52:58  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:50:14  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:43:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 23:10:07  ARC-1160-VOL#00  Stop Initialization   000:00:56&lt;br /&gt;
2012-11-29 23:09:11  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 23:08:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 23:08:10  ARC-1160-VOL#00  Stop Initialization   000:20:41&lt;br /&gt;
2012-11-29 22:47:29  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 22:46:59  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:45:55  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:44:53  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:42:06  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:40:50  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 22:40:04  ARC-1160-VOL#00  Stop Initialization   000:24:25&lt;br /&gt;
2012-11-29 22:15:38  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 22:15:11  000:000001215B00 Restart Init LBA Point&lt;br /&gt;
2026-08-05 22:15:10  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 21:56:38  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 21:56:12  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 21:56:04  IDE Channel #03  Device Inserted&lt;br /&gt;
2012-11-29 21:55:13  IDE Channel #04  Device Inserted&lt;br /&gt;
2012-11-29 21:55:03  IDE Channel #02  Device Inserted&lt;br /&gt;
2026-08-05 21:53:09  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:51:46  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:49:56  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:48:29  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:46:29  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:44:49  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:43:01  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:36:25  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:31:18  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:30:08  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:08:40  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:06:11  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:05:14  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:03:58  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:00:56  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:57:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:56:15  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:55:05  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 17:24:36  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 17:22:43  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:50:42  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:47:33  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:43:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:18:52  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:17:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:13:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:10:26  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:09:23  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:08:09  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:07:12  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:05:51  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:04:27  H/W MONITOR      Raid Powered On&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for failed drives and errors. &lt;br /&gt;
&lt;br /&gt;
When it proceeds to verifying, you can confirm with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@newbackup3 ~]# cli64 vsf info&lt;br /&gt;
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State&lt;br /&gt;
===============================================================================&lt;br /&gt;
  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(22.5%)&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
[root@newbackup3 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Update OS list ==&lt;br /&gt;
&lt;br /&gt;
# check for any new VZ templates we want to offer: &amp;lt;tt&amp;gt;vzup2date -z&amp;lt;/tt&amp;gt;&lt;br /&gt;
# see if there are any OSes we want to include in our colo install list. Update 2 places: &amp;lt;tt&amp;gt;signup/html/colo_quote.html&amp;lt;/tt&amp;gt; &amp;amp; &amp;lt;tt&amp;gt;signup/html/step1.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
# update the mgmt database (ref_templates table, ref_systems table).&lt;br /&gt;
&lt;br /&gt;
= Infrequent tasks =&lt;br /&gt;
&lt;br /&gt;
== Free up space on gateway ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;newgateway /var/spool# cd clientmqueue/&lt;br /&gt;
newgateway /var/spool/clientmqueue# sh&lt;br /&gt;
# for f in `ls`; do rm $f; done&lt;br /&gt;
exit&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Free up space on mail ==&lt;br /&gt;
&lt;br /&gt;
You can clear out root mail:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /var/log# ll -h /var/mail/root&lt;br /&gt;
-rw-------  1 root  mail    543K Dec 19 13:05 /var/mail/root&lt;br /&gt;
mail /var/log# rm /var/mail/root&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Or you can archive mail logs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /var/log# ls -l htt*&lt;br /&gt;
-rw-r--r--  1 root  wheel  297436931 Dec 19 13:26 httpd-access.log&lt;br /&gt;
-rw-r--r--  1 root  wheel    9824324 Jul  4 11:34 httpd-access.log.old.0.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    6884137 Mar 17  2012 httpd-access.log.old.1.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   18557444 Dec  3  2009 httpd-access.log.old.10.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14740263 Jan  9  2007 httpd-access.log.old.11.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14209465 Nov 28  2007 httpd-access.log.old.12.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   16874396 Feb 19  2012 httpd-access.log.old.3.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14554859 Jul 22  2011 httpd-access.log.old.4.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10513227 Feb 18  2011 httpd-access.log.old.5.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    7201946 Oct 29  2010 httpd-access.log.old.6.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10062537 May  6  2010 httpd-access.log.old.7.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10157042 Aug 12  2010 httpd-access.log.old.8.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   11909534 Mar  4  2010 httpd-access.log.old.9.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   59030930 Dec 19 13:01 httpd-error.log&lt;br /&gt;
-rw-r--r--  1 root  wheel    3413134 Mar  4  2010 httpd-error.log.0.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel     795515 May  1  2007 httpd-error.log.1.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    1142153 Nov 30  2007 httpd-error.log.2.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    2325801 Feb 18  2011 httpd-error.log.gz&lt;br /&gt;
&lt;br /&gt;
mail /var/log# sh&lt;br /&gt;
# for f in 12 11 10 9 8 7 6 5 4 3 2 1 0; do g=`echo $f+1|bc`; mv httpd-access.log.old.$f.gz httpd-access.log.old.$g.gz; done&lt;br /&gt;
# mv httpd-access.log httpd-access.log.old.0&lt;br /&gt;
# touch httpd-access.log&lt;br /&gt;
# apachectl restart&lt;br /&gt;
# gzip httpd-access.log.old.0&lt;br /&gt;
&lt;br /&gt;
# for f in 2 1 0; do g=`echo $f+1|bc`; mv httpd-error.log.$f.gz httpd-error.log.$g.gz; done&lt;br /&gt;
# mv httpd-error.log httpd-error.log.0&lt;br /&gt;
# touch httpd-error.log&lt;br /&gt;
# apachectl restart&lt;br /&gt;
# gzip httpd-error.log.0&lt;br /&gt;
# exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
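The shift-and-compress pattern above can be sketched generically. This is a minimal stand-alone illustration run in a throwaway directory (on mail the real logs live in /var/log and the daemon restart is apachectl, as shown above); it uses shell arithmetic in place of bc, which does the same job:&lt;br /&gt;

```shell
# Rotate numbered .gz archives up by one, then archive the live log as .0.gz.
# Demonstrated in a temp dir so it is safe to run anywhere.
logdir=$(mktemp -d)
touch "$logdir/app.log"
: > "$logdir/app.log.0.gz"
: > "$logdir/app.log.1.gz"

# Shift the highest-numbered archive first so nothing gets overwritten.
for f in 1 0; do
  mv "$logdir/app.log.$f.gz" "$logdir/app.log.$((f+1)).gz"
done

mv "$logdir/app.log" "$logdir/app.log.0"
touch "$logdir/app.log"   # recreate the live log before restarting the daemon
gzip "$logdir/app.log.0"  # leaves app.log.0.gz alongside .1.gz and .2.gz
```

Shifting from the highest number down is what keeps the loop from clobbering an archive it has not moved yet.&lt;br /&gt;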
&lt;br /&gt;
== Free up space on bwdb2 ==&lt;br /&gt;
&lt;br /&gt;
You can either remove items from &amp;lt;tt&amp;gt;/usr/home/archive&amp;lt;/tt&amp;gt; or scp them to backup3:/data/bwdb2/archive.&lt;br /&gt;
&lt;br /&gt;
== Free up space on backup1 ==&lt;br /&gt;
&lt;br /&gt;
backup1 is our primary customer backup system. As usage grows over time, it needs to be purged of old files regularly. The easiest way to do this is by removing deprecated files, which mostly consist of cancelled customers and temporary dump/storage files (created during dump/restores). Our standard policy is to hang onto cancelled customers for 6 months, after which we remove their files (as far as customers know their data is purged immediately, but we hang onto it just in case; in particular, when we cancel a server due to non-payment this makes it easy to restore their system). To find files to remove:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 ~]# cd /data/deprecated/&lt;br /&gt;
[root@backup1 /data/deprecated]# ls&lt;br /&gt;
2101-migrated-20120317.tgz                old-683-cxld-20121021.tgz&lt;br /&gt;
69.55.230.2-wwwbackup                     old-744-cxld-20120708.tgz&lt;br /&gt;
991-DONTDELETE.tgz                        old-809-cxld-20120609.tgz&lt;br /&gt;
archive-col02050-mdfile-cxld-20120409.gz  old-854-cxld-20120621.tgz&lt;br /&gt;
col01371.tgz                              old-931-cxld-20060513.tgz&lt;br /&gt;
deleteme_ubuntu-10.10-x86_20111205        old-col00123-mdfile-noarchive-20120417.gz&lt;br /&gt;
jail10_old                                old-col00147-vnfile-cxld-20120828.gz&lt;br /&gt;
jail14_rsync_old                          old-col00419-dump-cxld-20120224.gz&lt;br /&gt;
jail15_old                                old-col01098-vnfile-cxld-20120827.gz&lt;br /&gt;
jail3_old                                 old-col01278-dump-cxld-20120822&lt;br /&gt;
jail4_old                                 old-col01517-dump-cxld-20120828&lt;br /&gt;
jail5_old                                 old-col01669-dump-cxld-20120203.gz&lt;br /&gt;
old-1009-cxld-20120608.tgz                old-col01687-dump-cxld-20120909&lt;br /&gt;
old-1012-cxld-20120411.tgz                old-col01790-dump-cxld-20120828&lt;br /&gt;
old-1052-cxld-20120721.tgz                old-col01812-dump-cxld-20120820&lt;br /&gt;
old-10631-cxld-20120622.tgz               old-col01938-mdfile-cxld-20120619.gz&lt;br /&gt;
old-10632-cxld-20120622.tgz               old-col02095-mdfile-noarchive-20120523.gz&lt;br /&gt;
old-10633-cxld-20120622.tgz               olddebian-3.0-v15-20110610.tgz&lt;br /&gt;
old-1236-cxld-20120621.tgz                oldmod_frontpage-deb30-v15-20110610.tgz&lt;br /&gt;
old-1381-cxld-20120404.tgz                oldmod_perl-deb30-v15-20110610.tgz&lt;br /&gt;
old-1422-cxld-20120721.tgz                oldmod_ssl-deb30-v15-20110610.tgz&lt;br /&gt;
old-14681-cxld-20120619.tgz               oldmysql-deb30-v15-20110610.tgz&lt;br /&gt;
old-1544-cxld-20120626.tgz                oldproftpd-deb30-v15-20110610.tgz&lt;br /&gt;
old-18351-cxld-20120605.tgz               old_virt14&lt;br /&gt;
old-1853-cxld-20120910.tgz                old_virt18&lt;br /&gt;
old-1963-cxld-20120206.tgz                oldwebmin-deb30-v15-20110610.tgz&lt;br /&gt;
old-1967-cxld-20120605.tgz                suse.virt11.20120421.tgz&lt;br /&gt;
old-1981-noarchive-20120729.tgz           virt11&lt;br /&gt;
old-2030-migrated-noarchive-20120727.tgz  virt12_old&lt;br /&gt;
old-2037-cxld-20120716.tgz                virt13_old&lt;br /&gt;
old-2065-cxld-20120727.tgz                virt16_old&lt;br /&gt;
old-2068-cxld-20120424.tgz                virt4_old&lt;br /&gt;
old-2085-cxld-20120531.tgz                virt5_old&lt;br /&gt;
old-364-cxld-20120904.tgz                 virt6_old&lt;br /&gt;
old-446-cxld-20120512.tgz                 virt7_old&lt;br /&gt;
old-613-cxld-20120601.tgz                 virt8_old&lt;br /&gt;
[root@backup1 /data/deprecated]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
virtX_old and jailX_old are permanently archived, so ignore those, as well as anything else marked not to delete or otherwise suspicious. Likewise, it&#039;s a good idea to hang onto the oldTEMPLATE.gz files for as long as we can.&lt;br /&gt;
Most of what we want to delete is dated with when it was deprecated, which makes this easy. So to remove files from 6 months ago (running this in October):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 /data/deprecated]# ls old*201204*&lt;br /&gt;
old-1012-cxld-20120411.tgz  old-2068-cxld-20120424.tgz&lt;br /&gt;
old-1381-cxld-20120404.tgz  old-col00123-mdfile-noarchive-20120417.gz&lt;br /&gt;
[root@backup1 /data/deprecated]# rm old*201204*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
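Since the deprecation date is embedded in the filename, globbing by date works well; as a cross-check, a find-based dry run on mtime can catch stragglers whose names do not follow the pattern. A minimal sketch, demonstrated against a throwaway directory with hypothetical filenames (on backup1 the path would be /data/deprecated):&lt;br /&gt;

```shell
# Dry run: list "old-*" archives untouched for ~180 days before deleting anything.
dep=$(mktemp -d)
touch -t 202001010000 "$dep/old-1234-cxld-20200101.tgz"  # stale: should be listed
touch "$dep/old-5678-cxld-recent.tgz"                    # fresh: should be skipped
find "$dep" -name 'old-*' -mtime +180 -print
```

Review the list by hand before turning -print into a delete; the DONTDELETE and permanently archived entries must never be caught by an automated sweep.&lt;br /&gt;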
&lt;br /&gt;
Every few months you will also want to remove some of the snapshot archives for mail. We typically save the 1st, 10th, and 20th of each month. To do this, set aside the dates you want to save, remove a month at a time, then restore the set-aside dates. Here&#039;s how that works:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 /data/www/daily]# ls&lt;br /&gt;
05                     08-10-11  10-04-10  11-10-10  12-07-29  12-09-21  12-11-14&lt;br /&gt;
06                     08-10-21  10-04-20  11-10-20  12-07-30  12-09-22  12-11-15&lt;br /&gt;
06-06-01-usr-home.tgz  08-11-01  10-05-01  11-11-01  12-07-31  12-09-23  12-11-16&lt;br /&gt;
06-07-01-usr-home.tgz  08-11-10  10-05-11  11-11-10  12-08-01  12-09-24  12-11-17&lt;br /&gt;
06-08-01-usr-home.tgz  08-11-20  10-05-20  11-11-20  12-08-02  12-09-25  12-11-18&lt;br /&gt;
06-09-01-usr-home.tgz  08-12-01  10-06-01  11-12-01  12-08-03  12-09-26  12-11-19&lt;br /&gt;
06-11-10               08-12-10  10-06-10  11-12-10  12-08-04  12-09-27  12-11-20&lt;br /&gt;
06-12-21               08-12-20  10-06-20  11-12-20  12-08-05  12-09-28  12-11-21&lt;br /&gt;
07-01-10               09-01-01  10-07-01  12-01-01  12-08-06  12-09-29  12-11-22&lt;br /&gt;
07-01-20               09-01-10  10-07-10  12-01-10  12-08-07  12-09-30  12-11-23&lt;br /&gt;
07-02-10               09-01-20  10-07-20  12-01-20  12-08-08  12-10-01  12-11-24&lt;br /&gt;
07-02-20               09-02-01  10-08-01  12-02-01  12-08-09  12-10-02  12-11-25&lt;br /&gt;
07-03-01               09-02-10  10-08-10  12-02-10  12-08-10  12-10-03  12-11-26&lt;br /&gt;
07-03-20               09-02-20  10-08-20  12-02-20  12-08-11  12-10-04  12-11-27&lt;br /&gt;
07-04-01               09-03-01  10-09-01  12-03-01  12-08-12  12-10-05  12-11-28&lt;br /&gt;
07-04-10               09-03-10  10-09-10  12-03-10  12-08-13  12-10-06  12-11-29&lt;br /&gt;
07-04-20               09-03-20  10-09-20  12-03-20  12-08-14  12-10-07  12-11-30&lt;br /&gt;
07-05-01               09-04-01  10-10-01  12-04-01  12-08-15  12-10-08  12-12-01&lt;br /&gt;
07-05-10               09-04-10  10-10-10  12-04-10  12-08-16  12-10-09  12-12-02&lt;br /&gt;
07-05-20               09-04-20  10-10-20  12-04-20  12-08-17  12-10-10  12-12-03&lt;br /&gt;
07-06-01               09-05-01  10-11-01  12-05-01  12-08-18  12-10-11  12-12-04&lt;br /&gt;
07-06-10               09-05-10  10-11-10  12-05-10  12-08-19  12-10-12  12-12-05&lt;br /&gt;
07-06-20               09-05-20  10-11-20  12-05-20  12-08-20  12-10-13  12-12-06&lt;br /&gt;
07-07-20               09-06-01  10-12-01  12-06-01  12-08-21  12-10-14  12-12-07&lt;br /&gt;
07-08-10               09-06-10  10-12-10  12-06-10  12-08-22  12-10-15  12-12-08&lt;br /&gt;
07-08-20               09-06-20  10-12-20  12-06-20  12-08-23  12-10-16  12-12-09&lt;br /&gt;
07-09-01               09-07-01  11-01-01  12-07-01  12-08-24  12-10-17  12-12-10&lt;br /&gt;
07-10-01               09-07-10  11-01-10  12-07-02  12-08-25  12-10-18  12-12-11&lt;br /&gt;
07-10-10               09-07-20  11-01-21  12-07-03  12-08-26  12-10-19  12-12-12&lt;br /&gt;
07-10-20               09-08-01  11-02-01  12-07-04  12-08-27  12-10-20  12-12-13&lt;br /&gt;
07-12-01               09-08-10  11-02-10  12-07-05  12-08-28  12-10-21  12-12-14&lt;br /&gt;
07-12-10               09-08-20  11-02-20  12-07-06  12-08-29  12-10-22  12-12-15&lt;br /&gt;
08-01-01               09-09-01  11-03-01  12-07-07  12-08-30  12-10-23  12-12-16&lt;br /&gt;
08-01-20               09-09-10  11-03-10  12-07-08  12-08-31  12-10-24  12-12-17&lt;br /&gt;
08-02-20               09-09-20  11-03-20  12-07-09  12-09-01  12-10-25  12-12-18&lt;br /&gt;
08-03-01               09-10-01  11-04-01  12-07-10  12-09-02  12-10-26  12-12-19&lt;br /&gt;
08-03-10               09-10-10  11-04-10  12-07-11  12-09-03  12-10-27  12-12-20&lt;br /&gt;
08-03-20               09-10-20  11-04-20  12-07-12  12-09-04  12-10-28  12-12-21&lt;br /&gt;
08-04-01               09-11-01  11-05-01  12-07-13  12-09-05  12-10-29  12-12-22&lt;br /&gt;
08-04-20               09-11-10  11-05-10  12-07-14  12-09-06  12-10-30  12-12-23&lt;br /&gt;
08-05-01               09-11-20  11-05-20  12-07-15  12-09-07  12-10-31  12-12-24&lt;br /&gt;
08-05-10               09-12-01  11-06-01  12-07-16  12-09-08  12-11-01  12-12-25&lt;br /&gt;
08-06-10               09-12-10  11-06-10  12-07-17  12-09-09  12-11-02  12-12-26&lt;br /&gt;
08-06-20               09-12-20  11-06-20  12-07-18  12-09-10  12-11-03  12-12-27&lt;br /&gt;
08-07-02               10-01-01  11-07-01  12-07-19  12-09-11  12-11-04  12-12-28&lt;br /&gt;
08-07-10               10-01-10  11-07-10  12-07-20  12-09-12  12-11-05  2008-10-23&lt;br /&gt;
08-07-20               10-01-20  11-07-20  12-07-21  12-09-13  12-11-06  bb.tgz&lt;br /&gt;
08-08-01               10-02-01  11-08-01  12-07-22  12-09-14  12-11-07  boot&lt;br /&gt;
08-08-10               10-02-10  11-08-10  12-07-23  12-09-15  12-11-08  current&lt;br /&gt;
08-08-21               10-02-20  11-08-20  12-07-24  12-09-16  12-11-09  hold&lt;br /&gt;
08-09-01               10-03-01  11-09-01  12-07-25  12-09-17  12-11-10&lt;br /&gt;
08-09-10               10-03-10  11-09-10  12-07-26  12-09-18  12-11-11&lt;br /&gt;
08-09-21               10-03-20  11-09-20  12-07-27  12-09-19  12-11-12&lt;br /&gt;
08-10-01               10-04-01  11-10-01  12-07-28  12-09-20  12-11-13&lt;br /&gt;
[root@backup1 /data/www/daily]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So we see that everything up to July 2012 has been pruned. To prune July 2012 we do the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mv 12-07-01 hold&lt;br /&gt;
mv 12-07-10 hold&lt;br /&gt;
mv 12-07-20 hold&lt;br /&gt;
rm -fr 12-07*&lt;br /&gt;
mv hold/* .&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=Routine_Maintenance&amp;diff=2200</id>
		<title>Routine Maintenance</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=Routine_Maintenance&amp;diff=2200"/>
		<updated>2020-02-25T17:41:33Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* check bb for warnings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Daily Tasks =&lt;br /&gt;
&lt;br /&gt;
== check load graphs == &lt;br /&gt;
&lt;br /&gt;
Click on the Load link in mgmt &lt;br /&gt;
&lt;br /&gt;
This screen shows you load levels on our servers and network traffic for critical machines (firewalls, backup servers).&lt;br /&gt;
&lt;br /&gt;
If you see load high or increasing &lt;br /&gt;
&lt;br /&gt;
FreeBSD: &lt;br /&gt;
run [[VPS_Management#jtop|jtop]] (or [[VPS_Management#jt|jt]] on &amp;gt; 7.x) and see if there are any runaway processes. Here are some examples of top entries that are definitely runaway processes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;79481 root      64   0  2256K  1056K CPU1   1  58:16 87.40% 87.40% nano&lt;br /&gt;
50650   1000    64   0  1852K  1112K RUN    0 207.9H 84.08% 84.08% screen&lt;br /&gt;
14829 www        2   0 39100K 31736K accept 0  104:24  46.54%  6.54% httpd&lt;br /&gt;
42065 root      61   0  1300K   844K RUN    1  47.8H 91.36% 91.36% ee&lt;br /&gt;
1328 www       56   0 18440K 10796K CPU1   0  64.4H 97.71% 97.71% httpd&lt;br /&gt;
26251 user      57   0  6124K  1160K CPU1   1  82.9H 98.44% 98.44% screen&lt;br /&gt;
89874 root      60   0  1352K   892K RUN    1  33.8H 65.82% 65.82% dialog&lt;br /&gt;
38656   1000    64   0  3088K  2136K CPU0   0 806:13 97.95% 97.95% StutBot&lt;br /&gt;
27630 root      64   0  1396K   972K RUN    1  76.8H 86.47% 86.47% ee&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Linux:&lt;br /&gt;
run [[VPS_Management#vwe|vwe]] to see which VPS’s have high loads. From there run &amp;lt;tt&amp;gt;[[VPS_Management#vp|vp]] &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt; and/or &amp;lt;tt&amp;gt;[[VPS_Management#vt|vt]] &amp;lt;veid&amp;gt;&amp;lt;/tt&amp;gt; to see what&#039;s going on in that system.&lt;br /&gt;
[[VPS_Management#vzstat|vzstat]] will also give you a nice picture of what&#039;s going on; systems with high numbers in the mlat column are the likely culprits.&lt;br /&gt;
&lt;br /&gt;
Examples of out-of-control processes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;12183 nobody    16   0  4916 1348  1340 R    45.5  0.0  4249m httpd&lt;br /&gt;
29266 #502      16   0  1852  796   792 R    22.5  0.0  1104m vim&lt;br /&gt;
23860 #41       16   0  5472 5472  2076 R    98.9  0.2  31:41 python&lt;br /&gt;
19227 bin       19   0  1688  716   652 R    99.9  0.0 321:08 wtrs_ui&lt;br /&gt;
 7762 apache    16   0   268  236   224 R    85.7  0.0  1010m ptrace&lt;br /&gt;
 4624 #501      20   0  4304 2400  2044 R    53.6  0.1 284:32 YoSucker&lt;br /&gt;
20451 #506      20   0  1876  820   816 R    17.2  0.0 169:35 vim&lt;br /&gt;
 8834 #514      20   0   900  724   672 R    77.6  0.0 382:30 neostats&lt;br /&gt;
31815 apache    14   0  3176 3176  1696 R    74.4  0.1   6:15 counter&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Just kill -9 them and be done with it.&lt;br /&gt;
&lt;br /&gt;
Also, anytime you see `kmod` or `ptrace` - kill those immediately no matter how much CPU they are using - they are attempts to exploit the Linux ptrace bug.  They won&#039;t work, but they suck a lot of CPU...&lt;br /&gt;
&lt;br /&gt;
Also, any other process that has been running at 90-100% CPU usage for a long period of time should be killed.&lt;br /&gt;
&lt;br /&gt;
However, there is an exception:&lt;br /&gt;
&lt;br /&gt;
if it is a mysqld, we don&#039;t want to kill their database.  What you want to do is &amp;lt;tt&amp;gt;[[VPS_Management#jpid|jpid]] &amp;lt;pid&amp;gt;&amp;lt;/tt&amp;gt; to see who owns it, and then email them the paste containing the instructions for the nanny. Or you can simply do a &amp;lt;tt&amp;gt;kill -1 PID&amp;lt;/tt&amp;gt; on the process to restart it.&lt;br /&gt;
&lt;br /&gt;
=== Load averages jump at night ===&lt;br /&gt;
&lt;br /&gt;
The load averages on the FreeBSD systems may jump up at night between 1 and 4 am because the backups are running. If this is what is causing the jump, you will see processes like `rsync` in top eating a lot of CPU time.&lt;br /&gt;
&lt;br /&gt;
== check backups ==&lt;br /&gt;
&lt;br /&gt;
mgmt -&amp;gt; Monitoring -&amp;gt; Backups and make sure every machine was backed up the previous night.&lt;br /&gt;
Also look at df on backup1 and backup2 to make sure no disk is approaching full, though bb should warn us in advance. Please note: if the backup script on any system finds that a directory to be backed up has been moved or no longer exists, it will generate an email to support@johncompanies.com, so you will know immediately the day after. A paste exists to notify the customer of a non-existent file/dir.&lt;br /&gt;
&lt;br /&gt;
== check bb for warnings  ==&lt;br /&gt;
mgmt -&amp;gt; BigBrother&lt;br /&gt;
&lt;br /&gt;
Some events don&#039;t generate pages (on purpose). You will only see them by going to the bb main page.&lt;br /&gt;
&lt;br /&gt;
== check jail5 for crashed VPSs  ==&lt;br /&gt;
&lt;br /&gt;
On jail5&lt;br /&gt;
  &lt;br /&gt;
  notrunning&lt;br /&gt;
&lt;br /&gt;
To restart a VPS &lt;br /&gt;
&lt;br /&gt;
  vm restart col0XXXX&lt;br /&gt;
&lt;br /&gt;
= Monthly Tasks =&lt;br /&gt;
== rotate pine sent mail (1st of month) ==&lt;br /&gt;
On the 1st of the month, before any emails are sent out, quit out of pine, then log back in. Sent mail from last month will be archived.&lt;br /&gt;
If you mess up and do it on the 3rd (for example), you can go into the previous month&#039;s saved email and save emails from the current month into the &amp;lt;tt&amp;gt;sent-mail&amp;lt;/tt&amp;gt; (current month) mailbox.&lt;br /&gt;
&lt;br /&gt;
== b/w caps ==&lt;br /&gt;
On the 1st: remove any bwcaps put into the firewall (only really applies if a bwcap was added because someone went over on b/w):&lt;br /&gt;
&lt;br /&gt;
 ipfw list|grep pipe&lt;br /&gt;
 ipfw del [each rule listed]&lt;br /&gt;
&lt;br /&gt;
NOTE: this cronjob on newgateway will do some of that for you, provided you used one of the following pipe #s:&lt;br /&gt;
 0 0 1 * * /sbin/ipfw del 3  4 5 17331&lt;br /&gt;
&lt;br /&gt;
-------------------&lt;br /&gt;
&lt;br /&gt;
We really don’t do this anymore since we have centralized traffic accounting with netflow, but for posterity:&lt;br /&gt;
&lt;br /&gt;
Make sure all machines reset counters to 0 after midnight on the 1st&lt;br /&gt;
Make sure they dumped a counter&lt;br /&gt;
&lt;br /&gt;
On each jail run:&lt;br /&gt;
 trafficgather.pl&lt;br /&gt;
&lt;br /&gt;
And on each virt:&lt;br /&gt;
 linuxtrafficgather.pl&lt;br /&gt;
&lt;br /&gt;
== Monthly RAID checks ==&lt;br /&gt;
&lt;br /&gt;
Every month we check the health of all our RAID-based systems and verify their parity.&lt;br /&gt;
To facilitate this, we&#039;ve created a simple script to start the process:&lt;br /&gt;
&lt;br /&gt;
 sh /root/verify.sh&lt;br /&gt;
&lt;br /&gt;
=== Adaptec controllers ===&lt;br /&gt;
Here&#039;s some sample output:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /usr/local/www/scripts# sh /root/verify.sh&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; container list /f&lt;br /&gt;
Executing: container list /full=TRUE&lt;br /&gt;
Num          Total  Oth Chunk          Scsi   Partition&lt;br /&gt;
Creation        System&lt;br /&gt;
Label Type   Size   Ctr Size   Usage   B:ID:L Offset:Size   State   RO Lk Task    Done%  Ent&lt;br /&gt;
Date   Time      Files&lt;br /&gt;
----- ------ ------ --- ------ ------- ------ ------------- ------- -- -- ------- ------ ---&lt;br /&gt;
------ -------- ------&lt;br /&gt;
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
 /dev/aacd0           mirror0          0:00:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
&lt;br /&gt;
 1    Mirror 33.9GB            Open    0:02:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
 /dev/aacd1           mirror1          0:03:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; disk list /f&lt;br /&gt;
Executing: disk list /full=TRUE&lt;br /&gt;
&lt;br /&gt;
B:ID:L  Device Type     Removable media  Vendor-ID Product-ID        Rev   Blocks    Bytes/Bl&lt;br /&gt;
ock Usage            Shared Rate&lt;br /&gt;
------  --------------  ---------------  --------- ----------------  ----- --------- --------&lt;br /&gt;
--- ---------------- ------ ----&lt;br /&gt;
0:00:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:01:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:02:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
0:03:0   Disk            N                FUJITSU   MAJ3364MC         3702  71390320  512&lt;br /&gt;
     Initialized      NO     160&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; disk show smart&lt;br /&gt;
Executing: disk show smart&lt;br /&gt;
&lt;br /&gt;
        Smart    Method of         Enable&lt;br /&gt;
        Capable  Informational     Exception  Performance  Error&lt;br /&gt;
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count&lt;br /&gt;
------  -------  ----------------  ---------  -----------  ------&lt;br /&gt;
0:00:0     Y            6             Y           N             0&lt;br /&gt;
0:01:0     Y            6             Y           N             0&lt;br /&gt;
0:02:0     Y            6             Y           N             0&lt;br /&gt;
0:03:0     Y            6             Y           N             0&lt;br /&gt;
0:06:0     N&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; task list&lt;br /&gt;
Executing: task list&lt;br /&gt;
&lt;br /&gt;
Controller Tasks&lt;br /&gt;
&lt;br /&gt;
TaskId Function  Done%  Container State Specific1 Specific2&lt;br /&gt;
------ -------- ------- --------- ----- --------- ---------&lt;br /&gt;
&lt;br /&gt;
No tasks currently running on controller&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; dia sh hi&lt;br /&gt;
Executing: diagnostic show history&lt;br /&gt;
No switches specified, defaulting to &amp;quot;/current&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 *** HISTORY BUFFER FROM CURRENT CONTROLLER RUN ***&lt;br /&gt;
&lt;br /&gt;
[00]: GetDiskLogEntry: container - 1, entry return 0&lt;br /&gt;
[01]: Container 1 started SCRUB task&lt;br /&gt;
[02]: Starting Mirror:1 scrub&lt;br /&gt;
[03]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[04]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[05]: UpdateDiskLogIndex - Set   - container 0, index 1&lt;br /&gt;
[06]: GetDiskLogEntry: container - 0, entry return 1&lt;br /&gt;
[07]: Container 0 started SCRUB task&lt;br /&gt;
[08]: Starting Mirror:0 scrub&lt;br /&gt;
[09]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[10]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[11]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[12]: Clear disk log: sector - 80, driveno 2&lt;br /&gt;
[13]: Clear disk log: sector - 80, driveno 3&lt;br /&gt;
[14]: Container 1 completed SCRUB task:&lt;br /&gt;
[15]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[16]: Clear disk log: sector - 81, driveno 1&lt;br /&gt;
[17]: Clear disk log: sector - 81, driveno 0&lt;br /&gt;
[18]: Container 0 completed SCRUB task:&lt;br /&gt;
[19]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[20]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[21]: Container 0 started SCRUB task&lt;br /&gt;
[22]: Starting Mirror:0 scrub&lt;br /&gt;
[23]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[24]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[25]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[26]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[27]: Container 1 started SCRUB task&lt;br /&gt;
[28]: Starting Mirror:1 scrub&lt;br /&gt;
[29]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[30]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[31]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[32]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[33]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[34]: Container 1 completed SCRUB task:&lt;br /&gt;
[35]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[36]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[37]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[38]: Container 0 completed SCRUB task:&lt;br /&gt;
[39]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[40]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[41]: Container 0 started SCRUB task&lt;br /&gt;
[42]: Starting Mirror:0 scrub&lt;br /&gt;
[43]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[44]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[45]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[46]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[47]: Container 1 started SCRUB task&lt;br /&gt;
[48]: Starting Mirror:1 scrub&lt;br /&gt;
[49]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[50]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[51]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[52]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[53]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[54]: Container 1 completed SCRUB task:&lt;br /&gt;
[55]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[56]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[57]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[58]: Container 0 completed SCRUB task:&lt;br /&gt;
[59]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[60]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[61]: Container 0 started SCRUB task&lt;br /&gt;
[62]: Starting Mirror:0 scrub&lt;br /&gt;
[63]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[64]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[65]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[66]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[67]: Container 1 started SCRUB task&lt;br /&gt;
[68]: Starting Mirror:1 scrub&lt;br /&gt;
[69]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[70]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[71]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[72]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[73]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[74]: Container 1 completed SCRUB task:&lt;br /&gt;
[75]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[76]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[77]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[78]: Container 0 completed SCRUB task:&lt;br /&gt;
[79]: UpdateDiskLogIndex - Set   - container 0, index 0&lt;br /&gt;
[80]: GetDiskLogEntry: container - 0, entry return 0&lt;br /&gt;
[81]: Container 0 started SCRUB task&lt;br /&gt;
[82]: Starting Mirror:0 scrub&lt;br /&gt;
[83]: Master disk: 1, start sector: 128, sector count = 71286784&lt;br /&gt;
[84]: Slave  disk: 0, start sector: 128, sector count = 71286784&lt;br /&gt;
[85]: UpdateDiskLogIndex - Set   - container 1, index 1&lt;br /&gt;
[86]: GetDiskLogEntry: container - 1, entry return 1&lt;br /&gt;
[87]: Container 1 started SCRUB task&lt;br /&gt;
[88]: Starting Mirror:1 scrub&lt;br /&gt;
[89]: Master disk: 2, start sector: 128, sector count = 71286784&lt;br /&gt;
[90]: Slave  disk: 3, start sector: 128, sector count = 71286784&lt;br /&gt;
[91]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[92]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[93]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[94]: Container 1 completed SCRUB task:&lt;br /&gt;
[95]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[96]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[97]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[98]: Container 0 completed SCRUB task:&lt;br /&gt;
[99]:&lt;br /&gt;
&lt;br /&gt;
========================&lt;br /&gt;
History Output Complete.&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt;&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
press enter when ready to run verify                                                 &amp;lt;INS&amp;gt;&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 0&lt;br /&gt;
Executing: container scrub 0&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 1&lt;br /&gt;
Executing: container scrub 1&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
when done run:                                                                       &lt;br /&gt;
&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nov  1 10:32:46 mail /kernel: aac0: **Monitor** Container 0 started SCRUB task&lt;br /&gt;
Nov  1 10:32:47 mail /kernel: aac0: **Monitor** Container 1 started SCRUB task&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
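The script itself isn&#039;t reproduced on this page, but based on the behavior shown in the sample output above, the Adaptec half of it can be sketched roughly as follows. This is a hypothetical reconstruction - the &amp;lt;tt&amp;gt;aaccli&amp;lt;/tt&amp;gt; command sequences come from the sample output; the structure around them is an assumption.&lt;br /&gt;

```shell
#!/bin/sh
# Hypothetical sketch of /root/verify.sh for the Adaptec case; the real
# script is not shown on this page. The CLI command sequences match the
# sample output above; everything else is an assumption.

status_cmds='open aac0
container list /f
disk list /f
disk show smart
task list
dia sh hi
exit'

scrub_cmds='open aac0
container scrub 0
container scrub 1
exit'

# Only talk to the controller if the Adaptec CLI is actually present.
if command -v aaccli >/dev/null 2>&1; then
    printf '%s\n' "$status_cmds" | aaccli    # health + history review
    printf 'press enter when ready to run verify'
    read _                                   # ^C here to abort
    printf '%s\n' "$scrub_cmds" | aaccli     # kick off both scrubs
    tail -f /var/log/messages                # watch for "completed SCRUB task"
fi
```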
Here&#039;s an analysis of what we&#039;re seeing and what we&#039;re looking for:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; container list /f&lt;br /&gt;
Executing: container list /full=TRUE&lt;br /&gt;
Num          Total  Oth Chunk          Scsi   Partition&lt;br /&gt;
Creation        System&lt;br /&gt;
Label Type   Size   Ctr Size   Usage   B:ID:L Offset:Size   State   RO Lk Task    Done%  Ent&lt;br /&gt;
Date   Time      Files&lt;br /&gt;
----- ------ ------ --- ------ ------- ------ ------------- ------- -- -- ------- ------ ---&lt;br /&gt;
------ -------- ------&lt;br /&gt;
 0    Mirror 33.9GB            Open    0:01:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
 /dev/aacd0           mirror0          0:00:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:32&lt;br /&gt;
&lt;br /&gt;
 1    Mirror 33.9GB            Open    0:02:0 64.0KB:33.9GB Normal                        0&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
 /dev/aacd1           mirror1          0:03:0 64.0KB:33.9GB Normal                        1&lt;br /&gt;
071002 05:39:50&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows the health of the arrays. You&#039;re looking for &#039;&#039;Normal&#039;&#039; under the State column, and the absence of a ! in the Offset:Size column - sometimes, you&#039;ll see this:&lt;br /&gt;
 64.0KB!33.9GB &lt;br /&gt;
&lt;br /&gt;
That indicates a problem.&lt;br /&gt;
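That ! marker is easy to check for mechanically in a saved copy of the listing. Here&#039;s a hypothetical shell helper - the function and file names are assumptions, not part of the page&#039;s tooling:&lt;br /&gt;

```shell
# Hypothetical helper, not part of the original tooling: scan a saved
# "container list /f" dump for the '!' marker that indicates a problem.
check_container_list() {
    if grep -q '!' "$1"; then
        echo "WARNING: '!' found in container list"
        return 1
    fi
    echo "container list clean"
}
```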
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; disk show smart&lt;br /&gt;
Executing: disk show smart&lt;br /&gt;
&lt;br /&gt;
        Smart    Method of         Enable&lt;br /&gt;
        Capable  Informational     Exception  Performance  Error&lt;br /&gt;
B:ID:L  Device   Exceptions(MRIE)  Control    Enabled      Count&lt;br /&gt;
------  -------  ----------------  ---------  -----------  ------&lt;br /&gt;
0:00:0     Y            6             Y           N             0&lt;br /&gt;
0:01:0     Y            6             Y           N             0&lt;br /&gt;
0:02:0     Y            6             Y           N             0&lt;br /&gt;
0:03:0     Y            6             Y           N             0&lt;br /&gt;
0:06:0     N&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows the SMART report for each disk. You&#039;re checking that the Error Count column is zero for every drive.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AAC0&amp;gt; task list&lt;br /&gt;
Executing: task list&lt;br /&gt;
&lt;br /&gt;
Controller Tasks&lt;br /&gt;
&lt;br /&gt;
TaskId Function  Done%  Container State Specific1 Specific2&lt;br /&gt;
------ -------- ------- --------- ----- --------- ---------&lt;br /&gt;
&lt;br /&gt;
No tasks currently running on controller&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for the absence of running tasks - it would be a bad sign to see a rebuild or verify running that you didn&#039;t initiate.&lt;br /&gt;
&lt;br /&gt;
With the history output, you&#039;re looking for any anomalies or events since the last time a verify was run. If you see a drive with lots of problems, you may want to take backups before allowing the verify to run since it could replicate errors onto the good drive.&lt;br /&gt;
&lt;br /&gt;
After the history output, the script prompts you to press enter to run the verify. If you&#039;re happy with everything you&#039;re seeing - the mirrors are healthy and the history looks good - it&#039;s safe to proceed; otherwise, ^C to exit. After you hit enter, it starts the verify and begins tailing the messages log file (so you can easily see when the verify is complete). Here&#039;s what that&#039;ll look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Nov  1 14:38:08 mail /kernel: aac0: **Monitor** Container 1 completed SCRUB task:&lt;br /&gt;
Nov  1 14:46:45 mail /kernel: aac0: **Monitor** Container 0 completed SCRUB task:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So, putting it all together, after hitting enter to start the verify, you&#039;ll see:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Adaptec SCSI RAID Controller Command Line Interface&lt;br /&gt;
Copyright 1998-2002 Adaptec, Inc. All rights reserved&lt;br /&gt;
---------------------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CLI &amp;gt; open aac0&lt;br /&gt;
Executing: open &amp;quot;aac0&amp;quot;&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 0&lt;br /&gt;
Executing: container scrub 0&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; contai scr 1&lt;br /&gt;
Executing: container scrub 1&lt;br /&gt;
&lt;br /&gt;
AAC0&amp;gt; exit&lt;br /&gt;
Executing: exit&lt;br /&gt;
&lt;br /&gt;
when done run:                                                                       &lt;br /&gt;
&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nov  1 10:32:46 mail /kernel: aac0: **Monitor** Container 0 started SCRUB task&lt;br /&gt;
Nov  1 10:32:47 mail /kernel: aac0: **Monitor** Container 1 started SCRUB task&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the scrub(s) (verify) complete - if the server has multiple logical drives, they run in parallel - exit the tail of the log file (^C) and run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
aaccli&lt;br /&gt;
open aac0&lt;br /&gt;
dia sh hi&lt;br /&gt;
c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This shows the diagnostic history; you&#039;re looking for the results of the most recent scrub:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[100]: Mirror Scrub Container:1   ErrorsFound:0&lt;br /&gt;
[101]: Clear disk log: sector - 81, driveno 2&lt;br /&gt;
[102]: Clear disk log: sector - 81, driveno 3&lt;br /&gt;
[103]: Container 1 completed SCRUB task:&lt;br /&gt;
[104]: Mirror Scrub Container:0   ErrorsFound:0&lt;br /&gt;
[105]: Clear disk log: sector - 80, driveno 1&lt;br /&gt;
[106]: Clear disk log: sector - 80, driveno 0&lt;br /&gt;
[107]: Container 0 completed SCRUB task:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
^C to exit the RAID CLI.&lt;br /&gt;
&lt;br /&gt;
If you see:&lt;br /&gt;
&lt;br /&gt;
 [104]: Mirror Scrub Container:0   ErrorsFound:5&lt;br /&gt;
&lt;br /&gt;
You&#039;ll want to rerun the verify on that drive until it shows 0, or perhaps replace the drive - the output should show which drive had the problem.&lt;br /&gt;
&lt;br /&gt;
Depending on the size and how busy the drive is, the verify can take anywhere from an hour to the better part of a day.&lt;br /&gt;
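Rather than eyeballing a 100-entry history buffer, the ErrorsFound counts can be extracted mechanically from a saved copy of the &amp;lt;tt&amp;gt;dia sh hi&amp;lt;/tt&amp;gt; output. A hypothetical helper (not part of the page&#039;s tooling):&lt;br /&gt;

```shell
# Hypothetical helper: print any "Mirror Scrub" history line whose
# ErrorsFound count is nonzero, and exit 1 if one is found.
scrub_errors() {
    awk -F 'ErrorsFound:' '
        /Mirror Scrub/ && $2 + 0 > 0 { print; bad = 1 }
        END { exit bad }
    ' "$1"
}
```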
&lt;br /&gt;
You will notice that the diagnostic history is not shown on our more modern Adaptec cards (i.e., any Adaptec card not in a Dell 2450). The history on those cards is never cleared, so there&#039;s simply too much data to display and it crashes the CLI - don&#039;t bother trying to view it. That does make it harder to see whether problems are occurring, so you just need to watch the scrub and confirm it reaches 100%. You will also notice that on some servers there&#039;s no tail of messages; again, that&#039;s because nothing is logged there about the completion of the scrub. In that case, go into the CLI and periodically run &amp;lt;tt&amp;gt;task list&amp;lt;/tt&amp;gt; to monitor scrub progress.&lt;br /&gt;
&lt;br /&gt;
See [[RAIC_CLI#Adaptec|Adaptec RAID CLI Reference]] for more details on how to use the CLI.&lt;br /&gt;
&lt;br /&gt;
=== DELL (LSI-based) SAS controllers ===&lt;br /&gt;
&lt;br /&gt;
Here&#039;s what the output looks like when running verify.sh on an LSI-based card:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
jail2 /mnt/data2# sh /root/verify.sh&lt;br /&gt;
&lt;br /&gt;
Adapter #0&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 0&lt;br /&gt;
Device Id: 0&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018396142&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 0(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DESN&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 1&lt;br /&gt;
Device Id: 1&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018395db2&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 1(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DERV&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 2&lt;br /&gt;
Device Id: 2&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50006eece89&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 2(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300555SS     T2113LM4BFBZ&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 3&lt;br /&gt;
Device Id: 3&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50006eee035&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 3(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300555SS     T2113LM4BGF7&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 4&lt;br /&gt;
Device Id: 4&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x5000c50004bd7ea5&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 4(path0)&lt;br /&gt;
Inquiry Data: SEAGATE ST3300656SS     HS093QP0G8SW&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 5&lt;br /&gt;
Device Id: 5&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 286102MB [0x22ecb25c Sectors]&lt;br /&gt;
Non Coerced Size: 285590MB [0x22dcb25c Sectors]&lt;br /&gt;
Coerced Size: 285568MB [0x22dc0000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e01f1c4112&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 5(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MBA3300RC       D306BJ15P9201W06&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 0 (Target Id: 0)&lt;br /&gt;
Name:&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:139392MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 1 (Target Id: 1)&lt;br /&gt;
Name:MIRROR1&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 2 (Target Id: 2)&lt;br /&gt;
Name:MIRROR2&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
Battery FRU     : N/A&lt;br /&gt;
Battery Warning                  : Enabled&lt;br /&gt;
Memory Correctable Errors   : 0&lt;br /&gt;
Memory Uncorrectable Errors : 0&lt;br /&gt;
BBU             : Present&lt;br /&gt;
BBU                             : Yes&lt;br /&gt;
Cache When BBU Bad               : Disabled&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before pressing enter, here&#039;s what we&#039;re looking for:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Enclosure Device ID: 32&lt;br /&gt;
Slot Number: 0&lt;br /&gt;
Device Id: 0&lt;br /&gt;
Sequence Number: 2&lt;br /&gt;
Media Error Count: 0&lt;br /&gt;
Other Error Count: 0&lt;br /&gt;
Predictive Failure Count: 0&lt;br /&gt;
Last Predictive Failure Event Seq Number: 0&lt;br /&gt;
PD Type: SAS&lt;br /&gt;
Raw Size: 140014MB [0x11177328 Sectors]&lt;br /&gt;
Non Coerced Size: 139502MB [0x11077328 Sectors]&lt;br /&gt;
Coerced Size: 139392MB [0x11040000 Sectors]&lt;br /&gt;
Firmware state: Online&lt;br /&gt;
SAS Address(0): 0x500000e018396142&lt;br /&gt;
SAS Address(1): 0x0&lt;br /&gt;
Connected Port Number: 0(path0)&lt;br /&gt;
Inquiry Data: FUJITSU MAX3147RC       D207DQ03P7A0DESN&lt;br /&gt;
Foreign State: None&lt;br /&gt;
Media Type: Hard Disk Device&lt;br /&gt;
Device Speed: Unknown&lt;br /&gt;
Link Speed: Unknown&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This output is shown for each physical drive in the system. We&#039;re confirming that its Firmware state is &#039;&#039;Online&#039;&#039;, and that Media Error Count, Other Error Count, and Predictive Failure Count are all zero (or near zero).&lt;br /&gt;
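These per-drive checks can also be automated against a saved copy of the listing. A hypothetical helper (the names are assumptions, not part of verify.sh as shown):&lt;br /&gt;

```shell
# Hypothetical helper: flag drives whose Firmware state isn't Online, or
# whose Media/Other Error Count or Predictive Failure Count is nonzero.
check_pd_list() {
    awk -F ': ' '
        /^Firmware state:/ && $2 != "Online"                    { print; bad = 1 }
        /(Error Count|Predictive Failure Count):/ && $2 + 0 > 0 { print; bad = 1 }
        END { exit bad }
    ' "$1"
}
```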
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Adapter 0 -- Virtual Drive Information:&lt;br /&gt;
Virtual Disk: 1 (Target Id: 1)&lt;br /&gt;
Name:MIRROR1&lt;br /&gt;
RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0&lt;br /&gt;
Size:285568MB&lt;br /&gt;
State: Optimal&lt;br /&gt;
Stripe Size: 64kB&lt;br /&gt;
Number Of Drives:2&lt;br /&gt;
Span Depth:1&lt;br /&gt;
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&lt;br /&gt;
Access Policy: Read/Write&lt;br /&gt;
Disk Cache Policy: Disk&#039;s Default&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the output for each logical drive. We&#039;re looking for a State of &#039;&#039;Optimal&#039;&#039;. Also confirm &amp;lt;tt&amp;gt;Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
Battery FRU     : N/A&lt;br /&gt;
Battery Warning                  : Enabled&lt;br /&gt;
Memory Correctable Errors   : 0&lt;br /&gt;
Memory Uncorrectable Errors : 0&lt;br /&gt;
BBU             : Present&lt;br /&gt;
BBU                             : Yes&lt;br /&gt;
Cache When BBU Bad               : Disabled&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Confirm that the battery is present and error-free.&lt;br /&gt;
&lt;br /&gt;
If all that checks out, you&#039;re ready to proceed with the verify. After you press enter, the verify starts and here&#039;s what you&#039;ll see:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Start Check Consistency on Virtual Drive 0 (target id: 0) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
Start Check Consistency on Virtual Drive 1 (target id: 1) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
Start Check Consistency on Virtual Drive 2 (target id: 2) Success.&lt;br /&gt;
&lt;br /&gt;
Exit Code: 0x00&lt;br /&gt;
&lt;br /&gt;
  Check Consistency&lt;br /&gt;
&lt;br /&gt;
 Progress of Virtual Drives...&lt;br /&gt;
&lt;br /&gt;
  Virtual Drive #              Percent Complete                       Time Elps&lt;br /&gt;
          0         °°°°°°°°°°°°°°°°°°°°°°°00 %°°°°°°°°°°°°°°°°°°°°°°° 00:00:03&lt;br /&gt;
          1         °°°°°°°°°°°°°°°°°°°°°°°00 %°°°°°°°°°°°°°°°°°°°°°°° 00:00:02&lt;br /&gt;
          2         °°°°°°°°°°°°°°°°°°°°°°°00 %°°°°°°°°°°°°°°°°°°°°°°° 00:00:01&lt;br /&gt;
&lt;br /&gt;
    Press &amp;lt;ESC&amp;gt; key to quit...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The progress for each drive is displayed until all drives have completed the verify. We just want to make sure each drive runs to completion. No follow-up is needed, though there is probably a log or history where we could get more info.&lt;br /&gt;
&lt;br /&gt;
You will notice that jail7 does not run a verify - that&#039;s on purpose: the last time we tried, it crashed the system. On that machine the verify must be run from the BIOS (taking the system offline for a couple of hours).&lt;br /&gt;
&lt;br /&gt;
See [[RAIC_CLI#LSI|LSI RAID CLI Reference]] for more details on how to use the CLI.&lt;br /&gt;
&lt;br /&gt;
=== LSI-based controllers (megaraid) ===&lt;br /&gt;
&lt;br /&gt;
There is a CLI for these cards; however, it&#039;s easier to use the curses GUI app, &amp;lt;tt&amp;gt;megamgr&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
This currently applies only to these servers: virt15, virt16, and firewall2.&lt;br /&gt;
&lt;br /&gt;
To run:&lt;br /&gt;
&lt;br /&gt;
 # cd /usr/local/sbin/; megamgr&lt;br /&gt;
&lt;br /&gt;
Main menu:&lt;br /&gt;
&amp;lt;pre&amp;gt;²ÚÄÄManagement MenuÄÄ¿²&lt;br /&gt;
²³ Configure         ³²&lt;br /&gt;
²³ Initialize        ³²&lt;br /&gt;
²³ Objects           ³²&lt;br /&gt;
²³ Rebuild           ³²&lt;br /&gt;
²³ Check Consistency ³²&lt;br /&gt;
²³ Advanced Menu     ³²&lt;br /&gt;
²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before you check consistency, make sure the arrays are healthy.&lt;br /&gt;
&lt;br /&gt;
Objects -&amp;gt; Physical Drive&lt;br /&gt;
&lt;br /&gt;
Then look to make sure they&#039;re all &amp;lt;tt&amp;gt;ONLIN&amp;lt;/tt&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²ÚÄÄÄÄÄÄÄÄÄÄÄÄObjects - PHYSICAL DRIVE SELECTION MENUÄÄÄÄÄÄÄÄÄÄÄÄÄ¿²²²²²²&lt;br /&gt;
²²²²²²²²³                                                                ³²²²²²²&lt;br /&gt;
²²²²²²²²³                         Channel-1                              ³²²²²²²&lt;br /&gt;
²ÚÄÄMana³                     ID ÉÍÍÍÍÍÍÍÍÍÍÍÍÍÍ»x                       ³²²²²²²&lt;br /&gt;
²³ Confi³                       0º* ONLIN A01-01º                        ³²²²²²²&lt;br /&gt;
²³ Initi³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²³ Objec³                       1º* ONLIN A01-02º                        ³²²²²²²&lt;br /&gt;
²³ Rebui³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²³ Check³                       2º* ONLIN A02-01º                        ³²²²²²²&lt;br /&gt;
²³ Advan³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²ÀÄÄÄÄÄÄ³                       3º* ONLIN A02-02º                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                       4º* ONLIN A03-01º                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                       5º* ONLIN A03-02ºþ                       ³²²²²²²&lt;br /&gt;
²²²²²²²²³                        ÌÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¹                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                       6º*             º                        ³²²²²²²&lt;br /&gt;
²²²²²²²²³                        ÈÍÍÍÍÍÍÍÍÍÍÍÍÍÍ¼x                       ³²²²²²²&lt;br /&gt;
²²²²²²²²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²&lt;br /&gt;
²²²²²²²²ÚÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄ¿²²²²²²²²&lt;br /&gt;
²²²²²²²²³Ch-1 ID-5  DISK      140013MB  SEAGATE  ST3146707LC      0003 ³²²²²²²²²&lt;br /&gt;
²²²²²²²²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once that&#039;s done, hit Escape once, then the back arrow, to move back to the Objects menu.&lt;br /&gt;
Then select Objects -&amp;gt; Logical Drive -&amp;gt; Logical Drive 1 -&amp;gt; Check Consistency -&amp;gt; YES.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²ÚÄLogical Drives(02)Ä¿²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²³ Logical Drive 1    ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²ÚÄÄÄÄObjectsÄÄÄ³ Logical Drive 2    ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²ÚÄÄManagemen³ Adapter      ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Configure ³ Logical Drive  ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Initialize³ Physical Drive ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Objects   ³ Channel        ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Rebuild   ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Check Consistency ³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²³ Advanced Menu     ³²²²²²²²ÚÄÄÄÄLogical Drive 1ÄÄÄÄÄ¿²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²³ Initialize    ÚÄCheck Consistency-1  ?Ä¿²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²³ Check Consiste³   YES                  ³²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²³ View/Update Pa³   NO                   ³²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²ÚÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄ¿²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²³Select YES Or NO³²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²ÀÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÄÙ²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²²&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then watch the progress. When it finishes, escape back to the Logical Drive menu and repeat for Logical Drive 2. If you ^C or accidentally escape out, you can re-enter by running the same commands and watch the progress again (the check won&#039;t restart).&lt;br /&gt;
&lt;br /&gt;
You can exit megamgr by escaping out or with ^C.&lt;br /&gt;
&lt;br /&gt;
=== 3ware ===&lt;br /&gt;
&lt;br /&gt;
We are using 3ware controllers on backup1 &amp;amp; backup2. Running the verify script gives different output depending on the controller model:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
backup2 /d2# sh /root/verify.sh&lt;br /&gt;
Controller: c0&lt;br /&gt;
-------------&lt;br /&gt;
Driver:   1.50.01.002&lt;br /&gt;
Model:    7500-8&lt;br /&gt;
FW:       FE7X 1.05.00.068&lt;br /&gt;
BIOS:     BE7X 1.08.00.048&lt;br /&gt;
Monitor:  ME7X 1.01.00.040&lt;br /&gt;
Serial #: F11605A3180172&lt;br /&gt;
PCB:      Rev3&lt;br /&gt;
PCHIP:    1.30-33&lt;br /&gt;
ACHIP:    3.20&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# of units: 3&lt;br /&gt;
        Unit 0: JBOD 186.31 GB ( 390721968 blocks): OK&lt;br /&gt;
        Unit 1: RAID 5 465.77 GB ( 976790016 blocks): DEGRADED&lt;br /&gt;
        Unit 5: RAID 5 698.65 GB ( 1465185024 blocks): DEGRADED&lt;br /&gt;
&lt;br /&gt;
# of ports: 8&lt;br /&gt;
        Port 0: WDC WD2000JB-00KFA0 WD-WCAMT1451690 186.31 GB (390721968 blocks): OK(unit 0)&lt;br /&gt;
        Port 1: WDC WD2500JB-00GVC0 WD-WCAL78219488 232.88 GB (488397168 blocks): OK(unit 1)&lt;br /&gt;
        Port 2: WDC WD2000  0.00 MB (0 blocks): OK(NO UNIT)&lt;br /&gt;
        Port 3: WDC WD2500JB-00GVC0 WD-WMAL73882417 232.88 GB (488397168 blocks): OK(unit 1)&lt;br /&gt;
        Port 4: WDC WD2000  0.00 MB (0 blocks): OK(NO UNIT)&lt;br /&gt;
        Port 5: WDC WD2500JB-00GVA0 WD-WMAL71338097 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
        Port 6: WDC WD2500JB-32EVA0 WD-WMAEH1301595 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
        Port 7: WDC WD2500JB-00GVC0 WD-WCAL78165566 232.88 GB (488397168 blocks): OK(unit 5)&lt;br /&gt;
Controller: c1&lt;br /&gt;
-------------&lt;br /&gt;
Driver:   1.50.01.002&lt;br /&gt;
Model:    7500-8&lt;br /&gt;
FW:       FE7X 1.05.00.068&lt;br /&gt;
BIOS:     BE7X 1.08.00.048&lt;br /&gt;
Monitor:  ME7X 1.01.00.040&lt;br /&gt;
Serial #: F11605A3180167&lt;br /&gt;
PCB:      Rev3&lt;br /&gt;
PCHIP:    1.30-33&lt;br /&gt;
ACHIP:    3.20&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# of units: 2&lt;br /&gt;
        Unit 0: RAID 5 698.65 GB ( 1465185024 blocks): OK&lt;br /&gt;
        Unit 4: RAID 5 698.65 GB ( 1465185024 blocks): OK&lt;br /&gt;
&lt;br /&gt;
# of ports: 8&lt;br /&gt;
        Port 0: WDC WD2500JB-00GVA0 WD-WMAL71301258 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 1: WDC WD2500JB-00GVA0 WD-WMAL71322705 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 2: WDC WD2500JB-00GVA0 WD-WMAL71945050 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 3: WDC WD2500JB-00GVA0 WD-WMAL71316201 232.88 GB (488397168 blocks): OK(unit 0)&lt;br /&gt;
        Port 4: WDC WD2500JB-00GVC0 WD-WCAL78323749 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
        Port 5: WDC WD3200AAJB-00J3A0 WD-WCAV2V689068 298.09 GB (625142448 blocks): OK(unit 4)&lt;br /&gt;
        Port 6: WDC WD2500JB-00GVC0 WD-WCAL78234420 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
        Port 7: WDC WD2500JB-00GVC0 WD-WCAL78592213 232.88 GB (488397168 blocks): OK(unit 4)&lt;br /&gt;
backup2 /d2#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On backup2, look for all units and ports reporting OK; on this controller the script only reports status and does not start a verify.&lt;br /&gt;
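Eyeballing the unit list is error-prone, so a quick filter helps. This is a hedged sketch (not the actual /root/verify.sh; &lt;tt&gt;check_units&lt;/tt&gt; is a name we made up) that scans captured verify-script output and prints only units that are not OK:

```shell
# Sketch: print any unit line from saved verify-script output that is not OK.
# check_units is a hypothetical helper, not part of tw_cli.
check_units() {
    grep -E '^[[:space:]]*(Unit [0-9]|u[0-9])' \
      | grep -vE '(: OK$|[[:space:]]OK[[:space:]])'
}

# Sample lines copied from the backup2 output above:
sample='        Unit 0: JBOD 186.31 GB ( 390721968 blocks): OK
        Unit 1: RAID 5 465.77 GB ( 976790016 blocks): DEGRADED
        Unit 5: RAID 5 698.65 GB ( 1465185024 blocks): DEGRADED'

printf '%s\n' "$sample" | check_units   # prints only the two DEGRADED lines
```

The second pattern also matches the newer `u0    RAID-5    OK ...` table format, so the same filter works on backup3-style output.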
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@backup3 ~]# sh /root/verify.sh&lt;br /&gt;
/c2 Driver Version = 1.26.02.002&lt;br /&gt;
/c2 Model = 8006-2LP&lt;br /&gt;
/c2 Available Memory = 512KB&lt;br /&gt;
/c2 Firmware Version = FE8S 1.05.00.068&lt;br /&gt;
/c2 Bios Version = BE7X 1.08.00.048&lt;br /&gt;
/c2 Boot Loader Version = ME7X 1.01.00.040&lt;br /&gt;
/c2 Serial Number = L018501C6481395&lt;br /&gt;
/c2 PCB Version = Rev5&lt;br /&gt;
/c2 PCHIP Version = 1.30-66&lt;br /&gt;
/c2 ACHIP Version = 3.20&lt;br /&gt;
/c2 Total Optimal Units = 1&lt;br /&gt;
/c2 Not Optimal Units = 0&lt;br /&gt;
&lt;br /&gt;
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u1    RAID-1    OK             -       -       -       931.512   ON     -&lt;br /&gt;
&lt;br /&gt;
Port   Status           Unit   Size        Blocks        Serial&lt;br /&gt;
---------------------------------------------------------------&lt;br /&gt;
p0     OK               u1     931.51 GB   1953525168    WD-WMAW31148820&lt;br /&gt;
p1     OK               u1     931.51 GB   1953525168    WD-WCATR0277515&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ctl  Date                        Severity  Alarm Message&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
Sending start verify message to /c2/u1 ... Done.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
when done run:&lt;br /&gt;
tw_cli /c2 show alarms&lt;br /&gt;
&lt;br /&gt;
[root@backup3 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On backup3 the script automatically starts the verify; when it&#039;s done, run &amp;lt;tt&amp;gt;tw_cli /c2 show alarms&amp;lt;/tt&amp;gt; as instructed to see the results.&lt;br /&gt;
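The alarm log is dominated by routine sector-repair WARNINGs, so real faults are easy to miss. A hedged sketch (&lt;tt&gt;alarms_errors&lt;/tt&gt; is our own helper name, not part of tw_cli) that filters a saved &lt;tt&gt;tw_cli /c2 show alarms&lt;/tt&gt; log down to ERROR entries:

```shell
# Sketch: keep only ERROR entries from a saved tw_cli alarm log so that
# serious faults (e.g. drive timeouts) stand out from routine WARNINGs.
# alarms_errors is a hypothetical helper, not part of tw_cli.
alarms_errors() {
    grep -E '[[:space:]]ERROR[[:space:]]'
}

# Sample lines copied from the backup1 log below:
log='c0   [Sat May 12 11:27:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x6AE571C
c0   [Wed Aug  8 03:21:55 2012]  ERROR     Drive timeout detected: port=1'

printf '%s\n' "$log" | alarms_errors   # prints only the drive-timeout line
```

Repeated sector repairs on the same port (see the backup1 log) are also worth watching even though they are only WARNINGs, since they usually precede a drive failure.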
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@backup1 /data/deprecated]# sh /root/verify.sh&lt;br /&gt;
/c0 Driver Version = 2.26.02.010&lt;br /&gt;
/c0 Model = 9650SE-8LPML&lt;br /&gt;
/c0 Available Memory = 224MB&lt;br /&gt;
/c0 Firmware Version = FE9X 4.06.00.004&lt;br /&gt;
/c0 Bios Version = BE9X 4.05.00.015&lt;br /&gt;
/c0 Boot Loader Version = BL9X 3.08.00.001&lt;br /&gt;
/c0 Serial Number = L326025A8270177&lt;br /&gt;
/c0 PCB Version = Rev 032&lt;br /&gt;
/c0 PCHIP Version = 2.00&lt;br /&gt;
/c0 ACHIP Version = 1.90&lt;br /&gt;
/c0 Number of Ports = 8&lt;br /&gt;
/c0 Number of Drives = 6&lt;br /&gt;
/c0 Number of Units = 1&lt;br /&gt;
/c0 Total Optimal Units = 1&lt;br /&gt;
/c0 Not Optimal Units = 0&lt;br /&gt;
/c0 JBOD Export Policy = off&lt;br /&gt;
/c0 Disk Spinup Policy = 1&lt;br /&gt;
/c0 Spinup Stagger Time Policy (sec) = 1&lt;br /&gt;
/c0 Auto-Carving Policy = off&lt;br /&gt;
/c0 Auto-Carving Size = 2048 GB&lt;br /&gt;
/c0 Auto-Rebuild Policy = on&lt;br /&gt;
/c0 Controller Bus Type = PCIe&lt;br /&gt;
/c0 Controller Bus Width = 1 lane&lt;br /&gt;
/c0 Controller Bus Speed = 2.5 Gbps/lane&lt;br /&gt;
&lt;br /&gt;
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u0    RAID-5    OK             -       -       64K     4656.56   ON     ON&lt;br /&gt;
&lt;br /&gt;
Port   Status           Unit   Size        Blocks        Serial&lt;br /&gt;
---------------------------------------------------------------&lt;br /&gt;
p0     OK               u0     931.51 GB   1953525168    9QJ1Y017&lt;br /&gt;
p1     DEVICE-ERROR     u0     931.51 GB   1953525168    9QJ1ZN07&lt;br /&gt;
p2     OK               u0     931.51 GB   1953525168    9QJ2XK1R&lt;br /&gt;
p3     OK               u0     931.51 GB   1953525168    9QJ2010B&lt;br /&gt;
p4     OK               u0     1.36 TB     2930277168    6XW0L36T&lt;br /&gt;
p5     OK               u0     931.51 GB   1953525168    WD-WMATV2444836&lt;br /&gt;
p6     NOT-PRESENT      -      -           -             -&lt;br /&gt;
p7     NOT-PRESENT      -      -           -             -&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ctl  Date                        Severity  Alarm Message&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
c0   [Sat May 12 11:27:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x6AE571C&lt;br /&gt;
c0   [Sat May 12 19:16:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x40E62A23&lt;br /&gt;
c0   [Sat May 12 21:40:56 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon May 14 00:53:53 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFA7&lt;br /&gt;
c0   [Mon May 14 00:58:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAA&lt;br /&gt;
c0   [Mon May 14 04:35:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x8FEF2CF&lt;br /&gt;
c0   [Mon May 14 04:38:22 2012]  WARNING   Sector repair completed: port=0, LBA=0x8FEF2D1&lt;br /&gt;
c0   [Tue May 15 22:53:46 2012]  WARNING   Sector repair completed: port=0, LBA=0x13C2622&lt;br /&gt;
c0   [Wed May 16 00:39:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A67F&lt;br /&gt;
c0   [Wed May 16 00:39:37 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A685&lt;br /&gt;
c0   [Wed May 16 00:47:18 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A687&lt;br /&gt;
c0   [Sat May 19 00:01:44 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat May 19 04:46:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x365A68E&lt;br /&gt;
c0   [Sat May 19 13:37:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAC&lt;br /&gt;
c0   [Sat May 19 13:37:28 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFAE&lt;br /&gt;
c0   [Sat May 19 13:37:47 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFB1&lt;br /&gt;
c0   [Sat May 19 13:38:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x7B8CFB3&lt;br /&gt;
c0   [Sat May 19 21:47:45 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed May 23 12:21:41 2012]  INFO      Cache synchronization completed: unit=0&lt;br /&gt;
c0   [Fri May 25 00:08:19 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA76C&lt;br /&gt;
c0   [Fri May 25 00:08:34 2012]  WARNING   Sector repair completed: port=0, LBA=0x12E4901&lt;br /&gt;
c0   [Fri May 25 00:09:33 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA773&lt;br /&gt;
c0   [Fri May 25 00:39:12 2012]  WARNING   Sector repair completed: port=0, LBA=0x42C597B&lt;br /&gt;
c0   [Sat May 26 00:01:45 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat May 26 00:42:05 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1AC&lt;br /&gt;
c0   [Sat May 26 00:51:43 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1AE&lt;br /&gt;
c0   [Sat May 26 01:54:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x2F0D302&lt;br /&gt;
c0   [Sat May 26 02:06:38 2012]  WARNING   Sector repair completed: port=0, LBA=0x12DA777&lt;br /&gt;
c0   [Sat May 26 02:07:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x12E48FE&lt;br /&gt;
c0   [Sat May 26 04:20:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x2F0D306&lt;br /&gt;
c0   [Sat May 26 04:32:58 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1B1&lt;br /&gt;
c0   [Sat May 26 04:33:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1B3&lt;br /&gt;
c0   [Sat May 26 04:33:44 2012]  WARNING   Sector repair completed: port=1, LBA=0x323C1BA&lt;br /&gt;
c0   [Sat May 26 05:24:07 2012]  WARNING   Sector repair completed: port=1, LBA=0x3F83862&lt;br /&gt;
c0   [Sat May 26 05:25:09 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 06:08:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x4CDC6A2&lt;br /&gt;
c0   [Sat May 26 09:49:35 2012]  WARNING   Sector repair completed: port=1, LBA=0x6CACD4A&lt;br /&gt;
c0   [Sat May 26 18:10:44 2012]  WARNING   Sector repair completed: port=1, LBA=0x18F425EA&lt;br /&gt;
c0   [Sat May 26 19:45:40 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:22:52 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:15 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:22 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:35 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:41 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:49 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:23:57 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:24:02 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 20:54:41 2012]  WARNING   Verify fixed data/parity mismatch: unit=0&lt;br /&gt;
c0   [Sat May 26 22:00:30 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Jun  2 00:01:43 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun  2 00:30:17 2012]  WARNING   Sector repair completed: port=0, LBA=0x2B911E4&lt;br /&gt;
c0   [Sat Jun  2 00:50:57 2012]  WARNING   Sector repair completed: port=0, LBA=0x5A807CA6&lt;br /&gt;
c0   [Sat Jun  2 04:13:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x2D18291&lt;br /&gt;
c0   [Sat Jun  2 04:13:35 2012]  WARNING   Sector repair completed: port=0, LBA=0x2D1829F&lt;br /&gt;
c0   [Sat Jun  2 21:48:02 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Jun  4 04:40:34 2012]  WARNING   Sector repair completed: port=1, LBA=0x4AF8098F&lt;br /&gt;
c0   [Tue Jun  5 00:28:19 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5CD&lt;br /&gt;
c0   [Tue Jun  5 00:33:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5CF&lt;br /&gt;
c0   [Thu Jun  7 00:34:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B5F&lt;br /&gt;
c0   [Thu Jun  7 00:38:50 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B61&lt;br /&gt;
c0   [Fri Jun  8 00:07:13 2012]  WARNING   Sector repair completed: port=0, LBA=0xC131F6B&lt;br /&gt;
c0   [Sat Jun  9 00:01:41 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun  9 00:29:11 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C7614D&lt;br /&gt;
c0   [Sat Jun  9 00:38:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C76152&lt;br /&gt;
c0   [Sat Jun  9 04:02:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5D1&lt;br /&gt;
c0   [Sat Jun  9 04:02:52 2012]  WARNING   Sector repair completed: port=1, LBA=0x263C5D3&lt;br /&gt;
c0   [Sat Jun  9 04:07:32 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E12&lt;br /&gt;
c0   [Sat Jun  9 04:07:57 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E15&lt;br /&gt;
c0   [Sat Jun  9 04:08:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E17&lt;br /&gt;
c0   [Sat Jun  9 04:08:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x27D3E19&lt;br /&gt;
c0   [Sat Jun  9 04:15:04 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B64&lt;br /&gt;
c0   [Sat Jun  9 04:15:26 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B66&lt;br /&gt;
c0   [Sat Jun  9 04:15:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B68&lt;br /&gt;
c0   [Sat Jun  9 04:15:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B6C&lt;br /&gt;
c0   [Sat Jun  9 04:16:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x2A07B6E&lt;br /&gt;
c0   [Sat Jun  9 21:48:52 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Jun 14 00:40:10 2012]  WARNING   Sector repair completed: port=0, LBA=0x334F14B&lt;br /&gt;
c0   [Sat Jun 16 00:01:38 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 16 21:16:19 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Jun 19 02:03:43 2012]  WARNING   Sector repair completed: port=1, LBA=0xFE41EAD&lt;br /&gt;
c0   [Wed Jun 20 02:30:02 2012]  WARNING   Sector repair completed: port=1, LBA=0xD99145C&lt;br /&gt;
c0   [Sat Jun 23 00:01:36 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 23 04:27:04 2012]  WARNING   Sector repair completed: port=1, LBA=0x2FAD311&lt;br /&gt;
c0   [Sat Jun 23 06:52:38 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC8D&lt;br /&gt;
c0   [Sat Jun 23 06:53:03 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC91&lt;br /&gt;
c0   [Sat Jun 23 06:53:21 2012]  WARNING   Sector repair completed: port=1, LBA=0x7C6AC94&lt;br /&gt;
c0   [Sat Jun 23 17:00:22 2012]  WARNING   Sector repair completed: port=1, LBA=0xF9AC7C9&lt;br /&gt;
c0   [Sat Jun 23 21:15:19 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Jun 30 00:01:34 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jun 30 05:24:13 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FAA9E7&lt;br /&gt;
c0   [Sat Jun 30 14:49:39 2012]  WARNING   Sector repair completed: port=1, LBA=0x869931C&lt;br /&gt;
c0   [Sat Jun 30 21:31:05 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Jul  3 03:40:25 2012]  WARNING   Sector repair completed: port=1, LBA=0xD36C7F7&lt;br /&gt;
c0   [Fri Jul  6 02:50:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x3562470&lt;br /&gt;
c0   [Fri Jul  6 22:18:26 2012]  WARNING   Sector repair completed: port=1, LBA=0x3563173&lt;br /&gt;
c0   [Sat Jul  7 00:01:31 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul  7 00:50:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x76EE88&lt;br /&gt;
c0   [Sat Jul  7 00:50:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x76EE8F&lt;br /&gt;
c0   [Sat Jul  7 21:39:36 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sun Jul  8 02:51:05 2012]  WARNING   Sector repair completed: port=0, LBA=0x67759D&lt;br /&gt;
c0   [Sun Jul  8 02:53:55 2012]  WARNING   Sector repair completed: port=0, LBA=0x67759B&lt;br /&gt;
c0   [Tue Jul 10 16:17:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x15C8C695&lt;br /&gt;
c0   [Wed Jul 11 22:51:22 2012]  WARNING   Sector repair completed: port=1, LBA=0x355BBD0&lt;br /&gt;
c0   [Sat Jul 14 00:01:28 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 14 01:33:40 2012]  WARNING   Sector repair completed: port=1, LBA=0x1333BCF4&lt;br /&gt;
c0   [Sat Jul 14 03:36:23 2012]  WARNING   Sector repair completed: port=1, LBA=0x2174773&lt;br /&gt;
c0   [Sat Jul 14 11:26:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x7429AB7&lt;br /&gt;
c0   [Sat Jul 14 16:53:50 2012]  WARNING   Sector repair completed: port=1, LBA=0xA17EB3F&lt;br /&gt;
c0   [Sat Jul 14 21:19:25 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Jul 18 05:08:47 2012]  WARNING   Sector repair completed: port=1, LBA=0x17D62EDC&lt;br /&gt;
c0   [Wed Jul 18 05:14:15 2012]  WARNING   Sector repair completed: port=1, LBA=0x17D62EE1&lt;br /&gt;
c0   [Thu Jul 19 03:24:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x7733C3D&lt;br /&gt;
c0   [Thu Jul 19 03:25:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x773CEA5&lt;br /&gt;
c0   [Thu Jul 19 03:28:16 2012]  WARNING   Sector repair completed: port=0, LBA=0x7733C42&lt;br /&gt;
c0   [Thu Jul 19 03:28:41 2012]  WARNING   Sector repair completed: port=0, LBA=0x773CEAF&lt;br /&gt;
c0   [Sat Jul 21 00:01:26 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 21 03:07:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC6936&lt;br /&gt;
c0   [Sat Jul 21 03:07:52 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC6938&lt;br /&gt;
c0   [Sat Jul 21 03:08:11 2012]  WARNING   Sector repair completed: port=1, LBA=0x1CC693A&lt;br /&gt;
c0   [Sat Jul 21 16:43:56 2012]  WARNING   Sector repair completed: port=0, LBA=0xD04C914&lt;br /&gt;
c0   [Sat Jul 21 16:45:31 2012]  WARNING   Sector repair completed: port=1, LBA=0xD456973&lt;br /&gt;
c0   [Sat Jul 21 21:14:29 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Jul 25 03:37:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x1F8E6C43&lt;br /&gt;
c0   [Sat Jul 28 00:01:24 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Jul 28 01:45:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x11584AD&lt;br /&gt;
c0   [Sat Jul 28 18:54:25 2012]  WARNING   Sector repair completed: port=1, LBA=0x447C3E6C&lt;br /&gt;
c0   [Sat Jul 28 21:13:46 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug  1 03:20:11 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FEF&lt;br /&gt;
c0   [Fri Aug  3 00:50:03 2012]  WARNING   Sector repair completed: port=0, LBA=0xCED0ACA&lt;br /&gt;
c0   [Sat Aug  4 00:01:22 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug  4 00:52:51 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF3&lt;br /&gt;
c0   [Sat Aug  4 00:53:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF5&lt;br /&gt;
c0   [Sat Aug  4 00:53:33 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF7&lt;br /&gt;
c0   [Sat Aug  4 00:53:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FF9&lt;br /&gt;
c0   [Sat Aug  4 00:54:00 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFB&lt;br /&gt;
c0   [Sat Aug  4 00:54:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFD&lt;br /&gt;
c0   [Sat Aug  4 00:54:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x805FFF&lt;br /&gt;
c0   [Sat Aug  4 04:43:12 2012]  WARNING   Sector repair completed: port=1, LBA=0x16974289&lt;br /&gt;
c0   [Sat Aug  4 04:58:17 2012]  WARNING   Sector repair completed: port=1, LBA=0x1697428E&lt;br /&gt;
c0   [Sat Aug  4 20:54:53 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug  8 03:21:55 2012]  ERROR     Drive timeout detected: port=1&lt;br /&gt;
c0   [Wed Aug  8 15:31:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x1A366CD3&lt;br /&gt;
c0   [Sat Aug 11 00:01:21 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 11 20:40:51 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Aug 16 05:10:55 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C22593&lt;br /&gt;
c0   [Sat Aug 18 00:01:18 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 18 03:00:20 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C225A5&lt;br /&gt;
c0   [Sat Aug 18 03:43:00 2012]  WARNING   Sector repair completed: port=1, LBA=0x23EE91E&lt;br /&gt;
c0   [Sat Aug 18 03:43:23 2012]  WARNING   Sector repair completed: port=1, LBA=0x23EE920&lt;br /&gt;
c0   [Sat Aug 18 17:00:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x137D066A&lt;br /&gt;
c0   [Sat Aug 18 17:00:29 2012]  WARNING   Sector repair completed: port=1, LBA=0x137D066D&lt;br /&gt;
c0   [Sat Aug 18 21:13:01 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug 22 01:36:08 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A0F&lt;br /&gt;
c0   [Wed Aug 22 01:37:42 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A13&lt;br /&gt;
c0   [Fri Aug 24 04:01:36 2012]  WARNING   Sector repair completed: port=1, LBA=0x55C1A5DF&lt;br /&gt;
c0   [Fri Aug 24 05:02:06 2012]  WARNING   Sector repair completed: port=1, LBA=0xCE3378A&lt;br /&gt;
c0   [Sat Aug 25 00:01:17 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Aug 25 00:31:06 2012]  WARNING   Sector repair completed: port=1, LBA=0x50F65D&lt;br /&gt;
c0   [Sat Aug 25 00:39:52 2012]  WARNING   Sector repair completed: port=0, LBA=0x678FF4&lt;br /&gt;
c0   [Sat Aug 25 03:43:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A15&lt;br /&gt;
c0   [Sat Aug 25 03:43:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A19&lt;br /&gt;
c0   [Sat Aug 25 03:43:58 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A1B&lt;br /&gt;
c0   [Sat Aug 25 03:44:30 2012]  WARNING   Sector repair completed: port=0, LBA=0x2560A21&lt;br /&gt;
c0   [Sat Aug 25 20:58:14 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Wed Aug 29 04:57:15 2012]  WARNING   Sector repair completed: port=1, LBA=0xF3957EB&lt;br /&gt;
c0   [Sat Sep  1 00:01:15 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep  1 03:21:52 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC86&lt;br /&gt;
c0   [Sat Sep  1 03:22:15 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC88&lt;br /&gt;
c0   [Sat Sep  1 03:22:34 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC8A&lt;br /&gt;
c0   [Sat Sep  1 03:22:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x1DAFC8C&lt;br /&gt;
c0   [Sat Sep  1 17:17:22 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD1&lt;br /&gt;
c0   [Sat Sep  1 17:17:45 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD3&lt;br /&gt;
c0   [Sat Sep  1 17:18:04 2012]  WARNING   Sector repair completed: port=0, LBA=0xF917FD5&lt;br /&gt;
c0   [Sat Sep  1 21:36:56 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Thu Sep  6 00:07:30 2012]  WARNING   Sector repair completed: port=0, LBA=0xDA3C64B&lt;br /&gt;
c0   [Thu Sep  6 00:32:56 2012]  WARNING   Sector repair completed: port=1, LBA=0x6BBA816&lt;br /&gt;
c0   [Sat Sep  8 00:01:13 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep  8 00:09:56 2012]  WARNING   Sector repair completed: port=0, LBA=0xDEBC958&lt;br /&gt;
c0   [Sat Sep  8 04:38:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x38D254F&lt;br /&gt;
c0   [Sat Sep  8 20:44:50 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Sep 10 01:26:34 2012]  WARNING   Sector repair completed: port=1, LBA=0xFFD8D5E&lt;br /&gt;
c0   [Wed Sep 12 00:33:48 2012]  WARNING   Sector repair completed: port=1, LBA=0xE8DB928&lt;br /&gt;
c0   [Wed Sep 12 00:36:33 2012]  WARNING   Sector repair completed: port=1, LBA=0x6D49411&lt;br /&gt;
c0   [Fri Sep 14 01:59:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x1467F1C&lt;br /&gt;
c0   [Fri Sep 14 02:08:27 2012]  WARNING   Sector repair completed: port=1, LBA=0x14C8ABD&lt;br /&gt;
c0   [Fri Sep 14 03:54:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x1580C915&lt;br /&gt;
c0   [Sat Sep 15 00:01:11 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 15 02:38:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178973&lt;br /&gt;
c0   [Sat Sep 15 02:59:02 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178975&lt;br /&gt;
c0   [Sat Sep 15 04:47:08 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA0356&lt;br /&gt;
c0   [Sat Sep 15 04:47:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA0359&lt;br /&gt;
c0   [Sat Sep 15 10:41:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x6DFD1EC&lt;br /&gt;
c0   [Sat Sep 15 13:25:23 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD100&lt;br /&gt;
c0   [Sat Sep 15 13:25:31 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD104&lt;br /&gt;
c0   [Sat Sep 15 13:25:54 2012]  WARNING   Sector repair completed: port=0, LBA=0x7CBD106&lt;br /&gt;
c0   [Sat Sep 15 17:10:50 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C178977&lt;br /&gt;
c0   [Sat Sep 15 20:59:57 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Sep 18 01:17:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x803B05B&lt;br /&gt;
c0   [Sat Sep 22 00:01:10 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 22 20:54:31 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Sep 25 01:56:47 2012]  WARNING   Sector repair completed: port=0, LBA=0x26E3909&lt;br /&gt;
c0   [Sat Sep 29 00:01:08 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Sep 29 02:04:14 2012]  WARNING   Sector repair completed: port=0, LBA=0x146AC03&lt;br /&gt;
c0   [Sat Sep 29 10:58:39 2012]  WARNING   Sector repair completed: port=0, LBA=0x6D4EB0E&lt;br /&gt;
c0   [Sat Sep 29 10:59:02 2012]  WARNING   Sector repair completed: port=0, LBA=0x6D4EB14&lt;br /&gt;
c0   [Sat Sep 29 11:22:44 2012]  WARNING   Sector repair completed: port=0, LBA=0x6F79623&lt;br /&gt;
c0   [Sat Sep 29 13:50:48 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D65E&lt;br /&gt;
c0   [Sat Sep 29 13:51:11 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D661&lt;br /&gt;
c0   [Sat Sep 29 13:51:30 2012]  WARNING   Sector repair completed: port=1, LBA=0x7D1D663&lt;br /&gt;
c0   [Sat Sep 29 20:57:34 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Oct  1 04:47:24 2012]  WARNING   Sector repair completed: port=0, LBA=0xC5BC6F2&lt;br /&gt;
c0   [Tue Oct  2 02:00:27 2012]  WARNING   Sector repair completed: port=0, LBA=0x1547667&lt;br /&gt;
c0   [Tue Oct  2 02:01:56 2012]  WARNING   Sector repair completed: port=0, LBA=0x154766F&lt;br /&gt;
c0   [Tue Oct  2 05:02:31 2012]  WARNING   Sector repair completed: port=1, LBA=0xD67D054&lt;br /&gt;
c0   [Tue Oct  2 05:04:14 2012]  WARNING   Sector repair completed: port=1, LBA=0xD67D056&lt;br /&gt;
c0   [Wed Oct  3 01:22:12 2012]  WARNING   Sector repair completed: port=1, LBA=0x12AAF8CA&lt;br /&gt;
c0   [Thu Oct  4 04:29:22 2012]  WARNING   Sector repair completed: port=0, LBA=0x13E6F992&lt;br /&gt;
c0   [Thu Oct  4 05:10:51 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C252A4&lt;br /&gt;
c0   [Sat Oct  6 00:01:07 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct  6 19:41:18 2012]  WARNING   Sector repair completed: port=1, LBA=0x5A5C3AE8&lt;br /&gt;
c0   [Sat Oct  6 21:01:05 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Mon Oct  8 00:32:06 2012]  WARNING   Sector repair completed: port=0, LBA=0x6C60D3E&lt;br /&gt;
c0   [Tue Oct  9 03:51:03 2012]  WARNING   Sector repair completed: port=1, LBA=0x89B5EC9&lt;br /&gt;
c0   [Thu Oct 11 04:21:17 2012]  WARNING   Sector repair completed: port=1, LBA=0x13F85833&lt;br /&gt;
c0   [Sat Oct 13 00:01:05 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 13 05:12:40 2012]  WARNING   Sector repair completed: port=0, LBA=0x3FA5134&lt;br /&gt;
c0   [Sat Oct 13 21:08:35 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Oct 16 03:53:50 2012]  WARNING   Sector repair completed: port=1, LBA=0x148AA1BD&lt;br /&gt;
c0   [Thu Oct 18 03:20:30 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C8DABCB&lt;br /&gt;
c0   [Thu Oct 18 04:52:50 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879057&lt;br /&gt;
c0   [Sat Oct 20 00:01:04 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 20 02:19:25 2012]  WARNING   Sector repair completed: port=1, LBA=0x174B012&lt;br /&gt;
c0   [Sat Oct 20 03:41:38 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D93B&lt;br /&gt;
c0   [Sat Oct 20 03:42:01 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D93D&lt;br /&gt;
c0   [Sat Oct 20 03:42:40 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D940&lt;br /&gt;
c0   [Sat Oct 20 03:42:59 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D942&lt;br /&gt;
c0   [Sat Oct 20 03:43:12 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D944&lt;br /&gt;
c0   [Sat Oct 20 03:43:26 2012]  WARNING   Sector repair completed: port=0, LBA=0x256D948&lt;br /&gt;
c0   [Sat Oct 20 16:37:52 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879060&lt;br /&gt;
c0   [Sat Oct 20 16:38:15 2012]  WARNING   Sector repair completed: port=0, LBA=0xE879062&lt;br /&gt;
c0   [Sat Oct 20 21:00:18 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Sat Oct 20 23:49:01 2012]  WARNING   Sector repair completed: port=1, LBA=0x4473E908&lt;br /&gt;
c0   [Sun Oct 21 03:42:26 2012]  WARNING   Sector repair completed: port=1, LBA=0x175BADD5&lt;br /&gt;
c0   [Tue Oct 23 01:09:04 2012]  WARNING   Sector repair completed: port=1, LBA=0x6E524860&lt;br /&gt;
c0   [Fri Oct 26 03:21:25 2012]  WARNING   Sector repair completed: port=0, LBA=0x802C61&lt;br /&gt;
c0   [Fri Oct 26 04:22:21 2012]  WARNING   Sector repair completed: port=0, LBA=0x176353CD&lt;br /&gt;
c0   [Sat Oct 27 00:01:03 2012]  INFO      Verify started: unit=0&lt;br /&gt;
c0   [Sat Oct 27 00:49:35 2012]  WARNING   Sector repair completed: port=0, LBA=0x802C65&lt;br /&gt;
c0   [Sat Oct 27 17:02:24 2012]  WARNING   Sector repair completed: port=1, LBA=0xC1FF26D&lt;br /&gt;
c0   [Sat Oct 27 17:09:06 2012]  WARNING   Sector repair completed: port=0, LBA=0xDF621AD&lt;br /&gt;
c0   [Sat Oct 27 21:30:57 2012]  INFO      Verify completed: unit=0&lt;br /&gt;
c0   [Tue Oct 30 00:20:46 2012]  WARNING   Sector repair completed: port=0, LBA=0xE9FE2AB&lt;br /&gt;
c0   [Wed Oct 31 02:02:03 2012]  WARNING   Sector repair completed: port=0, LBA=0x1460C25&lt;br /&gt;
c0   [Wed Oct 31 02:04:05 2012]  WARNING   Sector repair completed: port=0, LBA=0x1460C28&lt;br /&gt;
c0   [Thu Nov  1 00:48:34 2012]  WARNING   Sector repair completed: port=1, LBA=0xA7C92BE&lt;br /&gt;
c0   [Thu Nov  1 05:04:45 2012]  WARNING   Sector repair completed: port=0, LBA=0x1C252C2&lt;br /&gt;
&lt;br /&gt;
[root@backup1 /data/deprecated]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for failed drives and errors. From the log above, drives 0 and 1 are both accumulating sector repairs and should probably be replaced (drive 1 is even showing as having problems), yet the RAID array still reports healthy, amazingly. You can also see the automatic weekly verifies.&lt;br /&gt;
&lt;br /&gt;
Note: when rebuilding a degraded mirror, the CLI shows no rebuild progress (%RCmpl stays at 0):&lt;br /&gt;
&amp;lt;pre&amp;gt;Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy&lt;br /&gt;
------------------------------------------------------------------------------&lt;br /&gt;
u0    RAID-5    REBUILDING     0       -       64K     4656.56   OFF    ON&amp;lt;/pre&amp;gt;&lt;br /&gt;
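The unit line can also be checked mechanically instead of by eye. A minimal sketch, with the tw_cli-style sample table above inlined in a heredoc for illustration (on a live host you would pipe the controller CLI output in instead):&lt;br /&gt;

```shell
# Flag any unit whose Status column is not OK. The sample table is
# inlined here; in practice, pipe real controller output through awk.
status=$(awk '/^u[0-9]/ && $3 != "OK" { print $1, $3 }' <<'EOF'
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------------
u0    RAID-5    REBUILDING     0       -       64K     4656.56   OFF    ON
EOF
)
echo "$status"    # u0 REBUILDING
```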
&lt;br /&gt;
&lt;br /&gt;
=== areca ===&lt;br /&gt;
&lt;br /&gt;
We are using an areca controller on backup3. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@newbackup3 ~]# sh /root/verify.sh&lt;br /&gt;
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State&lt;br /&gt;
===============================================================================&lt;br /&gt;
  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(19.7%)&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
 #  Name             Disks TotalCap  FreeCap DiskChannels       State&lt;br /&gt;
===============================================================================&lt;br /&gt;
 1  Raid Set # 00        6 6000.0GB    0.0GB 123456             Checking&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
Date-Time            Device           Event Type            Elapsed Time Errors&lt;br /&gt;
===============================================================================&lt;br /&gt;
2012-12-05 20:40:58  ARC-1160-VOL#00  Start Checking&lt;br /&gt;
2012-12-01 05:06:04  ARC-1160-VOL#00  Complete Init         027:30:45&lt;br /&gt;
2012-11-30 01:35:19  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-06 01:34:52  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-30 01:33:36  ARC-1160-VOL#00  Stop Initialization   000:31:48&lt;br /&gt;
2012-11-30 01:01:47  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-06 00:58:13  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-30 00:57:26  ARC-1160-VOL#00  Stop Initialization   000:57:07&lt;br /&gt;
2012-11-30 00:00:19  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 23:56:48  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:52:58  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:50:14  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 23:43:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 23:10:07  ARC-1160-VOL#00  Stop Initialization   000:00:56&lt;br /&gt;
2012-11-29 23:09:11  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 23:08:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 23:08:10  ARC-1160-VOL#00  Stop Initialization   000:20:41&lt;br /&gt;
2012-11-29 22:47:29  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 22:46:59  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:45:55  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:44:53  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:42:06  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 22:40:50  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 22:40:04  ARC-1160-VOL#00  Stop Initialization   000:24:25&lt;br /&gt;
2012-11-29 22:15:38  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 22:15:11  000:000001215B00 Restart Init LBA Point&lt;br /&gt;
2026-08-05 22:15:10  H/W MONITOR      Raid Powered On&lt;br /&gt;
2012-11-29 21:56:38  ARC-1160-VOL#00  Start Initialize&lt;br /&gt;
2026-08-05 21:56:12  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 21:56:04  IDE Channel #03  Device Inserted&lt;br /&gt;
2012-11-29 21:55:13  IDE Channel #04  Device Inserted&lt;br /&gt;
2012-11-29 21:55:03  IDE Channel #02  Device Inserted&lt;br /&gt;
2026-08-05 21:53:09  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:51:46  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:49:56  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:48:29  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:46:29  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:44:49  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:43:01  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:36:25  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:31:18  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:30:08  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:08:40  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:06:11  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:05:14  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:03:58  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 20:00:56  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:57:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:56:15  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 19:55:05  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 17:24:36  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 17:22:43  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:50:42  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:47:33  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:43:57  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:18:52  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:17:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:13:30  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:10:26  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 04:09:23  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:08:09  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:07:12  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:05:51  H/W MONITOR      Raid Powered On&lt;br /&gt;
2026-08-05 00:04:27  H/W MONITOR      Raid Powered On&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
press enter when ready to run verify&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look for failed drives and errors. &lt;br /&gt;
&lt;br /&gt;
When it proceeds to verifying, you can confirm with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@newbackup3 ~]# cli64 vsf info&lt;br /&gt;
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State&lt;br /&gt;
===============================================================================&lt;br /&gt;
  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(22.5%)&lt;br /&gt;
===============================================================================&lt;br /&gt;
GuiErrMsg&amp;lt;0x00&amp;gt;: Success.&lt;br /&gt;
[root@newbackup3 ~]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
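The same kind of mechanical check works for the areca output; this sketch pulls the State column (here the verify percentage) out of an inlined sample line from the &amp;lt;tt&amp;gt;cli64 vsf info&amp;lt;/tt&amp;gt; output above:&lt;br /&gt;

```shell
# Extract the volume State (last field) from cli64-style output.
# Sample line inlined for illustration; pipe real output in practice.
state=$(awk '/ARC-1160-VOL/ { print $NF }' <<'EOF'
  1 ARC-1160-VOL#00  Raid Set # 00   Raid5   5000.0GB 00/00/00   Checking(22.5%)
EOF
)
echo "$state"    # Checking(22.5%)
```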
&lt;br /&gt;
== Update OS list ==&lt;br /&gt;
&lt;br /&gt;
# check for any new VZ templates we want to offer: &amp;lt;tt&amp;gt;vzup2date -z&amp;lt;/tt&amp;gt;&lt;br /&gt;
# see if there are any OSes we want to include in our colo install list; update two places: &amp;lt;tt&amp;gt;signup/html/colo_quote.html&amp;lt;/tt&amp;gt; &amp;amp; &amp;lt;tt&amp;gt;signup/html/step1.html&amp;lt;/tt&amp;gt;&lt;br /&gt;
# update the mgmt database (ref_templates table, ref_systems table).&lt;br /&gt;
&lt;br /&gt;
= Infrequent tasks =&lt;br /&gt;
&lt;br /&gt;
== Free up space on gateway ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;newgateway /var/spool# cd clientmqueue/&lt;br /&gt;
newgateway /var/spool/clientmqueue# sh&lt;br /&gt;
# for f in `ls`; do rm $f; done&lt;br /&gt;
exit&amp;lt;/pre&amp;gt;&lt;br /&gt;
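An alternative to the per-file rm loop above: &amp;lt;tt&amp;gt;find -delete&amp;lt;/tt&amp;gt; clears very large queues and odd filenames in one pass. Sketched against a scratch directory standing in for the real &amp;lt;tt&amp;gt;clientmqueue&amp;lt;/tt&amp;gt;:&lt;br /&gt;

```shell
# Demonstrate find -delete on a throwaway directory standing in for
# /var/spool/clientmqueue (never test destructive commands on the real one).
demo=$(mktemp -d)
touch "$demo/dfq1ABC" "$demo/qfq1ABC"
find "$demo" -type f -delete
ls -A "$demo"    # no output: directory is empty
```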
&lt;br /&gt;
== Free up space on mail ==&lt;br /&gt;
&lt;br /&gt;
You can clear out root mail:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /var/log# ll -h /var/mail/root&lt;br /&gt;
-rw-------  1 root  mail    543K Dec 19 13:05 /var/mail/root&lt;br /&gt;
mail /var/log# rm /var/mail/root&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
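Instead of removing /var/mail/root (which delivery then has to recreate), you can truncate it in place and keep its ownership and permissions. Sketched on a scratch file rather than the live spool:&lt;br /&gt;

```shell
# Truncate a stand-in spool file; on the live host the target
# would be /var/mail/root.
spool=$(mktemp)
echo "old root mail" > "$spool"
: > "$spool"           # truncate to zero bytes, file stays in place
wc -c < "$spool"       # 0
```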
&lt;br /&gt;
Or you can archive mail logs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail /var/log# ls -l htt*&lt;br /&gt;
-rw-r--r--  1 root  wheel  297436931 Dec 19 13:26 httpd-access.log&lt;br /&gt;
-rw-r--r--  1 root  wheel    9824324 Jul  4 11:34 httpd-access.log.old.0.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    6884137 Mar 17  2012 httpd-access.log.old.1.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   18557444 Dec  3  2009 httpd-access.log.old.10.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14740263 Jan  9  2007 httpd-access.log.old.11.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14209465 Nov 28  2007 httpd-access.log.old.12.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   16874396 Feb 19  2012 httpd-access.log.old.3.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   14554859 Jul 22  2011 httpd-access.log.old.4.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10513227 Feb 18  2011 httpd-access.log.old.5.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    7201946 Oct 29  2010 httpd-access.log.old.6.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10062537 May  6  2010 httpd-access.log.old.7.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   10157042 Aug 12  2010 httpd-access.log.old.8.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   11909534 Mar  4  2010 httpd-access.log.old.9.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel   59030930 Dec 19 13:01 httpd-error.log&lt;br /&gt;
-rw-r--r--  1 root  wheel    3413134 Mar  4  2010 httpd-error.log.0.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel     795515 May  1  2007 httpd-error.log.1.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    1142153 Nov 30  2007 httpd-error.log.2.gz&lt;br /&gt;
-rw-r--r--  1 root  wheel    2325801 Feb 18  2011 httpd-error.log.gz&lt;br /&gt;
&lt;br /&gt;
mail /var/log# sh&lt;br /&gt;
# for f in 12 11 10 9 8 7 6 5 4 3 2 1 0; do g=`echo $f+1|bc`; mv httpd-access.log.old.$f.gz httpd-access.log.old.$g.gz; done&lt;br /&gt;
# mv httpd-access.log httpd-access.log.old.0&lt;br /&gt;
# touch httpd-access.log&lt;br /&gt;
# apachectl restart&lt;br /&gt;
# gzip httpd-access.log.old.0&lt;br /&gt;
&lt;br /&gt;
# for f in 2 1 0; do g=`echo $f+1|bc`; mv httpd-error.log.$f.gz httpd-error.log.$g.gz; done&lt;br /&gt;
# mv httpd-error.log httpd-error.log.0&lt;br /&gt;
# touch httpd-error.log&lt;br /&gt;
# apachectl restart&lt;br /&gt;
# gzip httpd-error.log.0&lt;br /&gt;
# exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
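The manual shifting above can be captured in a small helper function. This is only a sketch of the same idea, demonstrated in a scratch directory; on the live server you would run it in /var/log and do the apachectl restart between the mv and the gzip, as shown above:&lt;br /&gt;

```shell
# rotate <logname>: shift numbered .gz archives up by one, then
# archive the live log as .old.0.gz. Sketch only; error handling omitted.
rotate() {
    base=$1
    # shift the highest-numbered archives first so nothing is overwritten
    for n in $(ls "$base".old.*.gz 2>/dev/null |
               sed 's/.*\.old\.\([0-9]*\)\.gz/\1/' | sort -rn); do
        mv "$base.old.$n.gz" "$base.old.$((n+1)).gz"
    done
    mv "$base" "$base.old.0"
    touch "$base"
    gzip "$base.old.0"
}

dir=$(mktemp -d) && cd "$dir"
echo one > httpd-access.log
rotate httpd-access.log
echo two > httpd-access.log    # simulate new traffic
rotate httpd-access.log
ls    # httpd-access.log  httpd-access.log.old.0.gz  httpd-access.log.old.1.gz
```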
&lt;br /&gt;
== Free up space on bwdb2 ==&lt;br /&gt;
&lt;br /&gt;
You can either remove items from &amp;lt;tt&amp;gt;/usr/home/archive&amp;lt;/tt&amp;gt; or scp them to backup3:/data/bwdb2/archive.&lt;br /&gt;
&lt;br /&gt;
== Free up space on backup1 ==&lt;br /&gt;
&lt;br /&gt;
backup1 is our primary customer backup system. As usage grows, it needs to be purged of old files regularly. The easiest way to do this is to remove deprecated files, which mostly consist of cancelled customers&#039; archives and temporary dump/storage files (created during dumps/restores). Our standard policy is to keep cancelled customers&#039; files for 6 months and then remove them. (As far as customers know, their data is purged immediately, but we hang onto it just in case; when a server is cancelled for non-payment, this also makes it easy to restore the system.) To find files to remove:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 ~]# cd /data/deprecated/&lt;br /&gt;
[root@backup1 /data/deprecated]# ls&lt;br /&gt;
2101-migrated-20120317.tgz                old-683-cxld-20121021.tgz&lt;br /&gt;
69.55.230.2-wwwbackup                     old-744-cxld-20120708.tgz&lt;br /&gt;
991-DONTDELETE.tgz                        old-809-cxld-20120609.tgz&lt;br /&gt;
archive-col02050-mdfile-cxld-20120409.gz  old-854-cxld-20120621.tgz&lt;br /&gt;
col01371.tgz                              old-931-cxld-20060513.tgz&lt;br /&gt;
deleteme_ubuntu-10.10-x86_20111205        old-col00123-mdfile-noarchive-20120417.gz&lt;br /&gt;
jail10_old                                old-col00147-vnfile-cxld-20120828.gz&lt;br /&gt;
jail14_rsync_old                          old-col00419-dump-cxld-20120224.gz&lt;br /&gt;
jail15_old                                old-col01098-vnfile-cxld-20120827.gz&lt;br /&gt;
jail3_old                                 old-col01278-dump-cxld-20120822&lt;br /&gt;
jail4_old                                 old-col01517-dump-cxld-20120828&lt;br /&gt;
jail5_old                                 old-col01669-dump-cxld-20120203.gz&lt;br /&gt;
old-1009-cxld-20120608.tgz                old-col01687-dump-cxld-20120909&lt;br /&gt;
old-1012-cxld-20120411.tgz                old-col01790-dump-cxld-20120828&lt;br /&gt;
old-1052-cxld-20120721.tgz                old-col01812-dump-cxld-20120820&lt;br /&gt;
old-10631-cxld-20120622.tgz               old-col01938-mdfile-cxld-20120619.gz&lt;br /&gt;
old-10632-cxld-20120622.tgz               old-col02095-mdfile-noarchive-20120523.gz&lt;br /&gt;
old-10633-cxld-20120622.tgz               olddebian-3.0-v15-20110610.tgz&lt;br /&gt;
old-1236-cxld-20120621.tgz                oldmod_frontpage-deb30-v15-20110610.tgz&lt;br /&gt;
old-1381-cxld-20120404.tgz                oldmod_perl-deb30-v15-20110610.tgz&lt;br /&gt;
old-1422-cxld-20120721.tgz                oldmod_ssl-deb30-v15-20110610.tgz&lt;br /&gt;
old-14681-cxld-20120619.tgz               oldmysql-deb30-v15-20110610.tgz&lt;br /&gt;
old-1544-cxld-20120626.tgz                oldproftpd-deb30-v15-20110610.tgz&lt;br /&gt;
old-18351-cxld-20120605.tgz               old_virt14&lt;br /&gt;
old-1853-cxld-20120910.tgz                old_virt18&lt;br /&gt;
old-1963-cxld-20120206.tgz                oldwebmin-deb30-v15-20110610.tgz&lt;br /&gt;
old-1967-cxld-20120605.tgz                suse.virt11.20120421.tgz&lt;br /&gt;
old-1981-noarchive-20120729.tgz           virt11&lt;br /&gt;
old-2030-migrated-noarchive-20120727.tgz  virt12_old&lt;br /&gt;
old-2037-cxld-20120716.tgz                virt13_old&lt;br /&gt;
old-2065-cxld-20120727.tgz                virt16_old&lt;br /&gt;
old-2068-cxld-20120424.tgz                virt4_old&lt;br /&gt;
old-2085-cxld-20120531.tgz                virt5_old&lt;br /&gt;
old-364-cxld-20120904.tgz                 virt6_old&lt;br /&gt;
old-446-cxld-20120512.tgz                 virt7_old&lt;br /&gt;
old-613-cxld-20120601.tgz                 virt8_old&lt;br /&gt;
[root@backup1 /data/deprecated]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
virtX_old and jailX_old are permanently archived, so ignore those, along with anything marked not to delete or anything that otherwise looks suspicious. Likewise, it is a good idea to hang onto the oldTEMPLATE.gz archives for as long as we can.&lt;br /&gt;
Most of the files we want to delete are dated with when they were deprecated, which makes this easy. For example, to remove files from 6 months ago (running this in October):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 /data/deprecated]# ls old*201204*&lt;br /&gt;
old-1012-cxld-20120411.tgz  old-2068-cxld-20120424.tgz&lt;br /&gt;
old-1381-cxld-20120404.tgz  old-col00123-mdfile-noarchive-20120417.gz&lt;br /&gt;
[root@backup1 /data/deprecated]# rm old*201204*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
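To avoid typing the pattern by hand, the cutoff month can be computed. This sketch assumes GNU date (the -d flag; BSD date spells it differently) and previews the match before deleting anything:&lt;br /&gt;

```shell
# Build the "6 months ago" glob instead of writing it manually.
# GNU date is assumed; on FreeBSD use: date -v-6m +%Y%m
cutoff=$(date -d '6 months ago' +%Y%m)
echo "old*${cutoff}*"
# ls old*"$cutoff"*     # always preview the match first
# rm old*"$cutoff"*
```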
&lt;br /&gt;
Every few months you will also want to remove some of the snapshot archives for mail. We typically save the 1st, 10th, and 20th of each month. To prune, set aside the dates you want to save, remove a month at a time, then restore the set-aside dates. Here&#039;s how that works:&lt;br /&gt;
&amp;lt;pre&amp;gt;[root@backup1 /data/www/daily]# ls&lt;br /&gt;
05                     08-10-11  10-04-10  11-10-10  12-07-29  12-09-21  12-11-14&lt;br /&gt;
06                     08-10-21  10-04-20  11-10-20  12-07-30  12-09-22  12-11-15&lt;br /&gt;
06-06-01-usr-home.tgz  08-11-01  10-05-01  11-11-01  12-07-31  12-09-23  12-11-16&lt;br /&gt;
06-07-01-usr-home.tgz  08-11-10  10-05-11  11-11-10  12-08-01  12-09-24  12-11-17&lt;br /&gt;
06-08-01-usr-home.tgz  08-11-20  10-05-20  11-11-20  12-08-02  12-09-25  12-11-18&lt;br /&gt;
06-09-01-usr-home.tgz  08-12-01  10-06-01  11-12-01  12-08-03  12-09-26  12-11-19&lt;br /&gt;
06-11-10               08-12-10  10-06-10  11-12-10  12-08-04  12-09-27  12-11-20&lt;br /&gt;
06-12-21               08-12-20  10-06-20  11-12-20  12-08-05  12-09-28  12-11-21&lt;br /&gt;
07-01-10               09-01-01  10-07-01  12-01-01  12-08-06  12-09-29  12-11-22&lt;br /&gt;
07-01-20               09-01-10  10-07-10  12-01-10  12-08-07  12-09-30  12-11-23&lt;br /&gt;
07-02-10               09-01-20  10-07-20  12-01-20  12-08-08  12-10-01  12-11-24&lt;br /&gt;
07-02-20               09-02-01  10-08-01  12-02-01  12-08-09  12-10-02  12-11-25&lt;br /&gt;
07-03-01               09-02-10  10-08-10  12-02-10  12-08-10  12-10-03  12-11-26&lt;br /&gt;
07-03-20               09-02-20  10-08-20  12-02-20  12-08-11  12-10-04  12-11-27&lt;br /&gt;
07-04-01               09-03-01  10-09-01  12-03-01  12-08-12  12-10-05  12-11-28&lt;br /&gt;
07-04-10               09-03-10  10-09-10  12-03-10  12-08-13  12-10-06  12-11-29&lt;br /&gt;
07-04-20               09-03-20  10-09-20  12-03-20  12-08-14  12-10-07  12-11-30&lt;br /&gt;
07-05-01               09-04-01  10-10-01  12-04-01  12-08-15  12-10-08  12-12-01&lt;br /&gt;
07-05-10               09-04-10  10-10-10  12-04-10  12-08-16  12-10-09  12-12-02&lt;br /&gt;
07-05-20               09-04-20  10-10-20  12-04-20  12-08-17  12-10-10  12-12-03&lt;br /&gt;
07-06-01               09-05-01  10-11-01  12-05-01  12-08-18  12-10-11  12-12-04&lt;br /&gt;
07-06-10               09-05-10  10-11-10  12-05-10  12-08-19  12-10-12  12-12-05&lt;br /&gt;
07-06-20               09-05-20  10-11-20  12-05-20  12-08-20  12-10-13  12-12-06&lt;br /&gt;
07-07-20               09-06-01  10-12-01  12-06-01  12-08-21  12-10-14  12-12-07&lt;br /&gt;
07-08-10               09-06-10  10-12-10  12-06-10  12-08-22  12-10-15  12-12-08&lt;br /&gt;
07-08-20               09-06-20  10-12-20  12-06-20  12-08-23  12-10-16  12-12-09&lt;br /&gt;
07-09-01               09-07-01  11-01-01  12-07-01  12-08-24  12-10-17  12-12-10&lt;br /&gt;
07-10-01               09-07-10  11-01-10  12-07-02  12-08-25  12-10-18  12-12-11&lt;br /&gt;
07-10-10               09-07-20  11-01-21  12-07-03  12-08-26  12-10-19  12-12-12&lt;br /&gt;
07-10-20               09-08-01  11-02-01  12-07-04  12-08-27  12-10-20  12-12-13&lt;br /&gt;
07-12-01               09-08-10  11-02-10  12-07-05  12-08-28  12-10-21  12-12-14&lt;br /&gt;
07-12-10               09-08-20  11-02-20  12-07-06  12-08-29  12-10-22  12-12-15&lt;br /&gt;
08-01-01               09-09-01  11-03-01  12-07-07  12-08-30  12-10-23  12-12-16&lt;br /&gt;
08-01-20               09-09-10  11-03-10  12-07-08  12-08-31  12-10-24  12-12-17&lt;br /&gt;
08-02-20               09-09-20  11-03-20  12-07-09  12-09-01  12-10-25  12-12-18&lt;br /&gt;
08-03-01               09-10-01  11-04-01  12-07-10  12-09-02  12-10-26  12-12-19&lt;br /&gt;
08-03-10               09-10-10  11-04-10  12-07-11  12-09-03  12-10-27  12-12-20&lt;br /&gt;
08-03-20               09-10-20  11-04-20  12-07-12  12-09-04  12-10-28  12-12-21&lt;br /&gt;
08-04-01               09-11-01  11-05-01  12-07-13  12-09-05  12-10-29  12-12-22&lt;br /&gt;
08-04-20               09-11-10  11-05-10  12-07-14  12-09-06  12-10-30  12-12-23&lt;br /&gt;
08-05-01               09-11-20  11-05-20  12-07-15  12-09-07  12-10-31  12-12-24&lt;br /&gt;
08-05-10               09-12-01  11-06-01  12-07-16  12-09-08  12-11-01  12-12-25&lt;br /&gt;
08-06-10               09-12-10  11-06-10  12-07-17  12-09-09  12-11-02  12-12-26&lt;br /&gt;
08-06-20               09-12-20  11-06-20  12-07-18  12-09-10  12-11-03  12-12-27&lt;br /&gt;
08-07-02               10-01-01  11-07-01  12-07-19  12-09-11  12-11-04  12-12-28&lt;br /&gt;
08-07-10               10-01-10  11-07-10  12-07-20  12-09-12  12-11-05  2008-10-23&lt;br /&gt;
08-07-20               10-01-20  11-07-20  12-07-21  12-09-13  12-11-06  bb.tgz&lt;br /&gt;
08-08-01               10-02-01  11-08-01  12-07-22  12-09-14  12-11-07  boot&lt;br /&gt;
08-08-10               10-02-10  11-08-10  12-07-23  12-09-15  12-11-08  current&lt;br /&gt;
08-08-21               10-02-20  11-08-20  12-07-24  12-09-16  12-11-09  hold&lt;br /&gt;
08-09-01               10-03-01  11-09-01  12-07-25  12-09-17  12-11-10&lt;br /&gt;
08-09-10               10-03-10  11-09-10  12-07-26  12-09-18  12-11-11&lt;br /&gt;
08-09-21               10-03-20  11-09-20  12-07-27  12-09-19  12-11-12&lt;br /&gt;
08-10-01               10-04-01  11-10-01  12-07-28  12-09-20  12-11-13&lt;br /&gt;
[root@backup1 /data/www/daily]#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So we see that everything up to July 2012 has been pruned. To prune July 2012 we do the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;mv 12-07-01 hold&lt;br /&gt;
mv 12-07-10 hold&lt;br /&gt;
mv 12-07-20 hold&lt;br /&gt;
rm -fr 12-07*&lt;br /&gt;
mv hold/* .&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
	<entry>
		<id>https://wiki.jcihosting.com/index.php?title=NetHere&amp;diff=2199</id>
		<title>NetHere</title>
		<link rel="alternate" type="text/html" href="https://wiki.jcihosting.com/index.php?title=NetHere&amp;diff=2199"/>
		<updated>2020-02-21T20:41:53Z</updated>

		<summary type="html">&lt;p&gt;75.80.128.147: /* NetHere */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= NetHere =&lt;br /&gt;
&lt;br /&gt;
URL:  http://toolbox.nethere.com&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Access to servers is restricted to admin-1.nethere.net.  Access to admin-1.nethere.net is restricted to our office and nat.johncompanies.com.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
JCI Hosting acquired the web-hosting customers of NetHere. (Previously, NetHere had acquired customers from Simply Internet and Z-net.)&lt;br /&gt;
Customer billing/info is being imported from NetHere into the JCI database.&lt;br /&gt;
URL:  https://secure.johncompanies.com/mgmt/index.html&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
NetHere has several tools for management. Access to these is restricted to&lt;br /&gt;
the office and a few admins&#039; homes.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  cacti [http://cacti.nethere.net]&lt;br /&gt;
  nagios [http://nagios.nethere.net]&lt;br /&gt;
  toolbox [http://toolbox.nethere.net]&lt;br /&gt;
  helpdesk [http://helpdesk.nethere.net]&lt;br /&gt;
  support FAQ [http://www.nethere.net/website/support]&lt;br /&gt;
  knowledge base [http://support.nethere.net/kb]&lt;br /&gt;
  phpmyadmin [https://phpmyadmin.nethere.net]&lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
== Virtual Hosts ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
66.63.129.2 - New HP DL360&lt;br /&gt;
fpweb-2.nethere.net&lt;br /&gt;
unixweb-6.nethere.net&lt;br /&gt;
mta-db.mail.nethere.net&lt;br /&gt;
scan-1.mail.nethere.net&lt;br /&gt;
&lt;br /&gt;
66.63.129.4- vmware9.eng&lt;br /&gt;
mysqldb-1.webhost.nethere.net&lt;br /&gt;
&lt;br /&gt;
66.63.129.91&lt;br /&gt;
lists-1.nethere.net&lt;br /&gt;
sb-2.nethere.net&lt;br /&gt;
mx-1.nethere.net- spooling server&lt;br /&gt;
mailx-1.nethere.net- web server outbound mail server&lt;br /&gt;
&lt;br /&gt;
66.63.129.101 (c.host.nethere.net)&lt;br /&gt;
ahi.nethere.net- ldap&lt;br /&gt;
mailbox-4&lt;br /&gt;
mta-2.mail.nethere.net&lt;br /&gt;
relay-2.mail.nethere.net&lt;br /&gt;
unixweb-8.nethere.net&lt;br /&gt;
&lt;br /&gt;
66.63.129.102  (vm2.eng.nethere.net)&lt;br /&gt;
koi.nethere.net- off- realserver&lt;br /&gt;
prov-1.nethere.net- cp.nethere.net, cp.znet.net, cp.simplyweb.net&lt;br /&gt;
scribe.nethere.net- prov.nethere.net, scribe.nethere.net&lt;br /&gt;
tetra.nethere.net- nethere.com, znet.com, simplyweb.com&lt;br /&gt;
web-2.inboxhq.net- helpdesk&lt;br /&gt;
webmail-1.nethere.net&lt;br /&gt;
&lt;br /&gt;
66.63.129.103&lt;br /&gt;
andromeda.nethere.net (ns2.nethere.net)&lt;br /&gt;
nscache-2.nethere.net&lt;br /&gt;
nsrbl-2.nethere.net&lt;br /&gt;
mailbox-2.nethere.net&lt;br /&gt;
scan-2.mail.nethere.net&lt;br /&gt;
&lt;br /&gt;
66.63.129.104&lt;br /&gt;
eel.nethere.net- ldap&lt;br /&gt;
mta-1.mail.nethere.net&lt;br /&gt;
nscache-1.nethere.net&lt;br /&gt;
nsrbl-1.nethere.net&lt;br /&gt;
phoenix.nethere.net (ns1.nethere.net)&lt;br /&gt;
relay-1.mail.nethere.net&lt;br /&gt;
unixweb-2.nethere.net&lt;br /&gt;
&lt;br /&gt;
66.63.129.105&lt;br /&gt;
home-1.nethere.net&lt;br /&gt;
ntweb-11.nethere.net&lt;br /&gt;
ntweb-6.nethere.net&lt;br /&gt;
cart32.nethere.net&lt;br /&gt;
(web-3.schedulecafe.com)&lt;br /&gt;
&lt;br /&gt;
66.63.129.106                  6499MB/16381    18.32GB/268.25 free&lt;br /&gt;
admin-1.nethere.net&lt;br /&gt;
admin-2.nethere.net&lt;br /&gt;
pike.nethere.net- cacti, nagios- off&lt;br /&gt;
shark.nethere.net- ldap&lt;br /&gt;
ntdb-1.nethere.net&lt;br /&gt;
winrestore64&lt;br /&gt;
&lt;br /&gt;
207.167.93.106- vmware6.eng    4156MB/8185MB    45.94GB/267GB free&lt;br /&gt;
ntweb-2.nethere.net&lt;br /&gt;
unixweb-12.nethere.net (no web sites on server)&lt;br /&gt;
mailbox-1.nethere.net&lt;br /&gt;
&lt;br /&gt;
207.167.93.108- vmware8.eng&lt;br /&gt;
mailx-2.nethere.net&lt;br /&gt;
mx-2.nethere.net&lt;br /&gt;
&lt;br /&gt;
207.167.93.110- vmware10.eng&lt;br /&gt;
unixweb-11.nethere.net&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Hardware Hosts ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Cab 6-08&lt;br /&gt;
&lt;br /&gt;
siron-3 (off)&lt;br /&gt;
siron-2&lt;br /&gt;
siron-1&lt;br /&gt;
106&lt;br /&gt;
104&lt;br /&gt;
sndg-br-1&lt;br /&gt;
sndg-br-2&lt;br /&gt;
&lt;br /&gt;
mailbox-2.nethere.net  (off - virtualized)&lt;br /&gt;
mailbox-3.nethere.net  &lt;br /&gt;
mailbox-1.nethere.net  (off - virtualized)&lt;br /&gt;
&lt;br /&gt;
scan-1 (OFF)&lt;br /&gt;
&lt;br /&gt;
scan-2  &lt;br /&gt;
&lt;br /&gt;
129.2&lt;br /&gt;
102&lt;br /&gt;
101&lt;br /&gt;
&lt;br /&gt;
105&lt;br /&gt;
103&lt;br /&gt;
&lt;br /&gt;
Backup-2&lt;br /&gt;
Backup-1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Cab 5-02&lt;br /&gt;
&lt;br /&gt;
ntweb-3&lt;br /&gt;
unixweb-7&lt;br /&gt;
&lt;br /&gt;
unixweb-3&lt;br /&gt;
unixweb-2 (now virtualized)&lt;br /&gt;
unixweb-10&lt;br /&gt;
&lt;br /&gt;
ntdb-2 or 3?&lt;br /&gt;
&lt;br /&gt;
vmware6.eng&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Admin Access ==&lt;br /&gt;
&lt;br /&gt;
Access to the Admin Systems is protected by firewalls and&lt;br /&gt;
application filters based on IP addresses.&lt;br /&gt;
&lt;br /&gt;
=== Main Firewall ===&lt;br /&gt;
The main firewall is a Cisco 3750 on sndg-cr-1.&lt;br /&gt;
&lt;br /&gt;
==== IP Access List Editing ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1. enable&lt;br /&gt;
&lt;br /&gt;
2. show ip access-lists access-list-name &lt;br /&gt;
&lt;br /&gt;
   show ip access-list BLOCKED&lt;br /&gt;
&lt;br /&gt;
3. configure terminal&lt;br /&gt;
&lt;br /&gt;
4. ip access-list resequence access-list-name starting-sequence-number increment&lt;br /&gt;
&lt;br /&gt;
5. ip access-list {standard | extended} access-list-name&lt;br /&gt;
    &lt;br /&gt;
   ip access-list extended BLOCKED &lt;br /&gt;
&lt;br /&gt;
6. sequence-number permit source source-wildcard&lt;br /&gt;
   sequence-number deny protocol source source-wildcard destination destination-wildcard [precedence precedence] [tos tos] [log] [time-range time-range-name] [fragments]&lt;br /&gt;
   no sequence-number &lt;br /&gt;
 &lt;br /&gt;
7. end&lt;br /&gt;
&lt;br /&gt;
8. show ip access-lists access-list-name&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Host Firewall ===&lt;br /&gt;
On the servers, there is often a firewall such as ipfw, ipf, or pf.&lt;br /&gt;
&lt;br /&gt;
==== Toolbox ====&lt;br /&gt;
&lt;br /&gt;
From root@admin-1 connect to scribe &lt;br /&gt;
  ssh scribe&lt;br /&gt;
&lt;br /&gt;
Stop the scrolling of messages to the screen&lt;br /&gt;
  /etc/rc.d/syslogd stop&lt;br /&gt;
&lt;br /&gt;
Add the new IP address to the pf config.&lt;br /&gt;
  vi /etc/pf.conf&lt;br /&gt;
&lt;br /&gt;
Reload the firewall rules&lt;br /&gt;
  pfctl -F all -f /etc/pf.conf&lt;br /&gt;
&lt;br /&gt;
Add the new IP address to the apache config.&lt;br /&gt;
  cd /nethere/conf/apache/conf/vhosts&lt;br /&gt;
  co -l prov.nethere.net  scribe.nethere.net.common&lt;br /&gt;
  vi prov.nethere.net&lt;br /&gt;
  vi scribe.nethere.net.common&lt;br /&gt;
  ci -u prov.nethere.net  scribe.nethere.net.common&lt;br /&gt;
&lt;br /&gt;
Reload the apache config&lt;br /&gt;
  apachectl restart&lt;br /&gt;
&lt;br /&gt;
Restart the syslog daemon&lt;br /&gt;
    /etc/rc.d/syslogd start&lt;br /&gt;
&lt;br /&gt;
==== Nagios ====&lt;br /&gt;
&lt;br /&gt;
From root@admin-1 connect to pike &lt;br /&gt;
  ssh nagios&lt;br /&gt;
&lt;br /&gt;
Add the new IP address to the apache config.&lt;br /&gt;
  cd /nethere/conf/apache/conf/vhosts&lt;br /&gt;
  co -l nagios.nethere.net cacti.nethere.net mrtg.nethere.net&lt;br /&gt;
  vi nagios.nethere.net cacti.nethere.net mrtg.nethere.net&lt;br /&gt;
  ci -u nagios.nethere.net cacti.nethere.net mrtg.nethere.net&lt;br /&gt;
&lt;br /&gt;
Reload the apache config&lt;br /&gt;
  apachectl restart&lt;br /&gt;
&lt;br /&gt;
=== Application Filter ===&lt;br /&gt;
In the applications, there is often a restriction in the host application&lt;br /&gt;
configuration file.&lt;br /&gt;
  vi /usr/local/etc/apache/conf/vhosts/&amp;lt;host&amp;gt;.conf&lt;br /&gt;
&lt;br /&gt;
Restart the application&lt;br /&gt;
  apachectl restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== NetHere/zNET/simplyweb Cert replacement ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you wish to reuse the old CSR, GlobalSign keeps the old&lt;br /&gt;
CSR on file, so all you have to replace is the cert.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;To generate a new CSR for each&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
openssl req -nodes -newkey rsa:2048 -keyout wild.nethere.key   -out wild.nethere.csr&lt;br /&gt;
openssl req -nodes -newkey rsa:2048 -keyout wild.simplyweb.key -out wild.simplyweb.csr&lt;br /&gt;
openssl req -nodes -newkey rsa:2048 -keyout wild.znet.key      -out wild.znet.csr&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Get a wild card cert for each company&lt;br /&gt;
(nethere.net, simplyweb.net, znet.net)&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
  https://www.globalsign.com/en/&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Update certs, keys, and/or pem files on the following servers.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
webmail-1.mail (webmail for nh, si, and zn)          /nethere/conf/apache/pki/ssl.crt&lt;br /&gt;
prov-1.mail (cp for nh, si, and zn)                  /nethere/conf/apache/pki/ssl.crt&lt;br /&gt;
mta-1 and mta-2 (smtp)                       admin-2:/dist/files/nhmta/nethere/conf/postfix/nh/ssl&lt;br /&gt;
                                                     /nethere/conf/postfix/nh/ssl&lt;br /&gt;
                                                     /nethere/conf/postfix/si/ssl&lt;br /&gt;
                                                     /nethere/conf/postfix/zn/ssl&lt;br /&gt;
mailbox-1, mailbox-2, and mailbox-3 (pop3 and imap) (courier-imap)&lt;br /&gt;
                                                     /nethere/conf/courier-imap/nh&lt;br /&gt;
                                                     /nethere/conf/courier-imap/si&lt;br /&gt;
                                                     /nethere/conf/courier-imap/zn&lt;br /&gt;
              (smtp)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Provisioning ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Webservers ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
OS: FreeBSD 4.11-RELEASE-p10 #23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== General ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
unixweb-1 through 8&lt;br /&gt;
  &lt;br /&gt;
All have the same configuration except for the following exceptions:&lt;br /&gt;
&lt;br /&gt;
unixweb-4: ZN Front Page server (deprecated)&lt;br /&gt;
unixweb-5: ZN Front Page, Miva Merchant server&lt;br /&gt;
unixweb-6: SI shared counter server (for SI sites): /www/lucy.inetworld.net/htdocs/cgi-bin/Count.cgi &lt;br /&gt;
&lt;br /&gt;
Server Software Installed&lt;br /&gt;
&lt;br /&gt;
Apache&lt;br /&gt;
	# httpd -v&lt;br /&gt;
	Server version: Apache/2.0.55&lt;br /&gt;
	Server built:   Apr  5 2006 17:04:01&lt;br /&gt;
PHP&lt;br /&gt;
	# /usr/local/bin/php4 -v&lt;br /&gt;
	PHP 4.4.2 (cgi-fcgi) (built: Mar  2 2006 09:31:57)&lt;br /&gt;
&lt;br /&gt;
Miva Merchant (unixweb-5)&lt;br /&gt;
MySQL&lt;br /&gt;
	# mysqladmin version -u root -p&lt;br /&gt;
	Server version          4.0.25&lt;br /&gt;
&lt;br /&gt;
Apache configurations are in the following directories:&lt;br /&gt;
&lt;br /&gt;
/nethere/conf/apache/conf/nvhosts  (name based hosting); naming convention is www.domain.tld&lt;br /&gt;
/nethere/conf/apache/conf/vhosts (IP-based, SSL); naming convention is www.domain.tld (IP information) and www.domain.tld.common (general site information)&lt;br /&gt;
&lt;br /&gt;
NetHere specific scripts are located in:&lt;br /&gt;
/nethere/sbin&lt;br /&gt;
&lt;br /&gt;
Checking Server/site status via mod_status apache module:&lt;br /&gt;
&lt;br /&gt;
# apachectl stop&lt;br /&gt;
# /usr/local/etc/rc.d/httpd.sh start-status&lt;br /&gt;
&lt;br /&gt;
URL: http://unixweb-#/status (replace # with the unixweb number)&lt;br /&gt;
** NOTE: After viewing status, do the following; otherwise, other users can also view status by using .htaccess.&lt;br /&gt;
# apachectl stop&lt;br /&gt;
# apachectl startssl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Provisioning ====&lt;br /&gt;
Provisioning new sites:&lt;br /&gt;
&lt;br /&gt;
*** All website provisioning is done via NH script (webadd) on sawfish (admin-1.nethere.net) ***&lt;br /&gt;
# /nethere/sbin/webadd -h for usage&lt;br /&gt;
&lt;br /&gt;
Note: We&#039;ve discontinued new website provisioning on all servers except for unixweb-6, unless otherwise noted for the domain.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== FTP Password Adjustment ====&lt;br /&gt;
&lt;br /&gt;
===== Unix Server =====&lt;br /&gt;
To change an FTP password:&lt;br /&gt;
&lt;br /&gt;
On the customer&#039;s server, as root, run:&lt;br /&gt;
&lt;br /&gt;
 passwd &amp;lt;username&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(The password prompt will not echo your entry.)&lt;br /&gt;
&lt;br /&gt;
===== Windows Server =====&lt;br /&gt;
&lt;br /&gt;
To change an FTP password:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Find the user&#039;s login (i.e. aa5014) in the provisioning tool.&lt;br /&gt;
Use Remote Desktop to get into the server.&lt;br /&gt;
On Desktop find FTP application Icon.&lt;br /&gt;
Use that tool to update the user&#039;s password.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Domain Aliasing ====&lt;br /&gt;
&lt;br /&gt;
For domain aliasing:&lt;br /&gt;
&lt;br /&gt;
1) Edit the /named/named.master on phoenix (ns1.nethere.net)&lt;br /&gt;
  a) Checkout the file; open with vi&lt;br /&gt;
     # co -l named.master&lt;br /&gt;
     # vi named.master&lt;br /&gt;
  b) find the domain that&#039;s serving as the master domain, then, following the general format of the file, add the domain aliases under the master domain entry, using the master domain zone file&lt;br /&gt;
     i.e.  &lt;br /&gt;
     --cut--&lt;br /&gt;
     zone &amp;quot;domain.tld&amp;quot; { type master; file &amp;quot;master/domain.tld&amp;quot;; };&lt;br /&gt;
     zone &amp;quot;aliased_domain.tld&amp;quot; { type master; file &amp;quot;master/master_domain.tld&amp;quot;; }; &lt;br /&gt;
     --cut--&lt;br /&gt;
  c) save the file, then check-in:&lt;br /&gt;
     # ci -u named.master &amp;lt; /dev/null&lt;br /&gt;
  d) update the zone records, using the makefile in /named; check for errors&lt;br /&gt;
     # make new-zone&lt;br /&gt;
     # tail /var/log/named&lt;br /&gt;
  **Note: some domains are IP-based virtual hosts (i.e. with an SSL certificate). This requires a modification to the standard aliasing procedure. If this is the case, do the following:&lt;br /&gt;
  e) Create a new zone file called &amp;quot;domain.tld-alias&amp;quot; in /named/master, copy the current domain.tld zone file into it, and adjust the &amp;quot;website&amp;quot; records: remove whatever IP-based information is there and replace it with the appropriate $INCLUDE name-based host template.&lt;br /&gt;
     e.g.&lt;br /&gt;
     tropicalshade.net:&lt;br /&gt;
     --cut--&lt;br /&gt;
     ;; ntweb-4&lt;br /&gt;
     @               IN      A       66.63.136.4&lt;br /&gt;
     www             IN      A       66.63.136.4&lt;br /&gt;
     --cut--&lt;br /&gt;
     tropicalshade.net-alias&lt;br /&gt;
     --cut--&lt;br /&gt;
     ;; website&lt;br /&gt;
     $INCLUDE master/nvhost.ntweb-4&lt;br /&gt;
     --cut--&lt;br /&gt;
  f) Adjust the named.master zone file in /named accordingly:&lt;br /&gt;
     i.e.  &lt;br /&gt;
     --cut--&lt;br /&gt;
     zone &amp;quot;aliased_domain.tld&amp;quot; { type master; file &amp;quot;master/master_domain.tld-alias&amp;quot;; }; &lt;br /&gt;
     --cut--&lt;br /&gt;
&lt;br /&gt;
    &lt;br /&gt;
2) Edit the Apache configuration for the domain on the web server the master domain is hosted on&lt;br /&gt;
   a) Script is /nethere/sbin/webalias&lt;br /&gt;
      # /nethere/sbin/webalias -h (for usage)&lt;br /&gt;
      e.g. # /nethere/sbin/webalias -d www.originaldomain.tld -a www.domainalias.tld&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== CGI Scripting ====&lt;br /&gt;
&lt;br /&gt;
For security, we&#039;ve implemented suexec on all sites.&lt;br /&gt;
All CGI *must* be placed in the cgi-bin/ directory (and/or the cgibin/ directory for unixweb-4, 5)&lt;br /&gt;
&lt;br /&gt;
CGI permissions break down as follows:&lt;br /&gt;
&lt;br /&gt;
Ownership: user=username, group=webuser&lt;br /&gt;
Permission: cgi-bin/ = 755; files = 755&lt;br /&gt;
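As a sketch, the ownership and permission rules above can be applied and checked like this; a temp directory stands in for /www/www.domain.tld, and the username:webuser ownership is shown only as a comment since those accounts won't exist everywhere:

```shell
# Hedged sketch of the suexec permission rules above; a temp dir stands in
# for /www/www.domain.tld so this runs without the real accounts existing.
site=$(mktemp -d)
mkdir "$site/cgi-bin"
printf '#!/bin/sh\necho "Content-type: text/plain"; echo; echo ok\n' > "$site/cgi-bin/test.cgi"
# On a real server, also set ownership: chown -R username:webuser "$site/cgi-bin"
chmod 755 "$site/cgi-bin" "$site/cgi-bin/test.cgi"
ls -ld "$site/cgi-bin"    # should show drwxr-xr-x
```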
&lt;br /&gt;
suexec errors can be found here:&lt;br /&gt;
/www/default/logs/suexec_log&lt;br /&gt;
&lt;br /&gt;
cgi errors for a domain can be found in the main apache error logs for the domain:&lt;br /&gt;
/www/www.domain.tld/logs/error_log&lt;br /&gt;
&lt;br /&gt;
More information on CGI scripting can be found here:&lt;br /&gt;
http://httpd.apache.org/docs/1.3/howto/cgi.html&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Formmail Provisioning ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Formmail is provisioned via NH script (newformmail) on the server the domain is hosted on.&lt;br /&gt;
# /nethere/sbin/newformmail -h for usage&lt;br /&gt;
&lt;br /&gt;
Usage: newformmail [-h] [-d domain] [-i ip_address] [-e &amp;quot;email1 email2&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Disabling a site ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To disable/enable a website, there is a script (webstatus) located on each webserver:&lt;br /&gt;
/nethere/sbin/webstatus -h for usage&lt;br /&gt;
&lt;br /&gt;
1) Disable: &lt;br /&gt;
   # /nethere/sbin/webstatus -r -d -s www.domain.tld&lt;br /&gt;
&lt;br /&gt;
2) Enable:&lt;br /&gt;
   # /nethere/sbin/webstatus -r -e -s www.domain.tld&lt;br /&gt;
&lt;br /&gt;
==== Stats Provisioning ====&lt;br /&gt;
&lt;br /&gt;
We utilize webalizer for statistics.&lt;br /&gt;
&lt;br /&gt;
Stats are provisioned via NH scripts on the server the domain is hosted on.&lt;br /&gt;
&lt;br /&gt;
1) Provision stats for the domain (webaddstats_unix):&lt;br /&gt;
# /nethere/sbin/webaddstats_unix -d www.domain.tld&lt;br /&gt;
&lt;br /&gt;
2) Run stats for the domain (runstats_unix):&lt;br /&gt;
# /nethere/sbin/runstats_unix www.domain.tld&lt;br /&gt;
&lt;br /&gt;
More information on Webalizer can be found here:&lt;br /&gt;
http://www.mrunix.net/webalizer&lt;br /&gt;
README: ftp://ftp.mrunix.net/pub/webalizer/README&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Removing a site ====&lt;br /&gt;
&lt;br /&gt;
1) Site removal is primarily done via a NH script (webdel) on the webserver the domain is hosted on:&lt;br /&gt;
   # /nethere/sbin/webdel -h for usage&lt;br /&gt;
   # /nethere/sbin/webdel -r -d www.tldomain.com &lt;br /&gt;
&lt;br /&gt;
2) Update DNS accordingly (ns1.nethere.net)&lt;br /&gt;
   a) Checkout /named/named.master&lt;br /&gt;
      # co -l named.master&lt;br /&gt;
   b) Remove line containing domain&lt;br /&gt;
   c) Checkin /named/named.master&lt;br /&gt;
      # ci -u named.master &amp;lt; /dev/null&lt;br /&gt;
   d) Update zones&lt;br /&gt;
      # make new-zone&lt;br /&gt;
   e) Move zone files from /named/master to /named/archive&lt;br /&gt;
      # mv domain&lt;br /&gt;
&lt;br /&gt;
==== starter removal ==== &lt;br /&gt;
&lt;br /&gt;
All starter sites on home-1.nethere.net&lt;br /&gt;
&lt;br /&gt;
1) Removal primarily done via a NH script (webdel_home) on home-1&lt;br /&gt;
   # /nethere/sbin/webdel_home -h for usage&lt;br /&gt;
   i.e. webdel_home -p zn -s startername  &amp;lt;== for removing znet&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Disk Quota increases ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Quotas are increased by using the &amp;quot;edquota&amp;quot; command:&lt;br /&gt;
# edquota username&lt;br /&gt;
Note that quotas are in KB, so a conversion is necessary. Also note that the &amp;quot;soft&amp;quot; quota is 1 MB less than the &amp;quot;hard&amp;quot; quota.&lt;br /&gt;
&lt;br /&gt;
The calculations are as follows:&lt;br /&gt;
&amp;quot;soft&amp;quot; =  ( quota(in MB) - 1 ) x 1024&lt;br /&gt;
&amp;quot;hard&amp;quot; =  quota(in MB) x 1024&lt;br /&gt;
e.g. for a 500 MB quota:&lt;br /&gt;
soft =&amp;gt; (500 - 1) x 1024 = 510976&lt;br /&gt;
hard =&amp;gt; 500 x 1024 = 512000&lt;br /&gt;
&lt;br /&gt;
To check a quota:&lt;br /&gt;
# quota -v username&lt;br /&gt;
&lt;br /&gt;
You can also use the prototype users:&lt;br /&gt;
&lt;br /&gt;
# edquota -p quota100 username  (100 MB)&lt;br /&gt;
# edquota -p quota500 username  (500 MB)&lt;br /&gt;
# edquota -p quota1000 username  (1000 MB)&lt;br /&gt;
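The soft/hard arithmetic above can be sketched in shell (values in KB; quota_mb is the desired quota in MB):

```shell
# Soft/hard quota arithmetic from above, in KB; soft is 1 MB under hard.
quota_mb=500
soft=$(( (quota_mb - 1) * 1024 ))
hard=$(( quota_mb * 1024 ))
echo "soft=$soft hard=$hard"    # soft=510976 hard=512000
```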
&lt;br /&gt;
==== Granting of shell  - Enterprise packages ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Shell is granted by the &amp;quot;chsh&amp;quot; command. By default, C shell (csh) is used:&lt;br /&gt;
# chsh -s shell_needed username&lt;br /&gt;
e.g. # chsh -s csh username&lt;br /&gt;
&lt;br /&gt;
*Note: customers must provide us with the static IP address(es) they will be connecting from. The IP address(es) must be added to the configuration on admin-1.&lt;br /&gt;
1) Modify the corresponding rules files on admin-1:&lt;br /&gt;
/dist/files/nhweb/etc/ipf.rules.fxp0 , .em0 (RCS controlled)&lt;br /&gt;
2) Push updates to servers&lt;br /&gt;
# cd /dist/rdist&lt;br /&gt;
# gmake nhweb&lt;br /&gt;
# gmake nhweb-update&lt;br /&gt;
3) Reload the rules on the corresponding server that needs the access change&lt;br /&gt;
# ipf -Fa -f /etc/ipf.rules&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== SSL certificate installs ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) Whois the site for Registrant and email information - to be used for SSL generation&lt;br /&gt;
# whois domain.tld&lt;br /&gt;
&lt;br /&gt;
2) Create SSL self signed certificate, get CSR for customer to sign&lt;br /&gt;
  a) SSL generation is done by script: /nethere/conf/apache/pki/newsslcert.sh &lt;br /&gt;
  b) Copy CSR for customer&lt;br /&gt;
&lt;br /&gt;
3) Create/update Apache configuration files via NH script (webadd_ssl)&lt;br /&gt;
  # /nethere/sbin/webadd_ssl -h for usage&lt;br /&gt;
  a) webadd_ssl [-h] [-d domain] [-s ssl_domain] [-n]&lt;br /&gt;
  # webadd_ssl -d www.domain.tld -s www.domain.tld &lt;br /&gt;
 &lt;br /&gt;
4) Update DNS zone with new IP address - done on ns1.nethere.net&lt;br /&gt;
  a) Check out DNS zone file in /named/master&lt;br /&gt;
     # co -l domain.tld&lt;br /&gt;
  b) Adjust A records:&lt;br /&gt;
   --cut--&lt;br /&gt;
   ;; unixweb-##  &amp;lt;&amp;lt; enter the unixweb server number for ease of ID&lt;br /&gt;
   @		IN	A	vhost_ip_address&lt;br /&gt;
   www	IN	A	vhost_ip_address&lt;br /&gt;
   --cut--	&lt;br /&gt;
  c) Check in DNS zone file&lt;br /&gt;
     # ci -u domain.tld &amp;lt; /dev/null&lt;br /&gt;
  d) Reload zone file&lt;br /&gt;
     # rndc reload domain.tld&lt;br /&gt;
  e) check /var/log/named for errors&lt;br /&gt;
&lt;br /&gt;
5) Update DNS PTR record for IP address - done on ns1.nethere.net&lt;br /&gt;
  Note: Assuming IP address a.b.c.d&lt;br /&gt;
  a) Check out in-addr.arpa zone for IP address in /named/master&lt;br /&gt;
     # co -l a.b.c &lt;br /&gt;
  b) Follow the format for PTR records&lt;br /&gt;
   --cut--&lt;br /&gt;
   d	IN	PTR	www.domain.tld.&lt;br /&gt;
   --cut--&lt;br /&gt;
  c) Check in PTR zone file&lt;br /&gt;
     # ci -u a.b.c &amp;lt; /dev/null&lt;br /&gt;
  d) Reload PTR zone &lt;br /&gt;
     # rndc reload c.b.a.in-addr.arpa&lt;br /&gt;
  e) check /var/log/named for errors&lt;br /&gt;
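As a small helper sketch, the c.b.a.in-addr.arpa zone name used in the rndc reload step can be derived from an address a.b.c.d like this (the IP is just an example):

```shell
# Derive the reverse (in-addr.arpa) zone name for a.b.c.d, as used in
# "rndc reload c.b.a.in-addr.arpa" above. Example IP only.
ip=66.63.152.234
rev_zone=$(echo "$ip" | awk -F. '{ printf "%s.%s.%s.in-addr.arpa\n", $3, $2, $1 }')
echo "$rev_zone"    # 152.63.66.in-addr.arpa
```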
&lt;br /&gt;
&lt;br /&gt;
==== Updating SSL certificate ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) cd to  /nethere/conf/apache/pki/ssl.crt on server site is hosted on&lt;br /&gt;
   &lt;br /&gt;
2) Check out (RCS) www.domain.tld.crt file&lt;br /&gt;
   # co -l www.domain.tld.crt&lt;br /&gt;
&lt;br /&gt;
3) Edit the file, remove old certificate, paste in new certificate&lt;br /&gt;
&lt;br /&gt;
4) Check in (RCS) the www.domain.tld.crt file&lt;br /&gt;
   # ci -u www.domain.tld.crt &amp;lt; /dev/null&lt;br /&gt;
&lt;br /&gt;
5) Check, restart Apache&lt;br /&gt;
   # apachectl configtest&lt;br /&gt;
   # apachectl stop&lt;br /&gt;
   # apachectl startssl&lt;br /&gt;
&lt;br /&gt;
6) Verify httpd started: &lt;br /&gt;
   # ps auxw | grep httpd&lt;br /&gt;
  a) If no processes are running, you will need to revert to the old SSL cert and restart Apache. Check the logs for errors:&lt;br /&gt;
     # view /www/default/logs/ssl_engine.log&lt;br /&gt;
  b) Check for &amp;quot;Unable to configure RSA server private key&amp;quot; and &amp;quot;key values mismatch&amp;quot; entries - this means a bad SSL certificate&lt;br /&gt;
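One way to catch a key-values mismatch before restarting Apache is to compare the certificate and key moduli. This is a hedged sketch, not part of the standard procedure; it generates a throwaway self-signed pair so the commands can be run anywhere, and the filenames are placeholders:

```shell
# Hedged check: a "key values mismatch" error means the cert and key moduli
# differ. Compare them before restarting Apache. Throwaway pair for demo.
openssl req -x509 -nodes -newkey rsa:2048 -subj "/CN=www.domain.tld" \
    -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 2>/dev/null
crt_mod=$(openssl x509 -noout -modulus -in /tmp/demo.crt | openssl md5)
key_mod=$(openssl rsa  -noout -modulus -in /tmp/demo.key | openssl md5)
[ "$crt_mod" = "$key_mod" ] && echo "cert and key match"
```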
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Password protection ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Http (simple) password protection is governed by the Apache configuration for the domain&lt;br /&gt;
&lt;br /&gt;
1) Create userdb, users file in the domain root directory (/www/www.domain.tld):&lt;br /&gt;
   # mkdir userdb&lt;br /&gt;
   # cd userdb&lt;br /&gt;
   # htpasswd -bc users username password&lt;br /&gt;
&lt;br /&gt;
2) Check out Apache config for www.domain.tld in /nethere/conf/apache/conf/&amp;lt;vhosts,nvhosts&amp;gt;&lt;br /&gt;
   # co -l www.domain.tld(.common)  &lt;br /&gt;
&lt;br /&gt;
3) Edit Apache configuration, add the following lines within the VirtualHost container  &lt;br /&gt;
--cut--&lt;br /&gt;
&amp;lt;Directory &amp;quot;/www/www.domain.tld/dir_to_be_protected&amp;quot;&amp;gt;&lt;br /&gt;
    AuthType Basic&lt;br /&gt;
    AuthName &amp;quot;www.domain.tld/dir_to_be_protected authentication&amp;quot;&lt;br /&gt;
    AuthUserFile /www/www.domain.tld/userdb/users &lt;br /&gt;
    &amp;lt;Limit GET POST&amp;gt;&lt;br /&gt;
      require valid-user&lt;br /&gt;
    &amp;lt;/Limit&amp;gt;&lt;br /&gt;
  &amp;lt;/Directory&amp;gt;&lt;br /&gt;
--cut--&lt;br /&gt;
&lt;br /&gt;
4) Check in Apache config&lt;br /&gt;
   # ci -u www.domain.tld(.common) &amp;lt; /dev/null  &lt;br /&gt;
&lt;br /&gt;
5) Restart Apache&lt;br /&gt;
   # apachectl configtest&lt;br /&gt;
   # apachectl restart&lt;br /&gt;
&lt;br /&gt;
More on http (simple) password protection can be found here:&lt;br /&gt;
http://httpd.apache.org/docs/1.3/howto/auth.html#basic&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== .htaccess ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Used if customers want control of certain Apache directives (i.e. Authentication, etc.)&lt;br /&gt;
&lt;br /&gt;
1) Check out Apache config for www.domain.tld in /nethere/conf/apache/conf/&amp;lt;vhosts,nvhosts&amp;gt;&lt;br /&gt;
   # co -l www.domain.tld(.common)  &lt;br /&gt;
&lt;br /&gt;
2) Add the AllowOverride directive in the &amp;lt;Directory&amp;gt; section, under the PHP FCGIWrapper&lt;br /&gt;
   i.e.&lt;br /&gt;
  &amp;lt;Directory &amp;quot;/www/www.domain.tld/htdocs&amp;quot;&amp;gt;&lt;br /&gt;
    FCGIWrapper /www/www.domain.tld/htdocs/cgi-bin/php4 .php&lt;br /&gt;
    AllowOverride AuthConfig FileInfo Indexes Limit&lt;br /&gt;
  &amp;lt;/Directory&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3) Check in Apache config&lt;br /&gt;
   # ci -u www.domain.tld(.common) &amp;lt; /dev/null  &lt;br /&gt;
&lt;br /&gt;
4) Restart Apache&lt;br /&gt;
   # apachectl configtest&lt;br /&gt;
   # apachectl restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More info on Allow Override can be found here: http://httpd.apache.org/docs/1.3/mod/core.html#allowoverride&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Domain re-provisioning ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
## NOTE: The following is for a domain being re-provisioned on the same server (i.e. just being renamed)&lt;br /&gt;
&lt;br /&gt;
1) Adjust DNS - ns1.nethere.net&lt;br /&gt;
  a) Rename the DNS zone file to the new domain, remove the old DNS zone files from /named/master and /named/master/RCS&lt;br /&gt;
  b) Check in the new DNS zone file&lt;br /&gt;
  c) Edit /named/named.master - replace the old domain with the new one, alias as necessary&lt;br /&gt;
  d) Reload DNS zones - make new-zone in /named&lt;br /&gt;
&lt;br /&gt;
2) Adjust Apache config - server domain is hosted on&lt;br /&gt;
  a) Find current config file(s) in /nethere/conf/apache/conf/&amp;lt;vhosts,nvhosts&amp;gt;&lt;br /&gt;
  b) Replace old domain name entries with new domain name, alias as necessary&lt;br /&gt;
  c) Save as new_domain.tld&lt;br /&gt;
  d) Remove old_domain.tld(.common), RCS/old_domain.tld(.common)&lt;br /&gt;
  e) Check in new_domain.tld via RCS:&lt;br /&gt;
	# ci -u new_domain.tld &amp;lt; /dev/null&lt;br /&gt;
  f) Check out Apache include configuration file via RCS /nethere/conf/apache/conf/&amp;lt;nvhosts.conf,vhosts.conf&amp;gt; &lt;br /&gt;
  g) Edit nvhosts.conf or vhosts.conf, replace old_domain.tld entries with new_domain.tld&lt;br /&gt;
  h) Check in via RCS /nethere/conf/apache/conf/&amp;lt;nvhosts.conf,vhosts.conf&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
3) Rename directory for new domain&lt;br /&gt;
   # cd /www&lt;br /&gt;
   # mv www.old_domain.tld www.new_domain.tld&lt;br /&gt;
&lt;br /&gt;
4) Adjust PHP stub files/configuration&lt;br /&gt;
  a) Adjust php.ini file in /www/www.domain.tld/(php4,php5)&lt;br /&gt;
  b) Adjust PHP stub files in /www/www.domain.tld/htdocs/cgi-bin/(php4,php5) &lt;br /&gt;
  *note: need to chflags to &amp;quot;noschg&amp;quot; for /www/www.domain.tld/htdocs/cgi-bin/(php4,php5) before being able to update the stub files&lt;br /&gt;
 	 after adjusting, be sure to chflags schg /www/www.domain.tld/htdocs/cgi-bin/(php4,php5)&lt;br /&gt;
&lt;br /&gt;
5) Edit password file (vipw), replace old_domain.tld entries with new_domain.tld&lt;br /&gt;
   # vipw&lt;br /&gt;
   &lt;br /&gt;
6) Restart Apache&lt;br /&gt;
   # apachectl configtest&lt;br /&gt;
   # apachectl restart&lt;br /&gt;
&lt;br /&gt;
## NOTE: The following is for a domain being re-provisioned on a new server&lt;br /&gt;
&lt;br /&gt;
1) Run NH script &amp;quot;webadd&amp;quot; on sawfish to provision domain on new server, *DO NOT* reload DNS, use same user/pass as before.&lt;br /&gt;
2) After customer has uploaded site to new server and gives the ok do the following:&lt;br /&gt;
  a) Update DNS zone for domain.tld; reload zone for domain.tld&lt;br /&gt;
  b) Wait 48 hours, then remove the site off the old server&lt;br /&gt;
     # /nethere/sbin/webdel&lt;br /&gt;
&lt;br /&gt;
==== FTP space provisioning ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Generally FTP sites are provisioned on the server that hosts the main website. In the case of NT based FTP sites, we usually provision them on the server with the most space available.&lt;br /&gt;
&lt;br /&gt;
1) Provision site on server&lt;br /&gt;
  a) Done via NH script (webadd_ftp): webadd_ftp [-h] [-d domain] [-u username] [-p password] [-n]&lt;br /&gt;
     # /nethere/sbin/webadd_ftp -h  &amp;lt;- for usage&lt;br /&gt;
     EX: for ftp.domain.tld:&lt;br /&gt;
     # /nethere/sbin/webadd_ftp -d ftp.domain.tld -u username -p password&lt;br /&gt;
     (leaving off the -n will restart the proftpd process)&lt;br /&gt;
  b) Note: The host IP address will be given when the provisioning is completed, use that ip (a.b.c.d) for DNS entries&lt;br /&gt;
  c) Note: If this is an existing customer on the server, you&#039;ll need to increase the quota manually by 100 MB for the customer, see the section on quota increases for more info.&lt;br /&gt;
&lt;br /&gt;
2) Adjust DNS for domain on ns1.nethere.net&lt;br /&gt;
  a) Checkout zone for domain&lt;br /&gt;
  b) Add ftp host entry for domain, adjust serial&lt;br /&gt;
     EX for domain.tld:&lt;br /&gt;
--cut--&lt;br /&gt;
ftp	IN	A	a.b.c.d&lt;br /&gt;
--cut--&lt;br /&gt;
  c) Check in zone for domain&lt;br /&gt;
  d) Reload zone&lt;br /&gt;
     # rndc reload domain.tld&lt;br /&gt;
3) Adjust PTR record for domain, adjust serial&lt;br /&gt;
  a) Checkout zone for a.b.c&lt;br /&gt;
  b) Add record for domain:&lt;br /&gt;
--cut--&lt;br /&gt;
d	IN	PTR 	ftp.domain.tld.&lt;br /&gt;
--cut--&lt;br /&gt;
  c) Check in zone for a.b.c&lt;br /&gt;
  d) Reload zone&lt;br /&gt;
     # rndc reload c.b.a.in-addr.arpa&lt;br /&gt;
4) Check for DNS errors&lt;br /&gt;
  a) tail /var/log/named&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== SiteBuilder provisioning ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
unixweb-7.nethere.net&lt;br /&gt;
&lt;br /&gt;
*Must re-provision site on unixweb-7.nethere.net (if not already done)&lt;br /&gt;
&lt;br /&gt;
1) Log in to SB admin:&lt;br /&gt;
   http://sitebuilder.nethere.net/admin&lt;br /&gt;
  a) username: root&lt;br /&gt;
&lt;br /&gt;
2) Add site to SB config&lt;br /&gt;
  a) Click on Site Management -&amp;gt; Add regular&lt;br /&gt;
    * Alias is website username: i.e. aa####&lt;br /&gt;
    * Check the &amp;quot;Active&amp;quot; box  &lt;br /&gt;
    * Plan is &amp;quot;BasePlan&amp;quot;  &lt;br /&gt;
    * Password same as website&lt;br /&gt;
  b) Click on &amp;quot;Publish Properties&amp;quot;&lt;br /&gt;
    * Check &amp;quot;Allow publishing&amp;quot;&lt;br /&gt;
    * Site host name: www.domain.tld&lt;br /&gt;
    * FTP host: unixweb-7.nethere.net&lt;br /&gt;
    * FTP login/password: same as site user/pass&lt;br /&gt;
    * FTP working directory: leave blank&lt;br /&gt;
  c) Click Apply&lt;br /&gt;
&lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====  Name Servers            ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
OS: FreeBSD&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== General ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
ns1.nethere.net - Primary name servers for DNS zone records&lt;br /&gt;
 /named - contains the files that have all DNS domain zone entries (named.master, named.slave, named.acl) plus Makefile for distributing DNS records&lt;br /&gt;
 /named/master - contains all the domain zone files for which we are authoritative, as well as IP address zones (PTR records)&lt;br /&gt;
&lt;br /&gt;
ns2.nethere.net - Secondary (slave) name server for DNS zone records&lt;br /&gt;
&lt;br /&gt;
nsrbl-1.nethere.net - RBL (Realtime Blackhole List) DNS server&lt;br /&gt;
 /named/rbldns/cache - contains the files for domains that we specifically allow or deny&lt;br /&gt;
&lt;br /&gt;
nscache-1,2 - caching name servers&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Adding DNS website entries ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) Create a DNS zone file for domain.tld via NH script (zoneadd_vhost)&lt;br /&gt;
   # /nethere/sbin/zoneadd_vhost -h  (for usage)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Adding DNS IP entries ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) Create the forward and reverse DNS records for a netblock via NH script (zone_generate); it will create /tmp/customer.forward and /tmp/customer.reverse files to be read in&lt;br /&gt;
  a) /nethere/sbin/zone_generate -h for usage&lt;br /&gt;
    i.e. for netblock a.b.c, starting IP d, ending IP z&lt;br /&gt;
     # zone_generate -n a.b.c -b d -e z -p customer&lt;br /&gt;
&lt;br /&gt;
2) Checkout, edit the IP in-addr.arpa zone file, reload the zone&lt;br /&gt;
  a) # co -l a.b.c&lt;br /&gt;
  b) Search for the nearest netblock area for the domain, follow the format for customer info, read in the /tmp/abbrev.rdns file accordingly, and increase the Serial for the zone in YYYYMMDD## format&lt;br /&gt;
     e.g.&lt;br /&gt;
--cut--&lt;br /&gt;
;;;;&lt;br /&gt;
;; 66.63.152.232/30 (255.255.255.252)&lt;br /&gt;
;; Description: First Choice Home Improvement&lt;br /&gt;
;; Contact: Shannon Hill &amp;lt;firstchoicehi@hotmail.com&amp;gt;, (858) 277-5351&lt;br /&gt;
;; Location: AR-1, Serial3/0/18:0&lt;br /&gt;
;;;;&lt;br /&gt;
232     IN      PTR     firstchoice-net.access.nethere.net.&lt;br /&gt;
233     IN      PTR     firstchoice-gw.access.nethere.net.&lt;br /&gt;
234     IN      PTR     firstchoice-2.access.nethere.net.&lt;br /&gt;
235     IN      PTR     firstchoice-bcast.access.nethere.net.&lt;br /&gt;
--cut--&lt;br /&gt;
  c) # ci -u a.b.c &amp;lt; /dev/null&lt;br /&gt;
  d) # rndc reload c.b.a.in-addr.arpa&lt;br /&gt;
  e) verify loading of zone: # tail /var/log/named&lt;br /&gt;
3) Checkout, edit the forward DNS zone file, reload the zone&lt;br /&gt;
  a) # co -l access.nethere.net&lt;br /&gt;
  b) Search for the nearest netblock area for the domain, follow the format for the customer, read in the /tmp/abbrev.fdns file accordingly, and increase the Serial for the zone in YYYYMMDD## format&lt;br /&gt;
     e.g.&lt;br /&gt;
--cut--&lt;br /&gt;
;; 66.63.152.232/30 (255.255.255.252)&lt;br /&gt;
firstchoice-net 	IN      A       66.63.152.232&lt;br /&gt;
firstchoice-gw  	IN      A       66.63.152.233&lt;br /&gt;
firstchoice-2   	IN      A       66.63.152.234&lt;br /&gt;
firstchoice-bcast IN      A       66.63.152.235&lt;br /&gt;
--cut--&lt;br /&gt;
  c) # ci -u access.nethere.net &amp;lt; /dev/null&lt;br /&gt;
  d) # rndc reload access.nethere.net &lt;br /&gt;
  e) verify loading of zone: # tail /var/log/named&lt;br /&gt;
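As a minimal sketch (not the zone_generate script itself), the record lines it produces for a /30 can be mocked up like this; the netblock, base name, and the -net/-gw/-2/-bcast suffixes are taken from the First Choice example above, and the serial line shows the YYYYMMDD## convention:

```shell
# Sketch of zone_generate-style output for a /30 (placeholder names/netblock).
net=66.63.152; start=232; base=firstchoice
i=$start
for suffix in net gw 2 bcast; do
    # reverse (PTR) line, as read into the a.b.c zone file
    printf '%s\tIN\tPTR\t%s-%s.access.nethere.net.\n' "$i" "$base" "$suffix"
    # forward (A) line, as read into access.nethere.net
    printf '%s-%s\tIN\tA\t%s.%s\n' "$base" "$suffix" "$net" "$i"
    i=$(( i + 1 ))
done
echo "serial: $(date +%Y%m%d)01"    # YYYYMMDD## serial convention
```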
&lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Unblocking RBLd IP addresses ====&lt;br /&gt;
&lt;br /&gt;
There are two primary reasons why we add customers to the allow relay list:&lt;br /&gt;
a) They have a static IP address (i.e. DSL, T1) and wish to use our mail servers&lt;br /&gt;
b) They&#039;ve been blocked by one of our subscribed blackhole lists but have since patched their machine and are no longer an open relay.&lt;br /&gt;
&lt;br /&gt;
To do this on nsrbl-1.nethere.net:&lt;br /&gt;
&lt;br /&gt;
1) Checkout the allow.relays.nethere.net file located in /named/rbldns/cache&lt;br /&gt;
&lt;br /&gt;
2) Edit the file, and add the IP address in the following format:&lt;br /&gt;
--cut--&lt;br /&gt;
a.b.c.d	YYYYMMDD	hostname reason for listing&lt;br /&gt;
--cut--&lt;br /&gt;
&lt;br /&gt;
You can also add subnets via &#039;/&#039; notation for relay&lt;br /&gt;
--cut--&lt;br /&gt;
a.b.c.d/28	YYYYMMDD	hostname reason for listing&lt;br /&gt;
--cut--&lt;br /&gt;
&lt;br /&gt;
3) Check in the file&lt;br /&gt;
&lt;br /&gt;
4) Updates to the rbldns zone are done automatically on the hour, every hour, so no need to do anything else. &lt;br /&gt;
&lt;br /&gt;
Note:&lt;br /&gt;
&lt;br /&gt;
For &amp;quot;permanent&amp;quot; (i.e. customer static IP addresses), add the IP address in the &amp;quot;## permanent allowed relay (i.e. customer w/ static IP)&amp;quot; section&lt;br /&gt;
For &amp;quot;temporary&amp;quot; (i.e. blackhole listed IP addresses), add the IP address in the &amp;quot;## temporary&amp;quot; section&lt;br /&gt;
&lt;br /&gt;
Also, for temporary IP addresses, we need to send a note to the requester detailing the following:&lt;br /&gt;
a) If the IP address relays Spam/UCE/Viruses through us, it is to be removed permanently.&lt;br /&gt;
b) The customer must follow the steps listed on the blackhole list the IP was listed on to get removed.&lt;br /&gt;
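&lt;br /&gt;
As a quick sanity check before check-in, an entry line can be validated against the expected format (a minimal sketch; the entry values below are placeholders, not a real listing):&lt;br /&gt;
&lt;br /&gt;
```shell
# Hypothetical pre-check-in validation of an allow-relay entry line.
# Expected shape: "a.b.c.d[/len]  YYYYMMDD  hostname reason for listing"
entry='192.0.2.10/28	20201209	examplehost patched, no longer an open relay'
if printf '%s\n' "$entry" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+(/[0-9]+)?[[:space:]]+[0-9]{8}[[:space:]]+'; then
    echo "format ok"
else
    echo "format BAD"
fi
```
&lt;br /&gt;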
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Flushing DNS cache for a domain ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A domain&#039;s cache usually needs to be flushed because of a bad zone or an excessively long TTL.&lt;br /&gt;
&lt;br /&gt;
On nscache-1:&lt;br /&gt;
# /usr/local/sbin/rndc flushname domain.tld&lt;br /&gt;
&lt;br /&gt;
If the cache is still corrupted (i.e. zone lookups either fail or return incorrect results), stop and restart the caching server:&lt;br /&gt;
&lt;br /&gt;
# /etc/init.d/local.named stop&lt;br /&gt;
# /etc/init.d/local.named start&lt;br /&gt;
&lt;br /&gt;
==== SiteBuilder ====&lt;br /&gt;
http://sitebuilder.nethere.net/&lt;br /&gt;
-URL used to test sitebuilder&lt;br /&gt;
&lt;br /&gt;
http://sitebuilder.nethere.net/Login&lt;br /&gt;
-URL for the control panel. This is where the customer logs in to manage&lt;br /&gt;
their web site and where you log in to administer sitebuilder.&lt;br /&gt;
Sitebuilder is hosted on sb-2.nethere.net.  The site is designed/built on&lt;br /&gt;
this server and published to unixweb-7.nethere.net, the only server&lt;br /&gt;
that can host a sitebuilder web site.  The admin login to manage&lt;br /&gt;
sitebuilder is:&lt;br /&gt;
&lt;br /&gt;
Username:  admin&lt;br /&gt;
Password:  N3tH3r31!&lt;br /&gt;
&lt;br /&gt;
==== Cart32 ====&lt;br /&gt;
cart32 information:&lt;br /&gt;
&lt;br /&gt;
Cart32 is hosted on ntweb-6.nethere.net, which is where all of the&lt;br /&gt;
configuration files are located.  The ini file is at&lt;br /&gt;
D:\websites\Cart32cgi\cart32.ini.  This is where IP restrictions on the admin&lt;br /&gt;
panel are set, the admin password can be reset, the time limit restriction&lt;br /&gt;
cleared, etc.&lt;br /&gt;
&lt;br /&gt;
There are four customers that still use cart32:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
https://www.cart.simplyweb.net/lab400/cart/c32web.exe&lt;br /&gt;
https://www.cart.simplyweb.net/nutragenics/cart/c32web.exe&lt;br /&gt;
https://www.cart.simplyweb.net/retrogen/cart/c32web.exe&lt;br /&gt;
https://www.cart.simplyweb.net/stonesculptorssupplies/cart/c32web.exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The client codes are lab400, nutragenics, retrogen, and&lt;br /&gt;
stonesculptorssupplies.  You can reset their passwords through the admin&lt;br /&gt;
panel.&lt;br /&gt;
&lt;br /&gt;
To administer cart32:&lt;br /&gt;
&lt;br /&gt;
Control Panel:  https://www.cart.simplyweb.net/cart/c32web.exe/Admin&lt;br /&gt;
Username:  administrator&lt;br /&gt;
Password:  N3tH3r31!&lt;br /&gt;
Cart Admin Password:  N3tH3r31!&lt;br /&gt;
&lt;br /&gt;
===  Webmail  ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Removing webmail filters ====&lt;br /&gt;
&lt;br /&gt;
All webmail filters are stored in the MySQL database on webmail-1.&lt;br /&gt;
&lt;br /&gt;
Simple SQL commands:&lt;br /&gt;
&lt;br /&gt;
* Choosing a DB&lt;br /&gt;
mysql&amp;gt; use db_name;&lt;br /&gt;
&lt;br /&gt;
* Showing tables in a DB (must be using a DB)&lt;br /&gt;
mysql&amp;gt; show tables;&lt;br /&gt;
&lt;br /&gt;
* Describing a table&#039;s fields&lt;br /&gt;
mysql&amp;gt; describe table_name;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For NetHere, the database is horde_nh.&lt;br /&gt;
For Simply, the database is horde_si.&lt;br /&gt;
For zNET, the database is horde_zn.&lt;br /&gt;
&lt;br /&gt;
1) Log in to MySQL as root, using the normal root password&lt;br /&gt;
  a) # mysql -u root -p&lt;br /&gt;
2) Select the appropriate database&lt;br /&gt;
  a) mysql&amp;gt; use horde_nh&lt;br /&gt;
3) Delete webmail filters.&lt;br /&gt;
  a) mysql&amp;gt; delete from horde_prefs where pref_uid=&#039;&amp;lt;username&amp;gt;&#039; and pref_name=&#039;filters&#039;;&lt;br /&gt;
     Replace &amp;lt;username&amp;gt; with the user&#039;s login.&lt;br /&gt;
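&lt;br /&gt;
Since the delete is irreversible, it is worth running a matching SELECT first to confirm exactly what will be removed (a sketch; &#039;username&#039; is a placeholder):&lt;br /&gt;
&lt;br /&gt;
```sql
-- Sketch: confirm the row exists before deleting it ('username' is a placeholder)
SELECT pref_uid, pref_name FROM horde_prefs
 WHERE pref_uid = 'username' AND pref_name = 'filters';

DELETE FROM horde_prefs
 WHERE pref_uid = 'username' AND pref_name = 'filters';
```
&lt;br /&gt;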
&lt;br /&gt;
&lt;br /&gt;
==== Removing Address book entries ====&lt;br /&gt;
&lt;br /&gt;
1) Log in&lt;br /&gt;
2) Select appropriate db&lt;br /&gt;
3) Find object_id, delete&lt;br /&gt;
   a) mysql&amp;gt; select object_id from turba_objects where object_email=&#039;email@domain.tld&#039;;&lt;br /&gt;
   b) mysql&amp;gt; delete from turba_objects where object_id=&#039;object_id_obtained_previously&#039;;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Repairing webmail address book ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) Export (via webmail, if possible) the address book to a csv file.&lt;br /&gt;
&lt;br /&gt;
2) Delete the customer&#039;s entries from the turba_objects table in the MySQL database:&lt;br /&gt;
  mysql&amp;gt; delete from turba_objects where owner_id=&#039;username&#039;;&lt;br /&gt;
&lt;br /&gt;
3) Import the saved address book via webmail.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Changing webmail From Information ====&lt;br /&gt;
&lt;br /&gt;
1) Login as the customer (http://webmail.nethere.net)&lt;br /&gt;
&lt;br /&gt;
2) Click on Options&lt;br /&gt;
&lt;br /&gt;
3) Click on Personal Information&lt;br /&gt;
&lt;br /&gt;
4) Select either default identity or a new one and click on Edit Your Identities&lt;br /&gt;
&lt;br /&gt;
=== MySQL     ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All unixweb-## boxes have MySQL processes running on them. DB provisioning is done on the same server that hosts the domain.&lt;br /&gt;
For Windows 2000 MySQL provisioning, all DBs are placed on ntdb-2.nethere.net.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== MySQL DB provisioning ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Use https://phpmyadmin.nethere.net &lt;br /&gt;
User: root&lt;br /&gt;
PW: &amp;lt;root db pw&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1) Select the server that the DB needs to be hosted on, typically the same server that the website is hosted on.&lt;br /&gt;
2) Create the DB&lt;br /&gt;
  a) Under &amp;quot;MySQL&amp;quot; - &amp;quot;Create new database&amp;quot; - the DB name is typically the website username; click &amp;quot;Create&amp;quot;&lt;br /&gt;
3) Create the User, set permissions for user on DB&lt;br /&gt;
  a) Click &amp;quot;Home&amp;quot; (upper left), then click &amp;quot;Privileges&amp;quot; (Under &amp;quot;MySQL&amp;quot;)&lt;br /&gt;
  b) Click &amp;quot;Add a new User&amp;quot;; make sure &amp;quot;Any host&amp;quot; is selected. &amp;quot;User name&amp;quot; is typically the website username, and &amp;quot;Password&amp;quot; is typically the website password. !LEAVE &amp;quot;Global Privileges&amp;quot; settings deselected! Click &amp;quot;Go&amp;quot; (bottom right corner) when finished.&lt;br /&gt;
  c) Under &amp;quot;Database-specific privileges&amp;quot; be sure to add the user&#039;s database&lt;br /&gt;
  d) Select DB, then for &amp;quot;Privileges:&amp;quot; make sure the following are checked:&lt;br /&gt;
     Select, Insert, Update, Delete, Create, Alter, Index, Drop, Create Temporary Tables, Lock Tables, References&lt;br /&gt;
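&lt;br /&gt;
The phpMyAdmin steps above correspond to roughly the following SQL (a sketch; db/user names and password are placeholders, and the GRANT ... IDENTIFIED BY form assumes the older MySQL versions in use here):&lt;br /&gt;
&lt;br /&gt;
```sql
-- Sketch of the provisioning steps as raw SQL (names and password are placeholders)
CREATE DATABASE username;
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, INDEX, DROP,
      CREATE TEMPORARY TABLES, LOCK TABLES, REFERENCES
   ON username.* TO 'username'@'%' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
```
&lt;br /&gt;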
  &lt;br /&gt;
&lt;br /&gt;
=== List Server    ===&lt;br /&gt;
&lt;br /&gt;
lists-1.nethere.net&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Mail List provisioning ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Go to the /mailman/bin directory:&lt;br /&gt;
	# cd /mailman/bin&lt;br /&gt;
&lt;br /&gt;
2. Create the list on lists-1:&lt;br /&gt;
	# ./newlist -q \&lt;br /&gt;
		listname@lists.domain.com \&lt;br /&gt;
		mailman-owner@lists.domain.com \&lt;br /&gt;
		password &lt;br /&gt;
&lt;br /&gt;
3. Configure the list using default settings:&lt;br /&gt;
	# ./config_list -i /mailman/data/defaultlist.cfg listname&lt;br /&gt;
&lt;br /&gt;
*** IF A NEW DOMAIN FOLLOW INSTRUCTIONS BELOW ***&lt;br /&gt;
&lt;br /&gt;
NOTE: All files in /etc/mail are controlled by RCS&lt;br /&gt;
&lt;br /&gt;
4. Go to the /etc/mail directory:&lt;br /&gt;
	# cd /etc/mail&lt;br /&gt;
&lt;br /&gt;
5. Add the list domain to the mailertable file:&lt;br /&gt;
	lists.domain.com	mailman:lists.domain.com&lt;br /&gt;
&lt;br /&gt;
6. Add the list domain to the relay-domains file.&lt;br /&gt;
&lt;br /&gt;
7. Add the list domain to the virtuserdomain file.&lt;br /&gt;
&lt;br /&gt;
8. Add the following entries to the virtusertable file:&lt;br /&gt;
	mailman@lists.domain.com	&amp;lt;customer_email_address&amp;gt;&lt;br /&gt;
	mailman-owner@lists.domain.com	mailman@lists.domain.com&lt;br /&gt;
&lt;br /&gt;
9. Run make to rebuild all files:&lt;br /&gt;
	# make&lt;br /&gt;
&lt;br /&gt;
10. Run make to restart sendmail:&lt;br /&gt;
	# make restart&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Backup Server    ===&lt;br /&gt;
&lt;br /&gt;
backup-1 -&amp;gt; odd numbered unix&lt;br /&gt;
backup-2 -&amp;gt; even numbered unix&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On Saturdays, we do a level 0 dump;&lt;br /&gt;
the rest of the week is differential.&lt;br /&gt;
&lt;br /&gt;
Restoration fees are basically $25 per day restored for email, $150 for a website from the cancelled archive, and $25 a day for a website.&lt;br /&gt;
&lt;br /&gt;
Restoration is based on customer request. In order to restore a site/mailbox fully, you must start restoration from the previous level 0 backup and continue until the day after the day requested, since backups are performed in the morning.&lt;br /&gt;
&lt;br /&gt;
General rule is to leave the gzipped file available for 24 hours, after that remove it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Site Restoral ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the site was removed by the web_del script, then as long as it is within 30 days of removal, the gzipped site will be available as /www/archive/www.domain.tld-date_removed.tar.gz&lt;br /&gt;
Extract with tar:&lt;br /&gt;
# tar -zxvpf www.domain.tld-date_removed.tar.gz&lt;br /&gt;
&lt;br /&gt;
If the site needs to be restored from backup, here are the steps&lt;br /&gt;
&lt;br /&gt;
1) To determine where backups are stored, go to /nethere/conf/backup&lt;br /&gt;
&lt;br /&gt;
2) grep for the server within the directory to determine the backup directory used&lt;br /&gt;
i.e. &lt;br /&gt;
# grep unixweb-1 *&lt;br /&gt;
&lt;br /&gt;
3) Go to /backup1/unix/unixweb-1.nethere.net&lt;br /&gt;
  a) you&#039;ll see a bunch of directories, named &amp;lt;date&amp;gt;-&amp;lt;dump_level&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4) Since the backups happen in the morning, typically you&#039;ll need to go to the day before and restore from there&lt;br /&gt;
   e.g. for 11/10&lt;br /&gt;
  a) cd 20041109-3&lt;br /&gt;
  b) restore -if and whatever the filesystem is named&lt;br /&gt;
     (for interactive: add files/dirs, extract, then use 1 for volume #)&lt;br /&gt;
     i.e.&lt;br /&gt;
     restore&amp;gt; add &amp;lt;path_to_dir/files&amp;gt;&lt;br /&gt;
     restore&amp;gt; extract&lt;br /&gt;
     (for volume #: 1)&lt;br /&gt;
     (Set owner: y)&lt;br /&gt;
&lt;br /&gt;
5) After you&#039;ve restored the files, use scp as root on sawfish to copy them off of backup-# and then onto unixweb-#&lt;br /&gt;
  a) (on sawfish - two-step process)&lt;br /&gt;
     # scp backup-1:/tmp/&amp;lt;restored_file&amp;gt; /tmp&lt;br /&gt;
     # scp /tmp/&amp;lt;restored_files&amp;gt; unixweb-1:/tmp&lt;br /&gt;
&lt;br /&gt;
6) Clean up any restored files off of the backup-# server and sawfish&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With the newly restored files, the following steps are necessary to restore a site:&lt;br /&gt;
&lt;br /&gt;
1) Restore www.domain.tld directory to /www&lt;br /&gt;
   # mv /www/archive/www.domain.tld /www&lt;br /&gt;
&lt;br /&gt;
2) Restore configuration files&lt;br /&gt;
  a) Apache: mv /www/archive/nethere/conf/apache/(n)vhosts/www.domain.tld /nethere/conf/apache/conf/(n)vhosts/&lt;br /&gt;
  b) Webalizer: mv /www/archive/nethere/conf/webalizer/unix/www.domain.tld /nethere/conf/webalizer/unix&lt;br /&gt;
&lt;br /&gt;
3) Add user to /etc/master.passwd:&lt;br /&gt;
  a) vipw&lt;br /&gt;
     (go to end of file, read in the master.passwd file from /www/www.domain.tld)&lt;br /&gt;
     :r /www/www.domain.tld/master.passwd&lt;br /&gt;
  b) Remove /www/www.domain.tld/master.passwd&lt;br /&gt;
&lt;br /&gt;
4) Change flags on cgi-bin/ directories:&lt;br /&gt;
   # chflags sunlnk /www/www.domain.tld/htdocs/cgi-bin/&lt;br /&gt;
   # chflags schg /www/www.domain.tld/htdocs/cgi-bin/php*&lt;br /&gt;
&lt;br /&gt;
5) Add configuration file entry to nvhosts/vhosts.conf&lt;br /&gt;
   # co -l (n)vhosts.conf&lt;br /&gt;
   # vi (n)vhosts.conf&lt;br /&gt;
   # ci -u (n)vhosts.conf&lt;br /&gt;
&lt;br /&gt;
6) Check apache, restart; verify apache&lt;br /&gt;
   # apachectl configtest&lt;br /&gt;
   # apachectl graceful&lt;br /&gt;
   # &lt;br /&gt;
&lt;br /&gt;
7) Cleanup remaining restored files&lt;br /&gt;
   # rm -r /www/archive/www&lt;br /&gt;
   # rm -r /www/archive/nethere&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8) Enter zone into DNS on ns1.nethere.net&lt;br /&gt;
  a) zone file&lt;br /&gt;
     # mv /named/archive/domain.tld /named/master&lt;br /&gt;
     # mv /named/archive/domain.tld,v /named/master/RCS&lt;br /&gt;
  b) named.master file &lt;br /&gt;
     # co -l /named/named.master&lt;br /&gt;
     (add zone)&lt;br /&gt;
     # ci -u /named/named.master&lt;br /&gt;
     # cd /named; make new-zone&lt;br /&gt;
     # tail /var/log/named   (look for errors)&lt;br /&gt;
*************************************************&lt;br /&gt;
Note (from Henry Chan): a restore binary compatible with the 4.4BSD dump format is now available.&lt;br /&gt;
To restore, use the following command: restore -c -i -f path_to_archive&lt;br /&gt;
The &amp;quot;-c&amp;quot; flag is what does it.&lt;br /&gt;
(Only applies to backup-1-new... doesn&#039;t work on backup-2 or backup-4.)&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
==== Email Restoral ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Storage path prefixes:&lt;br /&gt;
	/nfs/1 is sndg-netapp-1 (on backup-4 - /backup/hosts/2/e0.sndg-netapp-1.nethere.net)&lt;br /&gt;
	/nfs/2 is sndg-netapp-2 (on backup-3 - /backup/hosts/2/e0.sndg-netapp-2.nethere.net)&lt;br /&gt;
	/nfs/3 is sndg-netapp-3 (on backup-2 - /backup/hosts/2/e0.sndg-netapp-3.nethere.net)&lt;br /&gt;
        /nfs/4 is sndg-netapp-1 (on backup-2 - /backup/hosts/2/e0.sndg-netapp-1.nethere.net)&lt;br /&gt;
        /nfs/5 is sndg-netapp-2 (on backup-1 - /backup/hosts/2/e0.sndg-netapp-2.nethere.net)&lt;br /&gt;
        /nfs/6 is sndg-netapp-3 (on backup-2 - /backup/hosts/2/e0.sndg-netapp-3.nethere.net)&lt;br /&gt;
&lt;br /&gt;
Looking up a mailbox&#039;s Storage Path:&lt;br /&gt;
	You will need to know which directory and NFS server the customer&#039;s mail is &lt;br /&gt;
stored on. Use the Provisioning Tool to get the Storage Path (find the customer&#039;s mail&lt;br /&gt;
account, then click on the Engineering sub-tab under the Email tab). You should end up&lt;br /&gt;
with something like &amp;quot;/nfs/3/nh/h/t/htchan/Maildir&amp;quot;. Match the prefix of the storage&lt;br /&gt;
directory with an NFS server (see prefixes above).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1) If restoring a recently deleted mail account:&lt;br /&gt;
	a) Look for the archive on mailbox-1 in /nfs/archive/mail/{platform}/{username}.{date}-{PID}.tar.gz. &lt;br /&gt;
           If it doesn&#039;t exist, it&#039;s been too long and the only way to get email back is to restore from backup.&lt;br /&gt;
	b) Make sure the account is re-created in Provisioning Tool and look up the storage path &lt;br /&gt;
           (see &amp;quot;Looking up a mailbox&#039;s Storage Path&amp;quot; above).&lt;br /&gt;
	c) Extract the archive to a temporary directory:&lt;br /&gt;
		Sample command:&lt;br /&gt;
			cd /tmp; tar xzvpf /nfs/archive/mail/nh/zella.20090902-7508.tar.gz&lt;br /&gt;
		Sample output:&lt;br /&gt;
			nfs/2/nh/z/e/zella/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/tmp/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/new/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/cur/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/maildirsize&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Trash/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Trash/tmp/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Trash/new/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Trash/cur/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Trash/maildirfolder&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Drafts/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Drafts/tmp/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Drafts/new/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Drafts/cur/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Drafts/maildirfolder&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Sent Items/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Sent Items/tmp/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Sent Items/new/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Sent Items/cur/&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/.Sent Items/maildirfolder&lt;br /&gt;
			nfs/2/nh/z/e/zella/Maildir/courierpop3dsizelist&lt;br /&gt;
		Note:&lt;br /&gt;
			In the above output, notice that the Maildir is located in &amp;quot;nfs/2/nh/z/e/zella&amp;quot;, it will be used in the next step.&lt;br /&gt;
	d) Copy the Maildir files to the new mailbox storage path using tar (tar does a better job at preserving &lt;br /&gt;
           things like symbolic links and permissions than cp or mv):&lt;br /&gt;
		Sample command:&lt;br /&gt;
			cd /tmp/nfs/2/nh/z/e/zella; tar cf - Maildir | (cd /nfs/3/nh/z/e/zella; tar xvpf -)&lt;br /&gt;
		Output:&lt;br /&gt;
			You should see the files listed as they are copied. Basically, similar to what you &lt;br /&gt;
                        saw in step &amp;quot;c&amp;quot; above but without the &amp;quot;nfs/2/nh/z/e/zella&amp;quot; prefix.&lt;br /&gt;
		Note:&lt;br /&gt;
			The command format is basically:&lt;br /&gt;
				cd /tmp/{restored-maildir-path-see-note-in-step-c}; tar cf - Maildir | (cd {new-storage-path-without-Maildir}; tar xvpf -)&lt;br /&gt;
	e) Clean up temporary directory:&lt;br /&gt;
		Sample command:&lt;br /&gt;
			cd /tmp; rm -rf nfs&lt;br /&gt;
&lt;br /&gt;
2) If restoring a zfs mailbox (from the last week) to a particular date&#039;s backup:&lt;br /&gt;
        a) login to the correct sndg-netapp-[1/2/3]-new&lt;br /&gt;
        b) get the files from the correct zfs snapshot&lt;br /&gt;
           Sample commands:&lt;br /&gt;
             cd /tank0/mail/.zfs/snapshot/20180215-0/nh/c/h/christineat&lt;br /&gt;
             ls -l&lt;br /&gt;
           cd to the desired directory and copy the files over to the user&#039;s mailbox&lt;br /&gt;
             cp -pR /tank0/mail/.zfs/snapshot/20180215-0/nh/c/h/christineat/Maildir/cur /tank0/mail/nh/c/h/christineat/Maildir/&lt;br /&gt;
&lt;br /&gt;
3) If restoring a mailbox to a particular date&#039;s backup:&lt;br /&gt;
	a) Look up the storage path to the customer&#039;s mailbox and determine which backup server the &lt;br /&gt;
           dump file is on (see &amp;quot;Looking up a mailbox&#039;s Storage Path&amp;quot; and &amp;quot;Storage path prefixes&amp;quot; above).&lt;br /&gt;
	b) Restore the level 0 and any incremental backups (in order) to /tmp on the backup server:&lt;br /&gt;
		Sample commands:&lt;br /&gt;
			cd /tmp&lt;br /&gt;
			restore4x -if /backup/hosts/2/e0.sndg-netapp-2.nethere.net/20090912-0/mail.dump&lt;br /&gt;
			chflags -R 0 /tmp&lt;br /&gt;
			restore4x -if /backup/hosts/2/e0.sndg-netapp-2.nethere.net/20090913-1/mail.dump&lt;br /&gt;
			chflags -R 0 /tmp&lt;br /&gt;
				...skipped repetitive stuff here...&lt;br /&gt;
			restore4x -if /backup/hosts/2/e0.sndg-netapp-2.nethere.net/20090918-6/mail.dump&lt;br /&gt;
			chflags -R 0 /tmp&lt;br /&gt;
		Note:&lt;br /&gt;
			After each restore, we need to recursively remove all flags from /tmp to eliminate &lt;br /&gt;
                       the immutable flag that gets set on the files (this happens only on NetApp dumps... who knows why).&lt;br /&gt;
	c) Create a new tar of the Maildir directory so that it can be copied to mailbox-1 for further processing:&lt;br /&gt;
		Sample command:&lt;br /&gt;
			cd /tmp/nh/z/e/zella; tar cf /tmp/archive.tar Maildir&lt;br /&gt;
	d) Clean up /tmp:&lt;br /&gt;
		Sample command:&lt;br /&gt;
			rm -rf /tmp/nh&lt;br /&gt;
		Note:&lt;br /&gt;
			Depending on the platform, it might be /tmp/nh, /tmp/si, or /tmp/zn.&lt;br /&gt;
	e) Copy the /tmp/archive.tar file to /tmp on mailbox-1.&lt;br /&gt;
	f) On mailbox-1, extract the Maildir archive on top of what they already have:&lt;br /&gt;
		Sample command:&lt;br /&gt;
			cd /nfs/2/nh/z/e/zella; tar xvpf /tmp/archive.tar&lt;br /&gt;
	g) Remove /tmp/archive.tar.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
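&lt;br /&gt;
The tar-pipe copy used in step 1d can be exercised on throwaway directories first to see exactly what it does (a sketch using mktemp paths, not real mailbox storage):&lt;br /&gt;
&lt;br /&gt;
```shell
# Demonstrate the "tar cf - Maildir | (cd dst; tar xpf -)" copy on scratch dirs.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/Maildir/cur" "$src/Maildir/new" "$src/Maildir/tmp"
touch "$src/Maildir/cur/1600000000.M1.example:2,S"
# The pipe preserves permissions and layout better than cp or mv would
( cd "$src"; tar cf - Maildir ) | ( cd "$dst"; tar xpf - )
copied=$(ls "$dst/Maildir/cur")
echo "$copied"
rm -rf "$src" "$dst"
```
&lt;br /&gt;
The same pattern scales to a full Maildir; only the source and destination paths change.&lt;br /&gt;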
&lt;br /&gt;
==== Beginning of old instructions ====&lt;br /&gt;
Use the same techniques as site restoration, with the following exceptions:&lt;br /&gt;
&lt;br /&gt;
1) Since everything is in maildir format, you have to reassemble things in /tmp/&amp;lt;username&amp;gt;, tar it up, and then restore it on any of the mailbox machines&lt;br /&gt;
   a) using the following syntax: &lt;br /&gt;
      # tar -zcpf /tmp/username.tgz yyyymmdd-#/platform yyyymmdd-#/platform yyyymmdd-#/platform&lt;br /&gt;
      i.e.&lt;br /&gt;
      # tar -zcpf /tmp/username.tgz 20050915-5/nh 20050914-4/nh 20050910-0/nh&lt;br /&gt;
&lt;br /&gt;
2) In regards to mail spools, you&#039;ll need to use the chflags command to adjust the flags on the files. For whatever reason, they are stored with a system immutable flag&lt;br /&gt;
   a) Do chflags -R noschg &amp;lt;dir&amp;gt; on the restored directory before copying the files via sawfish; otherwise, the files cannot be deleted:&lt;br /&gt;
      i.e. &lt;br /&gt;
      # chflags -R noschg /backup1/unix/sndg-netapp-1-e2b.nethere.net/20050105-4/&lt;br /&gt;
   b) After tar/gzipping, remove the restored directories&lt;br /&gt;
      i.e&lt;br /&gt;
      # rm -r /backup1/unix/sndg-netapp-1-e2b.nethere.net/20050105-4/nh&lt;br /&gt;
&lt;br /&gt;
3) You can restore quickly with a script on mailbox-1 (or mailbox-2) using the NH script &amp;quot;restoremail&amp;quot;&lt;br /&gt;
   a) # /nethere/sbin/restoremail -h for usage&lt;br /&gt;
      i.e. &lt;br /&gt;
      # restoremail -p nh -f nhusername.tgz -n 2 -u nhusername&lt;br /&gt;
*** End of old instructions ***&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Backup locations *&lt;br /&gt;
&lt;br /&gt;
backup-1:&lt;br /&gt;
backup-3.nethere.net&lt;br /&gt;
fpweb-1.nethere.net&lt;br /&gt;
home-1.nethere.net&lt;br /&gt;
koi.nethere.net&lt;br /&gt;
marmaduke.inetworld.net&lt;br /&gt;
ntdb-1.nethere.net&lt;br /&gt;
ntweb-1.nethere.net&lt;br /&gt;
ntweb-3.nethere.net&lt;br /&gt;
ntweb-5.nethere.net&lt;br /&gt;
ntweb-7.nethere.net&lt;br /&gt;
phoenix.nethere.net&lt;br /&gt;
unixweb-1.nethere.net&lt;br /&gt;
unixweb-3.nethere.net&lt;br /&gt;
unixweb-5.nethere.net&lt;br /&gt;
unixweb-7.nethere.net&lt;br /&gt;
shark.nethere.net&lt;br /&gt;
tetra.nethere.net&lt;br /&gt;
wms-1.nethere.net&lt;br /&gt;
&lt;br /&gt;
backup-2:&lt;br /&gt;
andromeda.nethere.net&lt;br /&gt;
backup-4.nethere.net&lt;br /&gt;
eel.nethere.net&lt;br /&gt;
fpweb-2.nethere.net&lt;br /&gt;
ntdb-2.nethere.net&lt;br /&gt;
ntweb-2.nethere.net&lt;br /&gt;
ntweb-4.nethere.net&lt;br /&gt;
ntweb-6.nethere.net&lt;br /&gt;
unixweb-2.nethere.net&lt;br /&gt;
unixweb-4.nethere.net&lt;br /&gt;
unixweb-6.nethere.net&lt;br /&gt;
ds.znet.com&lt;br /&gt;
mx1.znet.com&lt;br /&gt;
mx2.znet.com&lt;br /&gt;
mx3.znet.com&lt;br /&gt;
dmx.znet.com&lt;br /&gt;
la.znet.com&lt;br /&gt;
uf.znet.com&lt;br /&gt;
&lt;br /&gt;
backup-3:&lt;br /&gt;
backup-1.nethere.net&lt;br /&gt;
lists-1.nethere.net&lt;br /&gt;
mailbox-1.mail.nethere.net&lt;br /&gt;
mailbox-3.mail.nethere.net&lt;br /&gt;
mta-1.mail.nethere.net&lt;br /&gt;
mx-1.nethere.net&lt;br /&gt;
nscache-1.nethere.net&lt;br /&gt;
nsrbl-1.nethere.net&lt;br /&gt;
pegasus.nethere.net&lt;br /&gt;
scan-1.mail.nethere.net&lt;br /&gt;
sndg-netapp-2.nethere.net&lt;br /&gt;
relay-1.mail.nethere.net&lt;br /&gt;
webmail-1.mail.nethere.net&lt;br /&gt;
&lt;br /&gt;
backup-4:&lt;br /&gt;
ahi.nethere.net&lt;br /&gt;
backup-2.nethere.net&lt;br /&gt;
cp-1.nethere.net&lt;br /&gt;
cygnus.nethere.net&lt;br /&gt;
dragon.nethere.net&lt;br /&gt;
lisa.nethere.net&lt;br /&gt;
mailbox-2.mail.nethere.net&lt;br /&gt;
mta-2.mail.nethere.net&lt;br /&gt;
news-1.nethere.net&lt;br /&gt;
nscache-2.nethere.net&lt;br /&gt;
nsrbl-2.nethere.net&lt;br /&gt;
sawfish.nethere.net&lt;br /&gt;
scan-2.mail.nethere.net&lt;br /&gt;
scribe.nethere.net&lt;br /&gt;
sndg-netapp-1.nethere.net&lt;br /&gt;
relay-2.mail.nethere.net&lt;br /&gt;
urchin.nethere.net &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Mail servers ===&lt;br /&gt;
==== Organization ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
                                NetHere Mail Servers&lt;br /&gt;
                                ====================&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
         Customers              Inbound Mail            Outbound Mail&lt;br /&gt;
     /\          ||                 ||                    /\&lt;br /&gt;
     ||          ||                 || smtp               ||&lt;br /&gt;
     ||          ||                 \/                    ||&lt;br /&gt;
     ||          ||             ServerIron                ||&lt;br /&gt;
     ||          ||                 ||                    ||&lt;br /&gt;
     ||          ||                 ||                    ||&lt;br /&gt;
     ||          ||   smtp          \/                    ||&lt;br /&gt;
     ||          || =========&amp;gt;  mta-1    mta-2  ====&amp;gt;   relay-1  &amp;lt;====  unixweb-* servers&lt;br /&gt;
     ||          ||                 ||   /\             relay-2         ntweb servers&lt;br /&gt;
http ||     pop3 ||                 ||   ||&lt;br /&gt;
     ||     imap ||                 \/   +--------------+&lt;br /&gt;
     ||          ||             ServerIron              |&lt;br /&gt;
     ||          ||                 ||                  |&lt;br /&gt;
     ||          ||                 ||                  |&lt;br /&gt;
     ||          ||                 \/                  |&lt;br /&gt;
     ||          ||             nsrbl-1   nsrbl-2       |&lt;br /&gt;
     ||          ||         (spamhaus, dcc blacklist)   |&lt;br /&gt;
     ||          ||                 ||                  |&lt;br /&gt;
     ||          \/                 \/                  |&lt;br /&gt;
     \/         mailbox-1  &amp;lt;=== scan-1    scan-2        |&lt;br /&gt;
  webmail &amp;lt;===&amp;gt; mailbox-2    (sendmail/amavisd/sophos)  |&lt;br /&gt;
          imap  mailbox-3                               |&lt;br /&gt;
                 /\     /\                              |&lt;br /&gt;
                 ||     ||                              \/&lt;br /&gt;
             nfs ||     +---------------------------&amp;gt;  mta-db&lt;br /&gt;
                 ||                                    (LDAP)&lt;br /&gt;
                 \/&lt;br /&gt;
             sndg-netapp-1&lt;br /&gt;
             sndg-netapp-2&lt;br /&gt;
             sndg-netapp-3&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Overview ====&lt;br /&gt;
mta-1.mail.nethere.net- Primary servers for all inbound and outbound mail.&lt;br /&gt;
mta-2.mail.nethere.net&lt;br /&gt;
&lt;br /&gt;
nsrbl-1.nethere.net- Realtime blacklist check using dccd and spamhaus.  The mta servers query the ip address of the mail server against the blacklist before transferring the email to the scanning servers.&lt;br /&gt;
&lt;br /&gt;
nsrbl-2.nethere.net&lt;br /&gt;
 &lt;br /&gt;
scan-1.mail.nethere.net- Scans email with Sophos (currently disabled) and SpamAssassin before sending it to the mailbox servers.&lt;br /&gt;
scan-2.mail.nethere.net&lt;br /&gt;
&lt;br /&gt;
mailbox-1.mail.nethere.net- Front end server for the mail directories.  Queries LDAP server for miscellaneous customer information including the directory the email should be stored in, passwords, spam sensitivity levels, etc. before delivering email to appropriate directory.&lt;br /&gt;
mailbox-2.mail.nethere.net&lt;br /&gt;
mailbox-3.mail.nethere.net&lt;br /&gt;
&lt;br /&gt;
Mail is stored in maildir format:&lt;br /&gt;
&lt;br /&gt;
/nfs/&amp;lt;nfs_number&amp;gt;/&amp;lt;platform&amp;gt;/u/s/username&lt;br /&gt;
&lt;br /&gt;
sndg-netapp-1-new.nethere.net- Solaris system using zfs, set up as an nfs mount on mailbox-1, -2, and -3.  Email for all customers is stored on these systems.&lt;br /&gt;
sndg-netapp-2-new.nethere.net&lt;br /&gt;
sndg-netapp-3-new.nethere.net&lt;br /&gt;
&lt;br /&gt;
mx-1.nethere.net- spooling mail server&lt;br /&gt;
&lt;br /&gt;
mailx-1.nethere.net- all outbound mail originating from a web server is sent out through this server.  All web servers are set to use mailx.nethere.net, which resolves (through the ServerIron) to either mailx-1.nethere.net or mailx-2.nethere.net.  Since we shut down mailx-2, all outbound mail from the web servers goes through mailx-1.  This was done to prevent all customer email from being blacklisted as a result of a single web site being compromised.&lt;br /&gt;
&lt;br /&gt;
mta-db.mail.nethere.net- database server for the mta servers.  All postfix databases are stored here.&lt;br /&gt;
&lt;br /&gt;
webmail.nethere.net- Hosts the webmail program for webmail.nethere.net, webmail.simplyweb.net and webmail.znet.net.&lt;br /&gt;
&lt;br /&gt;
==== Client mail settings ====&lt;br /&gt;
&lt;br /&gt;
Inbound Mail Server:&lt;br /&gt;
&lt;br /&gt;
POP3&lt;br /&gt;
pop3.nethere.net&lt;br /&gt;
pop3.znet.net&lt;br /&gt;
pop3.simplyweb.net&lt;br /&gt;
&lt;br /&gt;
IMAP&lt;br /&gt;
imap.nethere.net&lt;br /&gt;
imap.znet.net&lt;br /&gt;
imap.simplyweb.net&lt;br /&gt;
&lt;br /&gt;
Outbound Mail server:&lt;br /&gt;
&lt;br /&gt;
mail.nethere.net&lt;br /&gt;
mail.znet.net&lt;br /&gt;
mail.simplyweb.net&lt;br /&gt;
&lt;br /&gt;
Webmail:&lt;br /&gt;
&lt;br /&gt;
webmail.nethere.net&lt;br /&gt;
webmail.znet.net&lt;br /&gt;
webmail.simplyweb.net&lt;br /&gt;
&lt;br /&gt;
Control Panel:&lt;br /&gt;
&lt;br /&gt;
cp.nethere.net&lt;br /&gt;
cp.znet.net&lt;br /&gt;
cp.simplyweb.net&lt;br /&gt;
&lt;br /&gt;
==== nsrbl Disk Quota Warning ====&lt;br /&gt;
&lt;br /&gt;
In the event you get a Nagios alert regarding disk space on the two nsrbl servers, &lt;br /&gt;
it is most likely the result of the dccd databases getting too large.  &lt;br /&gt;
To clear out space, log into each nsrbl server and execute the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
df -h&lt;br /&gt;
cd /var/dcc/libexec&lt;br /&gt;
./stop-dccd&lt;br /&gt;
cd /var/dcc&lt;br /&gt;
rm dcc_db*&lt;br /&gt;
sync&lt;br /&gt;
sync&lt;br /&gt;
cd /var/dcc/libexec&lt;br /&gt;
./start-dccd&lt;br /&gt;
ps auxw | grep dccd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Update outbound quota for a single ip address====&lt;br /&gt;
&lt;br /&gt;
We restrict the total number of emails that a customer can send to 2000 recipients per 24-hour period.  This is tracked by the IP address of the computer that connects to the mta server when it sends an outbound email.  The IP address and count are stored in the postfix database server, mta-db.mail.nethere.net.  To increase the quota for a single IP address, you can log into either mta-1 or mta-2, or console directly into the database server.  To use one of the mta servers:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1.  Log onto mta-1.mail&lt;br /&gt;
2.  Switch to mysql&lt;br /&gt;
$mysql -h 10.0.0.69 -u policyd_outbound -p&lt;br /&gt;
username:  policyd_outbound&lt;br /&gt;
password:  p0stf1x!&lt;br /&gt;
3.  Review database information&lt;br /&gt;
mysql&amp;gt;show databases;&lt;br /&gt;
4.  Switch to the policyd database. &lt;br /&gt;
mysql&amp;gt;use policyd;&lt;br /&gt;
5.  Review table header information&lt;br /&gt;
mysql&amp;gt;show tables;&lt;br /&gt;
mysql&amp;gt;describe throttle;&lt;br /&gt;
6.  Verify the ip address is above quota&lt;br /&gt;
mysql&amp;gt;select * from throttle where _from=&#039;xxx.xxx.xxx.xxx&#039;;   #this is the ip address of the account you are increasing.&lt;br /&gt;
7.  Increase the maximum number of recipients the ip address can send to&lt;br /&gt;
mysql&amp;gt;update throttle SET _rcpt_max = _rcpt_max + 20000 where _from=&#039;24.249.205.66&#039;;&lt;br /&gt;
8.  Verify&lt;br /&gt;
mysql&amp;gt;select * from throttle where _from=&#039;xxx.xxx.xxx.xxx&#039;;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
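Step 7 above can also be run non-interactively from the shell. A sketch that builds the same UPDATE statement, using a placeholder TEST-NET address and the credentials listed in the steps:&lt;br /&gt;

```shell
# Build the quota-bump UPDATE from steps 6-8 above. The IP below is a
# placeholder (TEST-NET); substitute the customer's IP address.
ip="203.0.113.5"
bump=20000
sql="UPDATE throttle SET _rcpt_max = _rcpt_max + ${bump} WHERE _from='${ip}';"
echo "$sql"
# On mta-1.mail you would then hand it to the mysql client, e.g.:
#   mysql -h 10.0.0.69 -u policyd_outbound -p policyd -e "$sql"
```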
&lt;br /&gt;
====Remove a blocked mail server ====&lt;br /&gt;
&lt;br /&gt;
The message in maillog identifies where the mail is being blocked:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Relaying denied due to excessive spam                             admin-2:/dist/files/nhmta/nethere/conf/postfix/common/client_reject&lt;br /&gt;
Sender address rejected: Relaying denied due to Spam              admin-2:/dist/files/nhmta/nethere/conf/postfix/common/sender_reject&lt;br /&gt;
Relaying denied due to SPAM                                       policyd???&lt;br /&gt;
Client host rejected: ... listed at zen.spamhaus.org=127.0.0.2    nsrbl-1:/named/rbldns/zones/*.spamhaus.org&lt;br /&gt;
status=sent (250 2.7.1 Ok, discarded, id=95886-33 - SPAM)         user mail protection in provisioning????&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
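When triaging, a small shell case statement can map a maillog reject message to the table above. A sketch (the classify helper and the sample line are illustrative only, not an existing tool):&lt;br /&gt;

```shell
# Map a maillog reject message to the blocking list, per the table above.
classify() {
  case "$1" in
    *"Relaying denied due to excessive spam"*) echo "client_reject" ;;
    *"Sender address rejected: Relaying denied due to Spam"*) echo "sender_reject" ;;
    *"listed at zen.spamhaus.org"*) echo "nsrbl rbldns zone" ;;
    *) echo "unknown" ;;
  esac
}
classify "NOQUEUE: reject: RCPT from mail.example.com: 554 Relaying denied due to excessive spam"
# prints "client_reject"
```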
&lt;br /&gt;
====Remove a mail server from policyd blacklist====&lt;br /&gt;
&lt;br /&gt;
The IP address of a mail server can be placed on a blacklist for several reasons: too high a rate of inbound email from a single IP address, an incorrectly formatted HELO, etc.  In the event you need to remove a mail server&#039;s IP from the policyd blacklist, you can log into either mta server or connect directly to the policyd database server, mta-db.mail.nethere.net.  To remove the IP address using an mta server:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1.  Log onto mta-1.mail&lt;br /&gt;
2.  Switch to mysql&lt;br /&gt;
$mysql -h 10.0.0.69 -u policyd_inbound -p policyd&lt;br /&gt;
username:  policyd_inbound&lt;br /&gt;
password:  p0stf1x!&lt;br /&gt;
3.  Display database information&lt;br /&gt;
mysql&amp;gt;show databases;&lt;br /&gt;
4.  Use policyd database&lt;br /&gt;
mysql&amp;gt;use policyd;&lt;br /&gt;
5.  Display table information&lt;br /&gt;
mysql&amp;gt;show tables;&lt;br /&gt;
mysql&amp;gt;describe blacklist;&lt;br /&gt;
6.  Verify ip address has been blacklisted&lt;br /&gt;
mysql&amp;gt;select * from blacklist where _blacklist=&#039;xxx.xxx.xxx.xxx&#039;; &lt;br /&gt;
7.  Delete the entry&lt;br /&gt;
mysql&amp;gt;delete from blacklist where _blacklist=&#039;xxx.xxx.xxx.xxx&#039;;&lt;br /&gt;
8.  Verify&lt;br /&gt;
mysql&amp;gt; select * from blacklist where _blacklist=&#039;xxx.xxx.xxx.xxx&#039;;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The suspected mail server may also be in the helo table.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mysql&amp;gt; describe helo ;&lt;br /&gt;
1.  Check the IP address&lt;br /&gt;
mysql&amp;gt; select * from helo where _host=&#039;xxx.xxx.xxx.xxx&#039; ;&lt;br /&gt;
2.  Check the server helo name &lt;br /&gt;
mysql&amp;gt; select * from helo where _helo like &#039;%server%&#039; ;&lt;br /&gt;
3.  Delete the entry&lt;br /&gt;
mysql&amp;gt; delete from helo where _host=&#039;xxx.xxx.xxx.xxx&#039; ;&lt;br /&gt;
mysql&amp;gt; delete from helo where _helo like &#039;%server%&#039; ;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Remove a mail server from spamhaus blacklist====&lt;br /&gt;
&lt;br /&gt;
ssh to nsrbl-1.mail  and nsrbl-2.mail&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
cd /named/rbldns/zones &lt;br /&gt;
&lt;br /&gt;
# find which file the IP address is in&lt;br /&gt;
&lt;br /&gt;
grep &amp;lt;IP Addr&amp;gt; *.spamhaus.org&lt;br /&gt;
co -l &amp;lt;xxx.spamhaus.org&amp;gt;&lt;br /&gt;
vi xxx.spamhaus.org&lt;br /&gt;
ci -u &amp;lt;xxx.spamhaus.org&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
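A non-interactive stand-in for the vi step: delete the line for one IP with sed. The file and IP here are scratch stand-ins; on nsrbl-1/2 wrap the edit in the co -l / ci -u pair exactly as shown above (GNU sed assumed for -i):&lt;br /&gt;

```shell
# Remove the line for one IP from a zone file.
ip="203.0.113.5"
zone=$(mktemp)                        # scratch stand-in for xxx.spamhaus.org
printf '203.0.113.4\n203.0.113.5\n203.0.113.6\n' > "$zone"
sed -i "/^${ip//./\\.}\$/d" "$zone"   # escape dots; anchor both ends
remaining=$(cat "$zone")
echo "$remaining"
rm -f "$zone"
```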
&lt;br /&gt;
====Add ip address to client_reject====&lt;br /&gt;
&lt;br /&gt;
The client_reject list is one of the ways we combat spam.  If we receive spam complaints about a mail server or otherwise identify a mail server as a source of spam, we can add it to the client_reject list.  An email sent from a mail server on the client_reject list is rejected with a notice saying &amp;quot;Relaying denied due to excessive spam&amp;quot;. To add an ip address to the list, we use admin-2.  Once the list is updated on admin-2, it is pushed out to mta-1 and mta-2.  To add an ip address on admin-2:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Log into admin-2.nethere.net.&lt;br /&gt;
bash-3.00# cd /dist/files/nhmta/nethere/conf/postfix/common&lt;br /&gt;
bash-3.00# co -l client_reject&lt;br /&gt;
RCS/client_reject,v  --&amp;gt;  client_reject&lt;br /&gt;
bash-3.00# vi client_reject&lt;br /&gt;
&lt;br /&gt;
-Add the ip address to the bottom of the list in the appropriate format.  If you want to reject any email from the mail server, you would use:&lt;br /&gt;
xxx.xxx.xxx      REJECT Relaying denied due to excessive spam&lt;br /&gt;
where xxx.xxx.xxx is the /24 subnet of the mail server the spam originated from.  Do not place a period after the last octet.  For example, to add a /16 subnet you would use:&lt;br /&gt;
xxx.xxx          REJECT Relaying denied due to excessive spam&lt;br /&gt;
The REJECT key word tells postfix to reject the email.  If you want to allow email from a /24 subnet, use the following format:&lt;br /&gt;
xxx.xxx.xxx      OK&lt;br /&gt;
This tells postfix to accept email from that subnet.&lt;br /&gt;
&lt;br /&gt;
bash-3.00# ci -u client_reject&lt;br /&gt;
RCS/client_reject,v  &amp;lt;--  client_reject&lt;br /&gt;
new revision: 1.286; previous revision: 1.285&lt;br /&gt;
enter log message, terminated with single &#039;.&#039; or end of file:&lt;br /&gt;
&amp;gt;&amp;gt; .&lt;br /&gt;
done&lt;br /&gt;
bash-3.00# make&lt;br /&gt;
/nethere/software/nhmta/postfix-2.4.5/sbin/postmap -C . client_reject&lt;br /&gt;
mv client_reject.db maps/hash/client_reject.db&lt;br /&gt;
bash-3.00# cd /dist/rdist&lt;br /&gt;
bash-3.00# gmake nhmta-update&lt;br /&gt;
updating host mta-1.mail.nethere.net&lt;br /&gt;
updating: /dist/files/nhmta//nethere/conf/postfix/common/maps/hash/client_reject.db&lt;br /&gt;
updating: /dist/files/nhmta//nethere/conf/postfix/common/client_reject&lt;br /&gt;
updating host mta-2.mail.nethere.net&lt;br /&gt;
updating: /dist/files/nhmta//nethere/conf/postfix/common/maps/hash/client_reject.db&lt;br /&gt;
updating: /dist/files/nhmta//nethere/conf/postfix/common/client_reject&lt;br /&gt;
bash-3.00#&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
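The entry format by itself can be sketched like this (appending to a scratch file here; the RCS checkout, make, and rdist push still have to be done exactly as above):&lt;br /&gt;

```shell
# Append a /24 reject entry in the client_reject format described above.
f=$(mktemp)                 # scratch stand-in for client_reject
subnet="192.0.2"            # first three octets of the /24, no trailing dot
printf '%s\t\tREJECT Relaying denied due to excessive spam\n' "$subnet" >> "$f"
entry=$(cat "$f")
echo "$entry"
rm -f "$f"
```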
&lt;br /&gt;
====Free mysql disk space on mta-db.mail.nethere.net====&lt;br /&gt;
&lt;br /&gt;
In response to the Nagios alert &amp;quot;[Nagios] PROBLEM alert - mta-db.mail.nethere.net/Disk mysql is WARNING&amp;quot;, clear space within the mysql database as follows:&lt;br /&gt;
&lt;br /&gt;
1.  ssh to mta-db.mail.nethere.net.&lt;br /&gt;
&lt;br /&gt;
2.  Log into mysql and execute the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@mta-db user]# mysql -uroot -p&lt;br /&gt;
Enter password:  dB@dm1N!&lt;br /&gt;
mysql&amp;gt; show databases;&lt;br /&gt;
mysql&amp;gt; use policyd;&lt;br /&gt;
mysql&amp;gt; show tables;&lt;br /&gt;
mysql&amp;gt; describe helo;&lt;br /&gt;
mysql&amp;gt; describe throttle;&lt;br /&gt;
mysql&amp;gt; describe throttle_from_instance;&lt;br /&gt;
mysql&amp;gt; select * from throttle_from_instance limit 10;&lt;br /&gt;
mysql&amp;gt; select now();&lt;br /&gt;
mysql&amp;gt; select unix_timestamp(now());&lt;br /&gt;
mysql&amp;gt; select count(*) from throttle_from_instance where _expire &amp;gt; 1426534697;&lt;br /&gt;
mysql&amp;gt; select max(_expire) from throttle_from_instance;&lt;br /&gt;
mysql&amp;gt; truncate table throttle_from_instance;&lt;br /&gt;
mysql&amp;gt; select count(*) from throttle_from_instance;&lt;br /&gt;
mysql&amp;gt; optimize table throttle_from_instance;&lt;br /&gt;
mysql&amp;gt; quit&lt;br /&gt;
[root@mta-db user]# df -h&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Errors ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Common webmail errors ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Error:&lt;br /&gt;
--cut--&lt;br /&gt;
Fatal error: Call to undefined function: applicatio€”¬p() in&lt;br /&gt;
/www/webmail.nethere.net/htdocs/x/m/templates/message/navbar.inc on line 7&lt;br /&gt;
--cut--&lt;br /&gt;
&lt;br /&gt;
Solution: Restart apache on webmail-1 (apachectl restart)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Error:&lt;br /&gt;
--cut--&lt;br /&gt;
ERROR There was an error sending your message: unable to add recipient&lt;br /&gt;
[webhosting@nethere.com]: Invalid response code received from server&lt;br /&gt;
--cut--&lt;br /&gt;
&lt;br /&gt;
Solution: More than likely DNS/domain issue, but check mta-1.nethere.net to be sure:&lt;br /&gt;
--cut--&lt;br /&gt;
Aug 30 15:08:54 mta-1 nh/smtpd[50501]: NOQUEUE: reject: RCPT from webmail-1.mail.nethere.net[66.63.128.181]: 450 &amp;lt;orders@mulligrins.com&amp;gt;: Sender address rejected: Domain not found; from=&amp;lt;orders@mulligrins.com&amp;gt; to=&amp;lt;webhosting@nethere.com&amp;gt; proto=ESMTP helo=&amp;lt;webmail.nethere.net&amp;gt;&lt;br /&gt;
--cut--&lt;br /&gt;
&lt;br /&gt;
Problem: Webmail shows &amp;quot;1-5&amp;quot; messages but the inbox appears empty. POP reports that messages #1,2,3,4,5 could not be retrieved (server response error: cannot open the message file; it&#039;s gone).&lt;br /&gt;
&lt;br /&gt;
Solution: Ownership issue&lt;br /&gt;
&lt;br /&gt;
1) Use the provisioning tool to determine mailbox location:&lt;br /&gt;
  a) Click on email account, Engineering&lt;br /&gt;
  b) Location is the &amp;quot;Storage Path:&amp;quot; &lt;br /&gt;
&lt;br /&gt;
2) Log in to either mail server then change ownership of maildir to mailuser:mailuser&lt;br /&gt;
  # chown -R mailuser:mailuser /nfs/#/platform/u/s/username&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== DEPRECATED INFORMATION ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Webservers ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Allowing SSI (Server Side Includes) ====&lt;br /&gt;
&lt;br /&gt;
***** NOTE: This has been DEPRECATED, all servers now allow SSI by default *****&lt;br /&gt;
&lt;br /&gt;
We do not allow EXEC permissions for SSI.&lt;br /&gt;
&lt;br /&gt;
1) Check out Apache config for www.domain.tld in /nethere/conf/apache/conf/&amp;lt;vhosts,nvhosts&amp;gt;&lt;br /&gt;
   # co -l www.domain.tld(.common)  &lt;br /&gt;
&lt;br /&gt;
2) Add the following lines under the &amp;lt;Directory&amp;gt; directive&lt;br /&gt;
--cut--&lt;br /&gt;
AddType text/html .shtml&lt;br /&gt;
AddHandler server-parsed .shtml&lt;br /&gt;
--cut--&lt;br /&gt;
&lt;br /&gt;
3) Edit the &amp;quot;Options&amp;quot; line to be the following:&lt;br /&gt;
--cut--&lt;br /&gt;
Options Indexes FollowSymLinks IncludesNOEXEC&lt;br /&gt;
--cut--&lt;br /&gt;
&lt;br /&gt;
4) Add a DirectoryIndex line above the &amp;lt;Directory&amp;gt; directive with the following:&lt;br /&gt;
--cut--&lt;br /&gt;
DirectoryIndex index.shtml index.html index.htm home.html home.shtml index.php&lt;br /&gt;
--cut--&lt;br /&gt;
&lt;br /&gt;
** In summary, the config should look something like this **&lt;br /&gt;
--cut--&lt;br /&gt;
DocumentRoot &amp;quot;/www/www.domain.tld/htdocs&amp;quot;&lt;br /&gt;
  DirectoryIndex index.shtml index.html index.htm home.html home.shtml &lt;br /&gt;
  &amp;lt;Directory &amp;quot;/www/www.domain.tld/htdocs&amp;quot;&amp;gt;&lt;br /&gt;
    AddType text/html .shtml&lt;br /&gt;
    AddHandler server-parsed .shtml&lt;br /&gt;
    Options Indexes FollowSymLinks IncludesNOEXEC&lt;br /&gt;
    Order allow,deny&lt;br /&gt;
    Allow from all&lt;br /&gt;
  &amp;lt;/Directory&amp;gt;&lt;br /&gt;
--cut--&lt;br /&gt;
&lt;br /&gt;
5) Check in Apache config&lt;br /&gt;
   # ci -u www.domain.tld(.common) &amp;lt; /dev/null  &lt;br /&gt;
&lt;br /&gt;
6) Restart Apache&lt;br /&gt;
   # apachectl configtest&lt;br /&gt;
   # apachectl restart&lt;br /&gt;
&lt;br /&gt;
More SSI notes can be found here:&lt;br /&gt;
http://httpd.apache.org/docs/1.3/howto/ssi.html&lt;br /&gt;
&lt;br /&gt;
= SimplyInternet =&lt;br /&gt;
&lt;br /&gt;
= Z-net =&lt;br /&gt;
&lt;br /&gt;
= Migrating to LAMP Host Servers =&lt;br /&gt;
&lt;br /&gt;
Below are outlines of the steps to migrate website &amp;amp; email hosting from nethere to lamphost-configured servers. This is almost certainly incomplete at this stage. The exact steps can vary depending on the specifics of the client, so use this as a guide only. Be ready to adjust commands or add steps as needed.&lt;br /&gt;
&lt;br /&gt;
== Edit Nethere DNS ==&lt;br /&gt;
&lt;br /&gt;
ssh user@admin-1.nethere.net&lt;br /&gt;
&lt;br /&gt;
sudo -i&lt;br /&gt;
&lt;br /&gt;
ssh ns1.nethere.net&lt;br /&gt;
&lt;br /&gt;
cd /named/master&lt;br /&gt;
&lt;br /&gt;
co -l example.com&lt;br /&gt;
&lt;br /&gt;
vi example.com&lt;br /&gt;
&lt;br /&gt;
ci -u example.com&lt;br /&gt;
&lt;br /&gt;
rndc reload example.com&lt;br /&gt;
&lt;br /&gt;
Make sure to increment the serial number by 1 each time a change is made.&lt;br /&gt;
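The serial bump itself can be scripted. A hedged sketch, assuming the serial sits on its own line tagged with a &amp;quot;; serial&amp;quot; comment; zone files that don&#039;t follow that layout still need the manual vi edit:&lt;br /&gt;

```shell
# Bump a zone serial by 1 where the line is tagged "; serial".
zf=$(mktemp)                # scratch stand-in for /named/master/example.com
printf '    2020120901 ; serial\n' > "$zf"
awk '/; serial/ { sub($1, $1 + 1) } { print }' "$zf" > "$zf.new"
bumped=$(cat "$zf.new")
echo "$bumped"              # serial is now 2020120902
rm -f "$zf" "$zf.new"
```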
&lt;br /&gt;
== Migrate Website to nh3.jcihosting.net ==&lt;br /&gt;
&lt;br /&gt;
Set TTL to 300 on ns1.nethere.net&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Open up these web admins to get current account/site info:&lt;br /&gt;
&lt;br /&gt;
http://toolbox.nethere.net/ -&amp;gt; Provisioning Tool (search for domain name)&lt;br /&gt;
&lt;br /&gt;
https://secure.johncompanies.com/mgmt/index.html (search for col#####)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1.  Create the lamphost user.&lt;br /&gt;
&lt;br /&gt;
https://www.lamphost.com/admin/user/user/create&lt;br /&gt;
&lt;br /&gt;
username: col#####&lt;br /&gt;
&lt;br /&gt;
email: same as listed for col##### account&lt;br /&gt;
&lt;br /&gt;
password: random, secure, won&#039;t be shared with client&lt;br /&gt;
&lt;br /&gt;
defaults on rest of page&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2.  Create the customer account&lt;br /&gt;
&lt;br /&gt;
https://nh1.jcihosting.net:8443/&lt;br /&gt;
&lt;br /&gt;
https://nh2.jcihosting.net:8443/&lt;br /&gt;
&lt;br /&gt;
https://nh3.jcihosting.net:8443/&lt;br /&gt;
&lt;br /&gt;
https://nh4.jcihosting.net:8443/&lt;br /&gt;
&lt;br /&gt;
LAMP Host Users -&amp;gt; Add User -&amp;gt; col#####&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Virtual Hosts -&amp;gt; Add Virtual Host&lt;br /&gt;
&lt;br /&gt;
Username: Select same username as above from pulldown&lt;br /&gt;
&lt;br /&gt;
Domain: example.com&lt;br /&gt;
&lt;br /&gt;
Hosting Plan: select plan that matches what&#039;s listed in https://secure.johncompanies.com/mgmt/view.html?cid=col#####&lt;br /&gt;
&lt;br /&gt;
Add default DNS: No&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3.  Virtual Hosts -&amp;gt; Site Manager (next to domain added)&lt;br /&gt;
&lt;br /&gt;
Site Manager Users -&amp;gt; Add User&lt;br /&gt;
&lt;br /&gt;
Username: can be anything - I typically use Optigold Login from nethere admin&lt;br /&gt;
&lt;br /&gt;
Password: make up something secure and record - this one will go to the user&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4.  Email &amp;amp; FTP -&amp;gt; Add Account&lt;br /&gt;
&lt;br /&gt;
Email Account: ftp&lt;br /&gt;
&lt;br /&gt;
Password: make up something secure and record - this one will go to the user&lt;br /&gt;
&lt;br /&gt;
Enable FTP Access: Yes&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
5.  Optional: Add MySQL database if site needs it (WordPress etc)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
ssh to nh3.jcihosting.net and run &#039;add_mysql.php &amp;lt;example.com&amp;gt;&#039;&lt;br /&gt;
&lt;br /&gt;
MySQL database info will be emailed to webmaster@lamphost.com (and output on command line - ignore instructions to run additional commands)&lt;br /&gt;
&lt;br /&gt;
https://mail.lamphost.com&lt;br /&gt;
Sm77DdnQ&lt;br /&gt;
&lt;br /&gt;
6.  Copy over website&lt;br /&gt;
&lt;br /&gt;
ssh nh3&lt;br /&gt;
&lt;br /&gt;
cd /var/www/example.com/&lt;br /&gt;
&lt;br /&gt;
rsync -v --archive --one-file-system --delete --delete-during --rsh=/usr/bin/ssh dsmith@unixweb-10.nethere.net:/www/www.example.com/htdocs ./&lt;br /&gt;
&lt;br /&gt;
Change &#039;dsmith&#039; in the rsync command to your username. Note that I had to add my username to groups httpd &amp;amp; webuser to allow me to copy website files without being root.&lt;br /&gt;
&lt;br /&gt;
chown -R col#####:col##### htdocs&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
7.  If site has MySQL data find the connection info. For example, WP sites will have the info in htdocs/wp-config.php&lt;br /&gt;
&lt;br /&gt;
mysqldump -h mysqldb-1.webhost.nethere.net -u username --password=password databasename &amp;gt; database.sql&lt;br /&gt;
&lt;br /&gt;
mysql exampledotcom &amp;lt; database.sql&lt;br /&gt;
&lt;br /&gt;
rm database.sql&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
8.  If the site is ready to be made live on nh3.jcihosting.net then update DNS on ns1.nethere.net&lt;br /&gt;
&lt;br /&gt;
Send email to client with details. Search support@jcihosting.com Sent folder for emails with Subject &amp;quot;updated hosting for&amp;quot; for example emails. You will have to adjust as needed (username/passwords, different info based on specific client).&lt;br /&gt;
&lt;br /&gt;
9.  After you finish, update the johncompanies.com backend for the col##### CID and change the nethere hosting server to nh3.jcihosting.net.&lt;br /&gt;
&lt;br /&gt;
10. To add an SSL certificate&lt;br /&gt;
        letsencrypt-create.php &amp;lt;domain.com&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    To remove an SSL certificate&lt;br /&gt;
        del-letsencrypt-certificate.php &amp;lt;domain.com&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Migrate Email to nh3.jcihosting.net ==&lt;br /&gt;
&lt;br /&gt;
If migrating website &amp;amp; emails both follow the above steps to transfer the website first.&lt;br /&gt;
&lt;br /&gt;
If migrating email only, follow steps 1-3 of the above to set up the virtualhost on nh3.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1.  Set TTL for MX record to 300.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2.  Generate email report:&lt;br /&gt;
&lt;br /&gt;
Log in to http://toolbox.nethere.net/&lt;br /&gt;
&lt;br /&gt;
Select Provisioning Tool and search for the domain&lt;br /&gt;
&lt;br /&gt;
Navigate to the Email section and then Generate Report.&lt;br /&gt;
&lt;br /&gt;
Select these fields for the report:&lt;br /&gt;
&lt;br /&gt;
Mail address&lt;br /&gt;
&lt;br /&gt;
Alias&lt;br /&gt;
&lt;br /&gt;
Forwarding address&lt;br /&gt;
&lt;br /&gt;
Mail storage path&lt;br /&gt;
&lt;br /&gt;
Copy the result of the report and save it as a text file in /tmp/ on nh3.jcihosting.net named example.com.txt (replace domain name)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3.  Create the e-mail accounts&lt;br /&gt;
&lt;br /&gt;
Make sure the virtualhost has enough email accounts allocated before proceeding. &lt;br /&gt;
Edit the virtualhosts.virtualhost_settings table directly if need be to increase allocation.&lt;br /&gt;
&lt;br /&gt;
    Log in here: https://nh3.jcihosting.net:8443/&lt;br /&gt;
    Select &amp;quot;MySQL.LH&amp;quot;&lt;br /&gt;
    Go to the &amp;quot;virtualhosts&amp;quot; database&lt;br /&gt;
    Search the &amp;quot;virtualhost_settings&amp;quot; settings for &#039;example.com&#039; &lt;br /&gt;
       and adjust the &#039;email_accounts&#039; setting.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run this script once and only once:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/usr/local/bin/migrate_mail_accounts_from_nethere.php example.com&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That will create all email accounts and aliases.&lt;br /&gt;
&lt;br /&gt;
A file containing the list of email accounts and their new passwords will &lt;br /&gt;
be created as /tmp/example.com-passwords.txt. Grab a copy of that for &lt;br /&gt;
sending to the client.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. Sync the emails&lt;br /&gt;
&lt;br /&gt;
Another file will be created that contains the bash commands to sync emails from the old server. This will be named /tmp/example.com-sync.sh&lt;br /&gt;
&lt;br /&gt;
Copy the file somewhere like the /root/ home dir and add &amp;quot;#!/bin/bash&amp;quot; as the first line and change perms to 750. Now you can run this script to sync all mails from nethere to nh3. Run it immediately to get an initial sync of emails.&lt;br /&gt;
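Those prep steps, as commands (run against a scratch file here; in practice use /tmp/example.com-sync.sh and the /root/ home dir as described; the sed &#039;1i&#039; form assumes GNU sed):&lt;br /&gt;

```shell
# Add the interpreter line and set perms on a copy of the sync script.
sync_sh=$(mktemp)                    # scratch stand-in for example.com-sync.sh
printf 'rsync -a --del user@old:/mail/ /mail/\n' > "$sync_sh"
sed -i '1i #!/bin/bash' "$sync_sh"   # prepend shebang (GNU sed syntax)
chmod 750 "$sync_sh"
firstline=$(head -n1 "$sync_sh")
echo "$firstline"
rm -f "$sync_sh"
```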
&lt;br /&gt;
When ready to switch email hosting over run the sync script one more time, then update DNS on ns1.nethere.net.&lt;br /&gt;
&lt;br /&gt;
If you&#039;d like to sync again after the move (if there was a delay or you think there may have been incoming emails during the transition) remove the &#039;--del&#039; option from the sync script and then re-run.&lt;br /&gt;
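One way to drop the option non-interactively (shown against a scratch copy; in practice edit the copy under /root/):&lt;br /&gt;

```shell
# Remove ' --del ' from the rsync line so a re-run won't delete
# messages that arrived on nh3 after the cutover.
s=$(mktemp)
printf 'rsync -a --del user@old:/mail/ /mail/\n' > "$s"
sed -i 's/ --del / /' "$s"
resync=$(cat "$s")
echo "$resync"
rm -f "$s"
```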
&lt;br /&gt;
&lt;br /&gt;
5.  Send email to customer&lt;br /&gt;
&lt;br /&gt;
Search support@jcihosting.com Sent folder for messages with Subject &amp;quot;new email hosting for&amp;quot; for examples of emails to use for communicating with the client.&lt;br /&gt;
&lt;br /&gt;
6.  Update DNS and put NetHere mail on hold&lt;br /&gt;
Once DNS has been pointed to the new server, go into the NetHere Toolbox http://toolbox.nethere.net/ and go to Domain -&amp;gt; General and then set Status to &#039;On Hold&#039;.&lt;br /&gt;
&lt;br /&gt;
7.  Update host in JohnCompanies database.&lt;br /&gt;
&lt;br /&gt;
== Moving between Host Servers on Lamphost ==&lt;br /&gt;
&lt;br /&gt;
You should delete the old virtual host before creating the new&lt;br /&gt;
virtual host.  It may take up to an hour to delete the old virtual&lt;br /&gt;
host.  To speed up the delete:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /opt/lamphost/cron-bin/del_virtualhosts.php&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you create the new virtual host first, you will need to update&lt;br /&gt;
the mylampsite.com subdomain:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /opt/lamphost/bin/update_mylampsite_subdomain.php databaseproviders.com&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Upgrade a Lamphost server to increase database limit ==&lt;br /&gt;
&lt;br /&gt;
I&#039;ve increased the database limit for this customer to 2. Now they can add a second db via the Site Manager admin.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Here is how I made the adjustment (we don&#039;t have a friendly admin for this function):&lt;br /&gt;
Log in to Server Manager: https://nh3.jcihosting.net:8443/&lt;br /&gt;
Select the &amp;quot;MySQL.LH&amp;quot; phpMyAdmin&lt;br /&gt;
Select &#039;virtualhosts&#039; db.&lt;br /&gt;
Adjust the &#039;databases&#039; setting for the argee.com domain in the &#039;virtualhost_settings&#039; table.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Looking in the mail log files ==&lt;br /&gt;
&lt;br /&gt;
exigrep&lt;/div&gt;</summary>
		<author><name>75.80.128.147</name></author>
	</entry>
</feed>