OpenvCloud Installation guide
The current architecture of OpenvCloud is composed of several virtual machines plus a set of physical CPU/storage nodes. The following VMs are required:
- Master: Contains all the mountaintop/jumpscale components, see cb_master_aio
- Identity: Contains the OAuth server used for single sign-on.
- DCPM: Contains DCPM and communicates with the Racktivity PDU.
- Proxy: Contains an nginx instance that proxies all the different components and does the SSL offloading.
We use the Super Micro F628R3-RC1BPT as CPU and storage node.
This server is composed of 4 nodes in one 4U chassis.
Each node has 2 SSDs and 6 HDDs.
On each node the following is installed:
- OpenvStorage
- CPU node component of mountaintop, see cb_cpunode_aio
This chapter will guide you through all the steps to install a full OpenvCloud system.
Master

Create a new VM and install Jumpscale7 on it:
curl https://raw.githubusercontent.com/Jumpscale/jumpscale_core7/master/install/install.sh > /tmp/js7.sh && bash /tmp/js7.sh
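A quick sanity check after the installer finishes (the default install location /opt/jumpscale7 and the ays tool are used throughout this guide):

```bash
# Confirm the JumpScale tree exists and the ays tool is on the PATH.
ls /opt/jumpscale7
which ays
```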
Add the openvcloud domain to @ys by editing /opt/jumpscale7/hrd/system/atyourservice.hrd:

metadata.jumpscale =
    url:'https://github.com/Jumpscale/ays_jumpscale7',

# add this domain
metadata.openvcloud =
    url:'https://git.aydo.com/0-complexity/openvcloud_ays',
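If you prefer not to edit the file by hand, the same entry can be appended with a heredoc (mirroring the pattern used later for whoami.hrd); double-check the result, since the indentation should match the existing entries in your file:

```bash
# Append the openvcloud metadata domain to the @ys configuration.
# Only do this if the metadata.openvcloud entry is not already present.
cat <<EOT >> /opt/jumpscale7/hrd/system/atyourservice.hrd
metadata.openvcloud =
    url:'https://git.aydo.com/0-complexity/openvcloud_ays',
EOT
```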
Finally, install the cb_master_aio service:
ays install -n cb_master_aio
Please provide value for param.publicip.gateway of type str
: 192.168.57.254 # gateway of the cloudscape
Please provide value for param.publicip.netmask of type str
: 255.255.255.0 # netmask of the cloudscape
Please provide value for param.publicip.start of type str
: 192.168.57.200
Please provide value for param.publicip.end of type str
: 192.168.57.240
Please provide value for mothership1.cloudbroker.defense_proxy of type str
: http://192.168.57.7/ # ip of the vm in the cloudspace
Please provide value for cloudbroker.portalurl of type str
: http://192.168.57.7:82 # ip of the vm in the cloudspace
url DCPM is hosted on [https://dcpmcustomer.demo.greenitglobe.com]: https://dcpmx.demo.greenitglobe.com # replace 'x' with the number of the demo environment
url OpenvStorage is hosted on [https://ovsx.demo.greenitglobe.com]: https://ovs1.demo.greenitglobe.com # replace '1' with the number of the demo environment
url the gridportal is hosted on [https://customer.demo.greenitglobe.com]: https://demox.demo.greenitglobe.com # replace 'x' with the number of the demo environment
Let @ys install everything and you should be done with the master.
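Once @ys finishes, a quick reachability check of the portal (using the example address entered above; adjust to your own environment) can confirm the master is up:

```bash
# Hypothetical smoke test: this address matches the example answer given to
# cloudbroker.portalurl during the install.
curl -I http://192.168.57.7:82
```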
DCPM

Create a new VM and install Jumpscale7 on it:
curl https://raw.githubusercontent.com/Jumpscale/jumpscale_core7/master/install/install.sh > /tmp/js7.sh && bash /tmp/js7.sh
Write the git credentials to a file so the installer doesn't ask for them every time:
cat <<EOT > /opt/jumpscale7/hrd/system/whoami.hrd
git.login = rtreadonly
git.passwd = r4ckt!v1ty
EOT
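Since this file stores the git password in plain text, it is worth tightening its permissions (a standard precaution, not part of the original procedure):

```bash
# Restrict the credentials file to root only.
chmod 600 /opt/jumpscale7/hrd/system/whoami.hrd
```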
Install the @ys package for DCPM:
ays install -n dcpm
Install reportlab
apt-get install python-reportlab
Execute the DCPM qpackage installer
/opt/qbase5/qshell -c "p.application.install('dcpm')"
Set the owner of the log directory to make sure log files can be created as needed:
chown -R syslog:adm /opt/qbase5/var/log/dcpm/
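To verify the ownership change took effect:

```bash
# The directory should now be owned by syslog:adm.
ls -ld /opt/qbase5/var/log/dcpm/
```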
Proxy

Create a new VM and install Jumpscale7 on it:
curl https://raw.githubusercontent.com/Jumpscale/jumpscale_core7/master/install/install.sh > /tmp/js7.sh && bash /tmp/js7.sh
Add the openvcloud domain to @ys by editing /opt/jumpscale7/hrd/system/atyourservice.hrd:

metadata.jumpscale =
    url:'https://github.com/Jumpscale/ays_jumpscale7',

# add this domain
metadata.openvcloud =
    url:'https://git.aydo.com/0-complexity/openvcloud_ays',
Finally, install the openvcloud_ssloffloader service.
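The guide does not spell out the command here; assuming it follows the same pattern as the other @ys services in this guide, it would be:

```bash
ays install -n openvcloud_ssloffloader
```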
CPU and storage nodes

First, make sure the BIOS and RAID controller are properly configured;
see this guide: https://git.aydo.com/0-complexity/openvcloud/wikis/SuperMicroX10DRFR-NTBIOS
Currently only Ubuntu 14.04 is supported.
During the installation, make sure to follow this disk layout:
disk | type | size | mountpoint |
---|---|---|---|
SSD1 | boot grub | 8MB | none |
SSD1 | none | 20GB | none |
SSD1 | swap | 4GB | none |
SSD1 | xfs | all remaining space | /mnt/cache2 |
SSD2 | boot grub | 8MB | none |
SSD2 | none | 20GB | none |
SSD2 | swap | 4GB | none |
SSD2 | xfs | all remaining space | /var/tmp |
Then create a RAID1 device on the two 20GB partitions (see the sketch after this table):
disk | type | size | mountpoint |
---|---|---|---|
raid1 | btrfs | 20GB | / |
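The RAID1 device is normally created from the Ubuntu installer's partitioning step. If you end up doing it from a shell instead, a minimal sketch with mdadm could look like this (the partition names /dev/sda2 and /dev/sdb2 are assumptions; use the two 20GB partitions created above):

```bash
# Create a RAID1 array over the two 20GB partitions (device names are examples).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# Format it with btrfs; it becomes the root filesystem, per the table above.
mkfs.btrfs /dev/md0
```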
Once the installation of Ubuntu is done, update the kernel.
OpenvCloud has been tested with v4.0.5; you can download it from http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.0.5-wily/
wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.0.5-wily/linux-headers-4.0.5-040005-generic_4.0.5-040005.201506061639_amd64.deb
wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.0.5-wily/linux-headers-4.0.5-040005_4.0.5-040005.201506061639_all.deb
wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.0.5-wily/linux-image-4.0.5-040005-generic_4.0.5-040005.201506061639_amd64.deb
dpkg -i linux*.deb
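Reboot into the new kernel and confirm the running version:

```bash
reboot
# after the node comes back up:
uname -r   # should report 4.0.5-040005-generic
```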
OpenvStorage

Install OpenvStorage on all nodes using the @ys package:
ays install -n openvstorage
sandbox:/opt/jumpscale7
OVS: Target node ip: 192.168.1.11
OVS: Target node password: supersecret
OVS: Cluster name [kvm131]: cluster1
OVS: Master Host Ip (leave empty for master installation)
:
OVS: Master Host Password (leave empty for master installation)
:
OVS Oauth: token uri: https://demo2.demo.greenitglobe.com/login/oauth/access_token
OVS Oauth: authorize uri: https://demo2.demo.greenitglobe.com/login/oauth/authorize
OVS Oauth: client ID [ovs]: ovs
OVS Oauth: client secret: supersecret
OVS Oauth: scope [ovs_admin]: ovs_admin
For the first node, make sure to leave the Master Host IP empty so @ys knows it has to install the first node as master of the OVS cluster.
For the other nodes, make sure to fill in the Master Host IP.
Once OVS is installed on all the nodes, you have to create a storage backend and a vPool.
To do so, follow the documentation on the OVS website: http://doc.openvstorage.com/doc_openvstorage/KVM%20Installation#create-an-open-vstorage-backend
After creating the vPool, make sure that all the nodes are indeed part of this pool. In the OVS GUI, click on vPools, then select the vPool you just created. At the bottom of the page there is a tab called 'Management action'; click on it, select all the nodes and then click Finish.
Double-check the status of the storage routers and make sure everything is green; if it is, you are done with the installation of OVS.
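On each node you can also verify locally that the vPool is being served; Open vStorage normally exposes a vPool as a FUSE mount under /mnt/<vpool_name> (the name vpool1 below is just an example, and the exact mountpoint may differ in your setup):

```bash
# Check that the vPool mountpoint is present on this node (vpool name is an example).
mount | grep vpool1
df -h /mnt/vpool1
```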
CPU node

!! You should have OVS installed on the node before installing this service !!
All the components for the CPU node are packaged into a @ys service; just install cb_cpunode_aio:
ays install -n cb_cpunode_aio
sandbox:/opt/jumpscale7
Please provide value for param.rootpasswd of type str
: supersecret
Master node address: 234.123.321.233 # give the public ip of the management cloudspace
IP for gw_mgmt [10.199.0.2/22]: 10.199.0.2/22 # make sure to give a different ip to every node
Enter gateway and address for backplane1:
(Enter answer over multiple lines, end by typing '.' (without the quotes) on an empty line)
address:192.168.1.10/24 # ip and netmask of the node
gateway:192.168.1.1 # public gateway
.
Please provide value for netconfig.backplanes.backplane1.backplaneinterface of type str
: eth0
Please provide value for netconfig.vxbackend.ipaddr of type str
[240.0.0.1/16]: 10.240.0.1/16 # use the range 10.240.0.0/16
Be careful when installing this service: a mistake in the gateway, address, or interface name of backplane1 can destroy the network configuration of the machine and make it unreachable, so be very careful if you are installing it remotely!
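Before running the install, it can help to record the current network configuration so you can restore it from the console if backplane1 ends up misconfigured (standard iproute2 commands; eth0 as in the example answer above):

```bash
# Save the current addressing and routes for reference.
ip addr show eth0 > /root/netconfig-before.txt
ip route >> /root/netconfig-before.txt
```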