OpenStack Distributed Setup

This is a guide for an OpenStack distributed setup.

We are going to set up a Controller machine, a Dom0 with the Xen hypervisor, and a Compute machine.

Here I am assuming:

Controller machine IP: 10.35.34.207

Compute machine IP: 10.35.34.208

Hypervisor IP: 10.35.34.13

Please update the IPs as per your environment while following this document.

A) Setup Openstack Controller

Use Ubuntu 12.04 Server for the controller setup.

It is good to use a physical machine for the controller.

Use the Ubuntu Cloud Archive for Havana

  • Install the Ubuntu Cloud Archive for Havana
# apt-get install python-software-properties

# add-apt-repository cloud-archive:havana

  • Update the package database, upgrade your system, and reboot
# apt-get update && apt-get dist-upgrade

# reboot

Basic Operating System Configuration

  • MySQL DB Setup

Install mysql packages on controller

# apt-get install python-mysqldb mysql-server

  • Edit /etc/mysql/my.cnf and set the bind-address to the IP address of the controller
bind-address = 10.35.34.207
  • Restart mysql service
# service mysql restart
  • Delete the anonymous users that are created when the database is first started
# mysql_install_db

# mysql_secure_installation

  • This command presents a number of options for you to secure your database installation. Respond yes to all prompts unless you have a good reason to do otherwise.
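For reference, answering yes to those prompts is roughly equivalent to the following SQL (a sketch of what mysql_secure_installation does under the hood; the interactive tool also handles the root password, so prefer running it directly):

```shell
#!/bin/sh
# secure_sql: print SQL roughly equivalent to answering "yes" to the
# mysql_secure_installation prompts. A sketch only; the interactive
# tool also sets the root password.
secure_sql() {
    cat <<'EOF'
DELETE FROM mysql.user WHERE User='';
DELETE FROM mysql.user WHERE User='root' AND Host NOT IN ('localhost', '127.0.0.1', '::1');
DROP DATABASE IF EXISTS test;
DELETE FROM mysql.db WHERE Db='test' OR Db='test\_%';
FLUSH PRIVILEGES;
EOF
}

# Review first, then pipe into mysql: secure_sql | mysql -u root -p
```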

Messaging Server (RabbitMQ) Setup

  • On the controller node, install the messaging queue server RabbitMQ
# apt-get install rabbitmq-server

  • Change the default guest password of RabbitMQ
# rabbitmqctl change_password guest

Identity Service Setup & Configuration
  • Install the Identity Service
  • Install the OpenStack Identity Service on the controller node
# apt-get install keystone
  • Edit /etc/keystone/keystone.conf and change the [sql] section
[sql]

....

connection = mysql://keystone:@10.35.34.207/keystone

....

  • Delete the keystone.db file created in the /var/lib/keystone/ directory so that it does not get used
# rm /var/lib/keystone/keystone.db
  • Create a keystone database
# mysql -u root -p

mysql> CREATE DATABASE keystone;

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '';

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '';

mysql> quit
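The same CREATE DATABASE / GRANT pattern recurs below for glance, nova, and cinder, so it can be scripted. A minimal sketch (`gen_db_sql` is a hypothetical helper; supply your own password rather than leaving it blank):

```shell
#!/bin/sh
# gen_db_sql <service> <password>: print the CREATE DATABASE / GRANT
# statements used for each OpenStack service database.
gen_db_sql() {
    svc="$1"
    pass="$2"
    cat <<EOF
CREATE DATABASE ${svc};
GRANT ALL PRIVILEGES ON ${svc}.* TO '${svc}'@'localhost' IDENTIFIED BY '${pass}';
GRANT ALL PRIVILEGES ON ${svc}.* TO '${svc}'@'%' IDENTIFIED BY '${pass}';
EOF
}

# Example: gen_db_sql keystone yourpassword | mysql -u root -p
```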

  • Create the database tables for the Identity Service
# keystone-manage db_sync

  • Use openssl to generate a random token and store it in the configuration file
# openssl rand -hex 10

7c9a33aeae6a34ad6ff1

  • Edit /etc/keystone/keystone.conf and change the [DEFAULT] section, replacing ADMIN_TOKEN with the results of the command
[DEFAULT]

....

# A "shared secret" between keystone and other openstack services

admin_token = 7c9a33aeae6a34ad6ff1

....
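Rather than pasting the token by hand, it can be substituted into the config with sed. A sketch (`set_admin_token` is a hypothetical helper; it assumes the stock, possibly commented-out, admin_token line):

```shell
#!/bin/sh
# set_admin_token <conf> [token]: write a token into the admin_token
# line of the given keystone.conf (generates one if not supplied).
set_admin_token() {
    conf="$1"
    token="${2:-$(openssl rand -hex 10)}"
    # Replace the stock (possibly commented-out) admin_token line in place.
    sed -i "s|^#* *admin_token *=.*|admin_token = ${token}|" "$conf"
    echo "$token"
}

# Example: set_admin_token /etc/keystone/keystone.conf
```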

  • Restart the keystone service
# service keystone restart

  • Define users, tenants, and roles

Set OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT to specify where the Identity Service is running:

# export OS_SERVICE_TOKEN=7c9a33aeae6a34ad6ff1

# export OS_SERVICE_ENDPOINT=http://10.35.34.207:35357/v2.0

  • Create a tenant for an administrative user and a tenant for other OpenStack services to use
# keystone tenant-create --name=admin --description="Admin Tenant"

# keystone tenant-create --name=service --description="Service Tenant"

  • Create an administrative user called admin
# keystone user-create --name=admin --pass=

  • Create a role for administrative tasks called admin
# keystone role-create --name=admin

Add roles to users

# keystone user-role-add --user=admin --tenant=admin --role=admin
  • Define service and API endpoint
  • Create a service entry for the Identity Service
# keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
  • Specify an API endpoint for the Identity Service by using the returned service ID in above command output
# keystone endpoint-create \

--service-id=7cfc6f0d6bff41cba6a0437a8977e3ee \

--publicurl=http://10.35.34.207:5000/v2.0 \

--internalurl=http://10.35.34.207:5000/v2.0 \

--adminurl=http://10.35.34.207:35357/v2.0

  • Verify Identity Service Installation
  • Unset the OS_SERVICE_TOKEN & OS_SERVICE_ENDPOINT variables
# unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
  • If these are not unset, you will face issues with the next command
  • Request an authentication token using the admin user and the password
# keystone --os_username=admin --os_password= --os_auth_url=http://10.35.34.207:35357/v2.0 token-get
  • Set up a keystonerc file with the admin credentials and admin endpoint
export OS_USERNAME=admin

export OS_PASSWORD=

export OS_TENANT_NAME=admin

export OS_AUTH_URL=http://10.35.34.207:35357/v2.0
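The four export lines above can be written out by a small generator. A sketch (`write_keystonerc` is a hypothetical helper; the controller IP is the one assumed throughout this guide, and the password is yours to supply):

```shell
#!/bin/sh
# write_keystonerc <file> <password>: write out the keystonerc file
# with the admin credentials and admin endpoint.
write_keystonerc() {
    cat > "$1" <<EOF
export OS_USERNAME=admin
export OS_PASSWORD=$2
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://10.35.34.207:35357/v2.0
EOF
}

# Example: write_keystonerc ~/keystonerc yourpassword && . ~/keystonerc
```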
  • Source this file to read in the environment variables
# source keystonerc
  • Verify the keystone identity service
# keystone token-get
  • The command returns a token and the ID of the specified tenant

Image Service Setup & Configuration

  • Install the Image Service
  • Install the Image Service on the controller node
# apt-get install glance python-glanceclient
  • Edit /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf and change the [DEFAULT] section
[DEFAULT]

....

# SQLAlchemy connection string for the reference implementation

# registry server. Any valid SQLAlchemy connection string is fine.

sql_connection = mysql://glance:@10.35.34.207/glance

....

  • Delete the glance.sqlite file created in the /var/lib/glance/ directory so that it does not get used
# rm /var/lib/glance/glance.sqlite
  • Create a glance database user
# mysql -u root -p

mysql> CREATE DATABASE glance;

mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '';

mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '';

mysql> quit
  • Create the database tables for the Image Service
# glance-manage db_sync
  • Define user and roles
  • Set OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT to specify where the Identity Service is running
# export OS_SERVICE_TOKEN=7c9a33aeae6a34ad6ff1

# export OS_SERVICE_ENDPOINT=http://10.35.34.207:35357/v2.0
  • Create a glance user that the Image Service can use to authenticate with the Identity Service
# keystone user-create --name=glance --pass=
  • Use the service tenant and give the user the admin role
# keystone user-role-add --user=glance --tenant=service --role=admin
  • Edit /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf and change the [keystone_authtoken] section
....

[keystone_authtoken]

auth_host = 10.35.34.207

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = glance

admin_password = 

Add the credentials to the /etc/glance/glance-api-paste.ini and /etc/glance/glance-registry-paste.ini files
....

[filter:authtoken]

paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

auth_host = 10.35.34.207

delay_auth_decision = true

admin_tenant_name = service

admin_user = glance

admin_password = 

flavor=keystone

....
  • Define service and API endpoint
  • Register the Image Service with the Identity Service
# keystone service-create --name=glance --type=image --description="Glance Image Service"
  • Use the id property returned for the service to create the endpoint
# keystone endpoint-create \

--service_id=461efefebfba47df98ab9a6f8dc80502 \

--publicurl=http://10.35.34.207:9292 \

--internalurl=http://10.35.34.207:9292 \

--adminurl=http://10.35.34.207:9292
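Copying the service ID from the service-create table into endpoint-create by hand is error-prone; awk can pull it out of the standard keystone CLI table instead. A sketch:

```shell
#!/bin/sh
# The id can be captured directly instead of copy-pasting, e.g.:
#   SERVICE_ID=$(keystone service-create --name=glance --type=image \
#       --description="Glance Image Service" | awk '/ id / {print $4}')
#   keystone endpoint-create --service_id=$SERVICE_ID ...
#
# Demonstration of the awk extraction on a captured table:
printf '%s\n' \
  '+-------------+----------------------------------+' \
  '|   Property  |              Value               |' \
  '+-------------+----------------------------------+' \
  '| id          | 461efefebfba47df98ab9a6f8dc80502 |' \
  '+-------------+----------------------------------+' |
  awk '/ id / {print $4}'
# prints 461efefebfba47df98ab9a6f8dc80502
```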

  • Restart the glance service with its new settings
# service glance-registry restart

# service glance-api restart

  • Verify Image Service Installation
  • To verify glance
# mkdir /root/images

# cd /root/images

# wget http://cdn.download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img

  • Upload the image to the Image Service
# glance image-create --name="Cirros" --disk-format=qcow2 --container-format=bare --is-public=true < cirros-0.3.1-x86_64-disk.img
  • Confirm that the image was uploaded and display its attributes
# glance image-list
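The image ID printed here is needed later for nova boot; it can be looked up by name from the table output. A sketch (`image_id_by_name` is a hypothetical helper that parses the standard glance CLI table):

```shell
#!/bin/sh
# image_id_by_name <name>: read `glance image-list` table output on
# stdin and print the ID of the image with the given name.
image_id_by_name() {
    awk -F'|' -v name="$1" '
        { gsub(/^ +| +$/, "", $2); gsub(/^ +| +$/, "", $3) }
        $3 == name { print $2 }'
}

# Example: glance image-list | image_id_by_name Cirros
```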

Compute Controller Service Setup & Configuration
  • Install Compute Controller Services
  • Install the Compute packages
# apt-get install nova-novncproxy novnc nova-api nova-ajax-console-proxy nova-cert nova-conductor nova-consoleauth nova-doc nova-scheduler python-novaclient
  • Edit the /etc/nova/nova.conf file and add these lines to the [database] and [keystone_authtoken] sections
....

[database]

# The SQLAlchemy connection string used to connect to the database

connection = mysql://nova:@10.35.34.207/nova

....

....

[keystone_authtoken]

auth_host = 10.35.34.207

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = nova

admin_password = 

....
  • Configure the Compute Service to use the RabbitMQ message broker by setting these configuration keys in the [DEFAULT] configuration group of the /etc/nova/nova.conf file
....

rpc_backend = nova.rpc.impl_kombu

rabbit_host = 10.35.34.207

rabbit_password = 

....
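After editing, a quick sanity check that the broker keys actually landed in the file can save a failed service restart. A sketch (`check_keys` is a hypothetical helper; note a plain grep does not distinguish ini sections, so this is only a coarse check):

```shell
#!/bin/sh
# check_keys <conf> key...: warn about any listed key missing from the
# file. A coarse check; it does not distinguish ini sections.
check_keys() {
    conf="$1"; shift
    for key in "$@"; do
        grep -q "^${key} *=" "$conf" || echo "missing: ${key}"
    done
}

# Example: check_keys /etc/nova/nova.conf rpc_backend rabbit_host rabbit_password
```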
  • Delete the nova.sqlite file created in the /var/lib/nova/ directory
# rm /var/lib/nova/nova.sqlite
  • Create a nova database user
# mysql -u root -p

mysql> CREATE DATABASE nova;

mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '';

mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '';

mysql> quit

  • Create the database tables for the Compute Service
# nova-manage db sync
  • Edit the /etc/nova/nova.conf file and set the my_ip, vncserver_listen, and vncserver_proxyclient_address configuration options to the IP address of the controller node
....

my_ip=10.35.34.207

vncserver_listen=10.35.34.207

vncserver_proxyclient_address=10.35.34.207

....
  • Define user and roles
  • Create a nova user that Compute uses to authenticate with the Identity Service
# keystone user-create --name=nova --pass=
  • Use the service tenant and give the user the admin role
# keystone user-role-add --user=nova --tenant=service --role=admin
  • Edit the [DEFAULT] section in the /etc/nova/nova.conf file to add the following key
....

auth_strategy=keystone

....
  • Add the credentials to the /etc/nova/api-paste.ini file by adding these options to the [filter:authtoken] section
....

[filter:authtoken]

paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

auth_host = 10.35.34.207

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = nova

admin_password = 

....
  • Define service and API endpoint
  • Register Compute with the Identity Service
# keystone service-create --name=nova --type=compute --description="Nova Compute service"
  • Use the id property that is returned to create the endpoint
# keystone endpoint-create \

--service-id=b1dcbb4a2f6041fc802efefb9b6cc76d \

--publicurl=http://10.35.34.207:8774/v2/%\(tenant_id\)s \

--internalurl=http://10.35.34.207:8774/v2/%\(tenant_id\)s \

--adminurl=http://10.35.34.207:8774/v2/%\(tenant_id\)s

  • Restart Compute services

# service nova-api restart

# service nova-cert restart

# service nova-consoleauth restart

# service nova-scheduler restart

# service nova-conductor restart

# service nova-novncproxy restart

  • Verify Compute Service Installation
  • To verify your configuration
# nova image-list
This lists the available images.

Dashboard Setup & Configuration

  • Add the Dashboard
  • As root, install the dashboard on the node that can contact the Identity Service
# apt-get install memcached libapache2-mod-wsgi openstack-dashboard
  • Remove the openstack-dashboard-ubuntu-theme package
# apt-get remove --purge openstack-dashboard-ubuntu-theme
  • Modify the value of CACHES['default']['LOCATION'] in /etc/openstack-dashboard/local_settings.py to match the ones set in /etc/memcached.conf
  • Open /etc/openstack-dashboard/local_settings.py and look for this line
....

CACHES = {

'default': {

'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',

'LOCATION' : '127.0.0.1:11211'

}

}

....
  • Update the ALLOWED_HOSTS in local_settings.py to include the addresses you wish to access the dashboard from
  • Edit /etc/openstack-dashboard/local_settings.py
....

ALLOWED_HOSTS = '*'

....
  • This guide assumes that you are running the Dashboard on the controller node. You can easily run the dashboard on a separate server by changing the appropriate settings in local_settings.py
  • Edit /etc/openstack-dashboard/local_settings.py and change OPENSTACK_HOST to the hostname of your Identity Service
....

OPENSTACK_HOST = "controller"

....
  • Start the Apache web server and memcached
# service apache2 restart

# service memcached restart

Block Storage Service Setup & Configuration

  • Install a Block Storage Service Controller
  • Install the appropriate packages for the Block Storage Service on controller
# apt-get install cinder-api cinder-scheduler
  • Edit the /etc/cinder/cinder.conf file and add the following key under the [database] section
....

[database]

connection = mysql://cinder:@10.35.34.207/cinder

....
  • Create a cinder database user
# mysql -u root -p

mysql> CREATE DATABASE cinder;

mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '';

mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '';

mysql> quit
  • Create the database tables for the Block Storage Service
# cinder-manage db sync
  • Define user and roles
  • Create a cinder user; the Block Storage Service uses this user to authenticate with the Identity Service
# keystone user-create --name=cinder --pass=
  • Use the service tenant and give the user the admin role
# keystone user-role-add --user=cinder --tenant=service --role=admin
  • Add the credentials to the file /etc/cinder/api-paste.ini. Open the file in a text editor and locate the section [filter:authtoken]. Set the following options
....

[filter:authtoken]

paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

auth_host = 10.35.34.207

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = cinder

admin_password = 

....
  • Configure Block Storage to use the RabbitMQ message broker by setting these configuration keys in the [DEFAULT] configuration group of the /etc/cinder/cinder.conf file
....

rpc_backend = cinder.openstack.common.rpc.impl_kombu

rabbit_host = 10.35.34.207

rabbit_port = 5672

rabbit_userid = guest

rabbit_password = 

....
  • Define service and API endpoint
  • Register the Block Storage Service with the Identity Service
# keystone service-create --name=cinder --type=volume --description="Cinder Volume Service"
  • Use the id property returned to create the endpoint
# keystone endpoint-create \

--service-id=8d7cbda9638945f19a4d5a667adf1258 \

--publicurl=http://10.35.34.207:8776/v1/%\(tenant_id\)s \

--internalurl=http://10.35.34.207:8776/v1/%\(tenant_id\)s \

--adminurl=http://10.35.34.207:8776/v1/%\(tenant_id\)s
  • Also register a service and endpoint for version 2 of the Block Storage Service API
# keystone service-create --name=cinderv2 --type=volumev2 --description="Cinder Volume Service V2"
  • Use the id property returned to create the endpoint
# keystone endpoint-create \

--service-id=509a3c9295df4404962d599508db21c6 \

--publicurl=http://10.35.34.207:8776/v2/%\(tenant_id\)s \

--internalurl=http://10.35.34.207:8776/v2/%\(tenant_id\)s \

--adminurl=http://10.35.34.207:8776/v2/%\(tenant_id\)s

  • Restart the cinder service with its new settings
# service cinder-scheduler restart

# service cinder-api restart

  • Note: Refer to Annex 1 for a sample nova.conf for the controller.

B) Setup Xen Hypervisor

Follow this link for Xen hypervisor installation.

C) Nova Compute and Nova Network Node Setup and Configuration:

  • Install the following packages on the Compute node:
# apt-get install nova-compute python-guestfs
  • When prompted to create a supermin appliance, respond yes.
  • Make the current kernel readable (python-guestfs requires read access to it):
# dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-$(uname -r)
  • To keep future kernels readable as well, create /etc/kernel/postinst.d/statoverride containing:
#!/bin/sh

version="$1"

# passing the kernel version is required

[ -z "${version}" ] && exit 0

dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-${version}
  • Remember to make the file executable:
# chmod +x /etc/kernel/postinst.d/statoverride
  • Remove the SQLite database created by the packages:
# rm /var/lib/nova/nova.sqlite
  • Edit the /etc/nova/nova.conf configuration file and add these lines to the appropriate sections:
...

[DEFAULT]

...

auth_strategy=keystone

...

[database]

# The SQLAlchemy connection string used to connect to the database

connection = mysql://nova:@10.35.34.207/nova

Configure the Compute Service to use the RabbitMQ message broker by setting these configuration keys in the [DEFAULT] configuration group of the /etc/nova/nova.conf file:
rpc_backend = nova.rpc.impl_kombu

rabbit_host = 10.35.34.207

rabbit_password = 

Configure Compute to provide remote console access to instances. Edit /etc/nova/nova.conf and add the following keys under the [DEFAULT] section:
[DEFAULT]

...

my_ip=10.35.34.208

vnc_enabled=True

vncserver_listen=0.0.0.0

vncserver_proxyclient_address=10.35.34.208

novncproxy_base_url=http://10.35.34.207:6080/vnc_auto.html

(Assuming that the compute node IP is 10.35.34.208 and is accessible to the Controller)

Specify the host that runs the Image Service. Edit /etc/nova/nova.conf file and add these lines to the [DEFAULT] section:
[DEFAULT]

...

glance_host=10.35.34.207

Edit the /etc/nova/api-paste.ini file to add the credentials to the [filter:authtoken] section:
[filter:authtoken]

paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

auth_host = 10.35.34.207

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = nova

admin_password = 

Add the following xenapi-related attributes to /etc/nova/nova.conf:
[DEFAULT]

...

# Xen settings

connection_type=xenapi

compute_driver=xenapi.XenAPIDriver

xenapi_connection_url=http://10.35.34.13

xenapi_connection_username=root

xenapi_connection_password=

xenapi_vif_driver=nova.virt.xenapi.vif.XenAPIBridgeDriver

xenapi_proxy_connection_url=http://10.35.34.13:8080

  • (Assumption: 10.35.34.13 is the Xen hypervisor's IP)
  • Set up the networking for the compute node as follows:
  • Enable promiscuous mode on eth0:
# ip link set eth0 promisc on
  • Install the compute networking packages as follows:
# apt-get install nova-network nova-api-metadata
  • Edit the nova.conf file to define the networking mode:
  • Edit the /etc/nova/nova.conf file and add these lines to the [DEFAULT] section:
network_manager=nova.network.manager.FlatDHCPManager

xenapi_vif_driver=nova.virt.xenapi.vif.XenAPIBridgeDriver

network_size=253

allow_same_net_traffic=False

multi_host=True

send_arp_for_ha=True

fixed_range=10.35.34.0/24

share_dhcp_address=True

force_dhcp_release=True

flat_network_bridge=xenbr0

flat_interface=eth0

public_interface=eth0

flat_injected=False

network_host=10.35.34.208

firewall_driver=nova.virt.xenapi.firewall.Dom0IptablesFirewallDriver
  • Restart the Compute and Network services.
# service nova-compute restart

# service nova-network restart
  • Source this file to read in the environment variables
# source keystonerc

(This can be copied over from the controller server)

  • Check if python-novaclient is installed:
# dpkg --list | grep python-novaclient
  • Run the nova network-create command on the controller:
# nova network-create vmnet --fixed-range-v4=10.35.34.0/24 --bridge-interface=xenbr0 --multi-host=T
  • Note: Refer to Annex 2 for a sample nova.conf for compute.
  • Launch an Instance:
  • Once the setup is complete, you can launch an instance and validate the setup as follows:
  • Generate a key-pair:
$ ssh-keygen
$ cd .ssh
$ nova keypair-add --pub_key id_rsa.pub mykey
  • View available keypairs:
$ nova keypair-list
+--------+-------------------------------------------------+
| Name | Fingerprint |
+--------+-------------------------------------------------+
| mykey | b0:18:32:fa:4e:d4:3c:1b:c4:6c:dd:cb:53:29:13:82 |
+--------+-------------------------------------------------+
  • Check the default flavors that are available to you:
$ nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  • Get the ID of the image that you added earlier:
$ nova image-list

+--------------------------------------+--------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+--------------+--------+--------+
| 9e5c2bee-0373-414c-b4af-b91b0246ad3b | CirrOS 0.3.1 | ACTIVE | |
+--------------------------------------+--------------+--------+--------+
  • To use SSH and ping, you must configure security group rules:
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
  • Now we can launch the instance using the following syntax:
$ nova boot --flavor flavorType --key_name keypairName --image ID newInstanceName
  • For example:
$ nova boot --flavor 1 --key_name mykey --image 9e5c2bee-0373-414c-b4af-b91b0246ad3b --security_group default cirrOS
+--------------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------------+--------------------------------------+
| OS-EXT-STS:task_state | scheduling |
| image | CirrOS 0.3.1 |
| OS-EXT-STS:vm_state | building |
| OS-EXT-SRV-ATTR:instance_name | instance-00000001 |
| OS-SRV-USG:launched_at | None |
| flavor | m1.tiny |
| id | 3bdf98a0-c767-4247-bf41-2d147e4aa043 |
| security_groups | [{u'name': u'default'}] |
| user_id | 530166901fa24d1face95cda82cfae56 |
| OS-DCF:diskConfig | MANUAL |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-AZ:availability_zone | nova |
| config_drive | |
| status | BUILD |
| updated | 2013-10-10T06:47:26Z |
| hostId | |
| OS-EXT-SRV-ATTR:host | None |
| OS-SRV-USG:terminated_at | None |
| key_name | mykey |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| name | cirrOS |
| adminPass | DWCDW6FnsKNq |
| tenant_id | e66d97ac1b704897853412fc8450f7b9 |
| created | 2013-10-10T06:47:23Z |
| os-extended-volumes:volumes_attached | [] |
| metadata | {} |
+--------------------------------------+--------------------------------------+
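The instance takes a while to go from BUILD to ACTIVE; a small polling loop avoids re-running nova show by hand. A sketch (`wait_for_status` is a hypothetical helper; in real use the command would be `nova show` with the instance ID from the output above):

```shell
#!/bin/sh
# wait_for_status <want> <max_tries> <cmd...>: run cmd repeatedly until
# its output contains the wanted status, sleeping between attempts.
wait_for_status() {
    want="$1"; tries="$2"; shift 2
    i=0
    while [ "$i" -lt "$tries" ]; do
        "$@" | grep -q "$want" && return 0
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# Example: wait_for_status ACTIVE 60 nova show 3bdf98a0-c767-4247-bf41-2d147e4aa043
```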
  • Now that the setup is complete, we need to export this image to a file on dom0 and copy it to the USB for the diskless dom0 setup. Use the following steps to export it to a file:
  • Shut down the domU (compute node) instance as follows:
# xe vm-shutdown uuid=
  • Export the image as follows:
# xe vm-export uuid= filename=/home/ compress=true
  • This will take some time to complete. Once done, copy this file to the USB and give its path in NOVA_VM_IMAGE_PATH in the config file of the diskless dom0 setup.

Annex 1

  • Following is a sample /etc/nova/nova.conf for the OpenStack controller:
[DEFAULT]

max_kernel_ramdisk_size=1073741824

dhcpbridge_flagfile=/etc/nova/nova.conf

dhcpbridge=/usr/bin/nova-dhcpbridge

logdir=/var/log/nova

state_path=/var/lib/nova

lock_path=/var/lock/nova

force_dhcp_release=True

iscsi_helper=tgtadm

#libvirt_use_virtio_for_bridges=True

#connection_type=libvirt

root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf

verbose=True

ec2_private_dns_show_ip=True

api_paste_config=/etc/nova/api-paste.ini

volumes_path=/var/lib/nova/volumes

enabled_apis=metadata,ec2,osapi_compute

rpc_backend = nova.rpc.impl_kombu

rabbit_host = 

rabbit_password = 

my_ip=

vncserver_listen=

vncserver_proxyclient_address=

auth_strategy=keystone

multi_host=True

# #

[database]

# The SQLAlchemy connection string used to connect to the database

connection = mysql://nova:@/nova

[keystone_authtoken]

auth_host = 

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = nova

admin_password = 

Annex 2

  • Following is a sample /etc/nova/nova.conf for the nova-compute node:
[DEFAULT]

max_kernel_ramdisk_size=2073741824

dhcpbridge_flagfile=/etc/nova/nova.conf

dhcpbridge=/usr/bin/nova-dhcpbridge

logdir=/var/log/nova

state_path=/var/lib/nova

lock_path=/var/lock/nova

force_dhcp_release=True

iscsi_helper=tgtadm

#libvirt_use_virtio_for_bridges=True

root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf

verbose=True

ec2_private_dns_show_ip=True

api_paste_config=/etc/nova/api-paste.ini

volumes_path=/var/lib/nova/volumes

enabled_apis=metadata,ec2,osapi_compute

#enabled_apis=metadata

metadata_host = 

auth_strategy=keystone

rpc_backend = nova.rpc.impl_kombu

rabbit_host = 

rabbit_password = 

my_ip=

vnc_enabled=True

vncserver_listen=0.0.0.0

vncserver_proxyclient_address=

novncproxy_base_url=http://:6080/vnc_auto.html

glance_host=

glance_api_servers=:9292

image_service=nova.image.glance.GlanceImageService

compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler

nova_url=http://:8774/v1.1/

root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf

# Network

network_manager=nova.network.manager.FlatDHCPManager

xenapi_vif_driver=nova.virt.xenapi.vif.XenAPIBridgeDriver

network_size=253

allow_same_net_traffic=False

multi_host=True

send_arp_for_ha=True

share_dhcp_address=True

force_dhcp_release=True

flat_network_bridge=xenbr0

flat_interface=eth1

public_interface=eth0

flat_injected=False

network_host=

firewall_driver=nova.virt.xenapi.firewall.Dom0IptablesFirewallDriver

#firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

[database]

connection = mysql://nova:@/nova

# Xen settings

connection_type=xenapi

compute_driver=xenapi.XenAPIDriver

xenapi_connection_url=http://

xenapi_connection_username=root

xenapi_connection_password=

xenapi_vif_driver=nova.virt.xenapi.vif.XenAPIBridgeDriver

xenapi_proxy_connection_url=http://:8080
