2014-08-17

Installing OpenStack IceHouse on CentOS Linux in VMware on Windows, Step 7: Installing Neutron

This post collects the installation steps for the OpenStack Neutron networking service. When you finish, you will have the simple layout shown in Figure 7.1, "Initial networks". For passwords such as NEUTRON_DBPASS and NEUTRON_PASS, it is easiest to type them literally as shown rather than substituting your own.
7. Add a networking service
 
Figure 7.1. Initial networks

OpenStack Networking (neutron)

Modular Layer 2 (ML2) plug-in

Configure controller node
 
Prerequisites
Before you configure OpenStack Networking (neutron), you must create a database and Identity service credentials including a user and service.
  1. Connect to the database as the root user, create the neutron database, and grant the proper access to it:
    Replace NEUTRON_DBPASS with a suitable password. (Typing it literally as shown is easiest.)
    $ mysql -u root -p
    mysql> CREATE DATABASE neutron;
    mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    IDENTIFIED BY 'NEUTRON_DBPASS';
    mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    IDENTIFIED BY 'NEUTRON_DBPASS';
  2. Create Identity service credentials for Networking:
    1. Create the neutron user:
      Replace NEUTRON_PASS with a suitable password and neutron@example.com with a suitable e-mail address. (Typing them literally as shown is easiest.)
      $ keystone user-create --name neutron --pass NEUTRON_PASS --email neutron@example.com
    2. Link the neutron user to the service tenant and admin role:
      $ keystone user-role-add --user neutron --tenant service --role admin
    3. Create the neutron service:
      $ keystone service-create --name neutron --type network --description "OpenStack Networking"
    4. Create the service endpoint:
      $ keystone endpoint-create \
        --service-id $(keystone service-list | awk '/ network / {print $2}') \
        --publicurl http://controller:9696 \
        --adminurl http://controller:9696 \
        --internalurl http://controller:9696
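 
As a quick sanity check (not part of the original guide), you can confirm that the service and endpoint were registered. The IDs in your output will differ:
    $ keystone service-list
    $ keystone endpoint-list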
 
To install the Networking components
  • # yum install openstack-neutron openstack-neutron-ml2 python-neutronclient
 
To configure the Networking server component
The Networking server component configuration includes the database, authentication mechanism, message broker, topology change notifier, and plug-in.
  1. Configure Networking to use the database:
    Replace NEUTRON_DBPASS with a suitable password. (Typing it literally as shown is easiest.)
    # openstack-config --set /etc/neutron/neutron.conf database connection \
      mysql://neutron:NEUTRON_DBPASS@controller/neutron
  2. Configure Networking to use the Identity service for authentication:
    Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      auth_strategy keystone
    # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
      auth_uri http://controller:5000
    # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
      auth_host controller
    # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
      auth_protocol http
    # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
      auth_port 35357
    # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
      admin_tenant_name service
    # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
      admin_user neutron
    # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
      admin_password NEUTRON_PASS
  3. Configure Networking to use the message broker:
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      rpc_backend neutron.openstack.common.rpc.impl_qpid
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      qpid_hostname controller
  4. Configure Networking to notify Compute about network topology changes:
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      notify_nova_on_port_status_changes True
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      notify_nova_on_port_data_changes True
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      nova_url http://controller:8774/v2
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      nova_admin_username nova
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      nova_admin_tenant_id $(keystone tenant-list | awk '/ service / { print $2 }')
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      nova_admin_password NOVA_PASS
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      nova_admin_auth_url http://controller:35357/v2.0
  5. Configure Networking to use the Modular Layer 2 (ML2) plug-in and associated services:
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      core_plugin ml2
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      service_plugins router
    [Note] Note
    We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
  6. Comment out any lines in the [service_providers] section.
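 
For orientation (a sketch, not part of the original guide), the relevant parts of /etc/neutron/neutron.conf should now look roughly like this, where SERVICE_TENANT_ID stands for the tenant ID that the keystone tenant-list substitution above printed:
    [DEFAULT]
    auth_strategy = keystone
    rpc_backend = neutron.openstack.common.rpc.impl_qpid
    qpid_hostname = controller
    notify_nova_on_port_status_changes = True
    notify_nova_on_port_data_changes = True
    nova_url = http://controller:8774/v2
    nova_admin_username = nova
    nova_admin_tenant_id = SERVICE_TENANT_ID
    nova_admin_password = NOVA_PASS
    nova_admin_auth_url = http://controller:35357/v2.0
    core_plugin = ml2
    service_plugins = router
    verbose = True

    [database]
    connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

    [keystone_authtoken]
    auth_uri = http://controller:5000
    auth_host = controller
    auth_protocol = http
    auth_port = 35357
    admin_tenant_name = service
    admin_user = neutron
    admin_password = NEUTRON_PASS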
 
To configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances. However, the controller node does not need the OVS agent or service because it does not handle instance network traffic.
  • Run the following commands:
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
      type_drivers gre
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
      tenant_network_types gre
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
      mechanism_drivers openvswitch
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre \
      tunnel_id_ranges 1:1000
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
      firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
      enable_security_group True
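 
If you want to confirm that a value landed in the correct section, openstack-config also has a --get mode (assuming the openstack-utils package installed here provides it, as recent versions do):
    # openstack-config --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers
    gre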
 
To configure Compute to use Networking
By default, most distributions configure Compute to use legacy networking. You must reconfigure Compute to manage networks through Networking.
  • Run the following commands:
    Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service. (Typing it literally as shown is easiest.)
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      network_api_class nova.network.neutronv2.api.API
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      neutron_url http://controller:9696
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      neutron_auth_strategy keystone
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      neutron_admin_tenant_name service
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      neutron_admin_username neutron
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      neutron_admin_password NEUTRON_PASS
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      neutron_admin_auth_url http://controller:35357/v2.0
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      firewall_driver nova.virt.firewall.NoopFirewallDriver
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      security_group_api neutron
    [Note] Note
    By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
 
To finalize installation
  1. The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the configuration file associated with your chosen plug-in. Using ML2, for example, the symbolic link must point to /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following command:
    # ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
  2. Restart the Compute services:
    # service openstack-nova-api restart
    # service openstack-nova-scheduler restart
    # service openstack-nova-conductor restart
  3. Start the Networking service and configure it to start when the system boots:
    # service neutron-server start
    # chkconfig neutron-server on
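 
As a quick check that neutron-server is answering API requests (a verification step borrowed from later guides, not in the original), source the admin credentials and list the loaded extensions:
    $ source admin-openrc.sh
    $ neutron ext-list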

Configure network node

 
Prerequisites
Before you configure OpenStack Networking, you must enable certain kernel networking functions.
  1. Edit /etc/sysctl.conf to contain the following:
    net.ipv4.ip_forward=1
    net.ipv4.conf.all.rp_filter=0
    net.ipv4.conf.default.rp_filter=0
  2. Implement the changes:
    # sysctl -p
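    You can confirm the settings took effect by querying them back:
    # sysctl net.ipv4.ip_forward
    net.ipv4.ip_forward = 1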
 
To install the Networking components
  • # yum install openstack-neutron openstack-neutron-ml2 \
      openstack-neutron-openvswitch
 
To configure the Networking common components
The Networking common component configuration includes the authentication mechanism, message broker, and plug-in.
  1. Configure Networking to use the Identity service for authentication:
    Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service. (Typing it literally as shown is easiest.)
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      auth_strategy keystone
    # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
      auth_uri http://controller:5000
    # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
      auth_host controller
    # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
      auth_protocol http
    # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
      auth_port 35357
    # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
      admin_tenant_name service
    # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
      admin_user neutron
    # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
      admin_password NEUTRON_PASS
  2. Configure Networking to use the message broker:
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      rpc_backend neutron.openstack.common.rpc.impl_qpid
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      qpid_hostname controller
  3. Configure Networking to use the Modular Layer 2 (ML2) plug-in and associated services:
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      core_plugin ml2
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      service_plugins router
    [Note] Note
    We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
  4. Comment out any lines in the [service_providers] section.
 
To configure the Layer-3 (L3) agent
The Layer-3 (L3) agent provides routing services for instance virtual networks.
  • Run the following commands:
    # openstack-config --set /etc/neutron/l3_agent.ini DEFAULT \
      interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
    # openstack-config --set /etc/neutron/l3_agent.ini DEFAULT \
      use_namespaces True
    [Note] Note
    We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/l3_agent.ini to assist with troubleshooting.
 
To configure the DHCP agent
The DHCP agent provides DHCP services for instance virtual networks.
  • Run the following commands:
    # openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \
      interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
    # openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \
      dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
    # openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \
      use_namespaces True
    [Note] Note
    We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/dhcp_agent.ini to assist with troubleshooting.
 
To configure the metadata agent
The metadata agent provides configuration information such as credentials for remote access to instances.
  1. Run the following commands:
    Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service. Replace METADATA_SECRET with a suitable secret for the metadata proxy; see the note after this procedure for one way to generate it. (Typing the values literally as shown is easiest.)
    # openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
      auth_url http://controller:5000/v2.0
    # openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
      auth_region regionOne
    # openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
      admin_tenant_name service
    # openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
      admin_user neutron
    # openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
      admin_password NEUTRON_PASS
    # openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
      nova_metadata_ip controller
    # openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
      metadata_proxy_shared_secret METADATA_SECRET
    [Note] Note
    We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/metadata_agent.ini to assist with troubleshooting.
  2. [Note] Note
    Perform the next two steps on the controller node.
  3. On the controller node, configure Compute to use the metadata service:
    Replace METADATA_SECRET with the secret you chose for the metadata proxy. (Typing it literally as shown is easiest.)
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      service_neutron_metadata_proxy true
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      neutron_metadata_proxy_shared_secret METADATA_SECRET
  4. On the controller node, restart the Compute API service:
    # service openstack-nova-api restart
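 
If you still need to invent a value for METADATA_SECRET, any random string works; one possible way (an assumption, not from the guide) is:
    $ openssl rand -hex 10
Use the same value in metadata_agent.ini and nova.conf.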
 
To configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances.
  • Run the following commands:
    Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your network node. This guide uses 10.0.1.21 for the IP address of the instance tunnels network interface on the network node.
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
      type_drivers gre
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
      tenant_network_types gre
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
      mechanism_drivers openvswitch
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre \
      tunnel_id_ranges 1:1000
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
      local_ip INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
      tunnel_type gre
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
      enable_tunneling True
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
      firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
      enable_security_group True
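 
For orientation (a sketch, not part of the original guide), the resulting /etc/neutron/plugins/ml2/ml2_conf.ini should contain roughly these sections, with local_ip shown using this guide's example address 10.0.1.21:
    [ml2]
    type_drivers = gre
    tenant_network_types = gre
    mechanism_drivers = openvswitch

    [ml2_type_gre]
    tunnel_id_ranges = 1:1000

    [ovs]
    local_ip = 10.0.1.21
    tunnel_type = gre
    enable_tunneling = True

    [securitygroup]
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    enable_security_group = True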
 
To configure the Open vSwitch (OVS) service
The OVS service provides the underlying virtual networking framework for instances. The integration bridge br-int handles internal instance network traffic within OVS. The external bridge br-ex handles external instance network traffic within OVS. The external bridge requires a port on the physical external network interface to provide instances with external network access. In essence, this port bridges the virtual and physical external networks in your environment.
  1. Start the OVS service and configure it to start when the system boots:
    # service openvswitch start
    # chkconfig openvswitch on
  2. Add the integration bridge:
    # ovs-vsctl add-br br-int
  3. Add the external bridge:
    # ovs-vsctl add-br br-ex
  4. Add a port to the external bridge that connects to the physical external network interface:
    Replace INTERFACE_NAME with the actual interface name. For example, eth2 or ens256.
    # ovs-vsctl add-port br-ex INTERFACE_NAME
    [Note] Note
    Depending on your network interface driver, you may need to disable Generic Receive Offload (GRO) to achieve suitable throughput between your instances and the external network.
    To temporarily disable GRO on the external network interface while testing your environment:
    # ethtool -K INTERFACE_NAME gro off
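 
To confirm that the bridges and the external port exist, you can inspect the OVS configuration; you should see br-int and br-ex, with your INTERFACE_NAME listed as a port on br-ex:
    # ovs-vsctl show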
 
To finalize the installation
  1. The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the configuration file associated with your chosen plug-in. Using the ML2 plug-in, for example, the symbolic link must point to /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following command:
    # ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    Due to a packaging bug, the Open vSwitch agent initialization script explicitly looks for the Open vSwitch plug-in configuration file rather than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file. Run the following commands to resolve this issue:
    # cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
    # sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
  2. Start the Networking services and configure them to start when the system boots:
    # service neutron-openvswitch-agent start
    # service neutron-l3-agent start
    # service neutron-dhcp-agent start
    # service neutron-metadata-agent start
    # chkconfig neutron-openvswitch-agent on
    # chkconfig neutron-l3-agent on
    # chkconfig neutron-dhcp-agent on
    # chkconfig neutron-metadata-agent on
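 
Once the agents are running, you can verify from the controller node that they registered with neutron-server; you should see the Open vSwitch, L3, DHCP, and metadata agents, each marked alive with a :-) in the output:
    $ source admin-openrc.sh
    $ neutron agent-list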

Configure compute node

 
Prerequisites
Before you configure OpenStack Networking, you must enable certain kernel networking functions.
  1. Edit /etc/sysctl.conf to contain the following:
    net.ipv4.conf.all.rp_filter=0
    net.ipv4.conf.default.rp_filter=0
  2. Implement the changes:
    # sysctl -p
 
To install the Networking components
  • # yum install openstack-neutron-ml2 openstack-neutron-openvswitch
 
To configure the Networking common components
The Networking common component configuration includes the authentication mechanism, message broker, and plug-in.
  1. Configure Networking to use the Identity service for authentication:
    Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service. (Typing it literally as shown is easiest.)
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      auth_strategy keystone
    # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
      auth_uri http://controller:5000
    # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
      auth_host controller
    # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
      auth_protocol http
    # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
      auth_port 35357
    # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
      admin_tenant_name service
    # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
      admin_user neutron
    # openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
      admin_password NEUTRON_PASS
  2. Configure Networking to use the message broker:
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      rpc_backend neutron.openstack.common.rpc.impl_qpid
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      qpid_hostname controller
  3. Configure Networking to use the Modular Layer 2 (ML2) plug-in and associated services:
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      core_plugin ml2
    # openstack-config --set /etc/neutron/neutron.conf DEFAULT \
      service_plugins router
    [Note] Note
    We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
  4. Comment out any lines in the [service_providers] section.
 
To configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances.
  • Run the following commands:
    Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your compute node. This guide uses 10.0.1.31 for the IP address of the instance tunnels network interface on the first compute node.
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
      type_drivers gre
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
      tenant_network_types gre
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
      mechanism_drivers openvswitch
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre \
      tunnel_id_ranges 1:1000
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
      local_ip INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
      tunnel_type gre
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
      enable_tunneling True
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
      firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    # openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
      enable_security_group True
 
To configure the Open vSwitch (OVS) service
The OVS service provides the underlying virtual networking framework for instances. The integration bridge br-int handles internal instance network traffic within OVS.
  1. Start the OVS service and configure it to start when the system boots:
    # service openvswitch start
    # chkconfig openvswitch on
  2. Add the integration bridge:
    # ovs-vsctl add-br br-int
 
To configure Compute to use Networking
By default, most distributions configure Compute to use legacy networking. You must reconfigure Compute to manage networks through Networking.
  • Run the following commands:
    Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service. (Typing it literally as shown is easiest.)
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      network_api_class nova.network.neutronv2.api.API
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      neutron_url http://controller:9696
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      neutron_auth_strategy keystone
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      neutron_admin_tenant_name service
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      neutron_admin_username neutron
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      neutron_admin_password NEUTRON_PASS
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      neutron_admin_auth_url http://controller:35357/v2.0
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      firewall_driver nova.virt.firewall.NoopFirewallDriver
    # openstack-config --set /etc/nova/nova.conf DEFAULT \
      security_group_api neutron
    [Note] Note
    By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
 
To finalize the installation
  1. The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the configuration file associated with your chosen plug-in. Using the ML2 plug-in, for example, the symbolic link must point to /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following command:
    # ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    Due to a packaging bug, the Open vSwitch agent initialization script explicitly looks for the Open vSwitch plug-in configuration file rather than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file. Run the following commands to resolve this issue:
    # cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
    # sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
  2. Restart the Compute service:
    # service openstack-nova-compute restart
  3. Start the Open vSwitch (OVS) agent and configure it to start when the system boots:
    # service neutron-openvswitch-agent start
    # chkconfig neutron-openvswitch-agent on
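 
After this step, the compute node's Open vSwitch agent should also appear when you repeat neutron agent-list on the controller node:
    $ neutron agent-list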

 Create initial networks

Before launching your first instance, you must create the necessary virtual network infrastructure to which the instance will connect, including the external network and tenant network. See Figure 7.1, “Initial networks”. After creating this infrastructure, we recommend that you verify connectivity and resolve any issues before proceeding further.

 External network
The external network typically provides internet access for your instances. By default, this network only allows internet access from instances using Network Address Translation (NAT). You can enable internet access to individual instances using a floating IP address and suitable security group rules. The admin tenant owns this network because it provides external network access for multiple tenants. You must also enable sharing to allow access by those tenants.
[Note] Note
Perform these commands on the controller node.
 
To create the external network
  1. Source the admin tenant credentials:
    $ source admin-openrc.sh
  2. Create the network:
    $ neutron net-create ext-net --shared --router:external=True
    Created a new network:
    +---------------------------+--------------------------------------+
    | Field                     | Value                                |
    +---------------------------+--------------------------------------+
    | admin_state_up            | True                                 |
    | id                        | 893aebb9-1c1e-48be-8908-6b947f3237b3 |
    | name                      | ext-net                              |
    | provider:network_type     | gre                                  |
    | provider:physical_network |                                      |
    | provider:segmentation_id  | 1                                    |
    | router:external           | True                                 |
    | shared                    | True                                 |
    | status                    | ACTIVE                               |
    | subnets                   |                                      |
    | tenant_id                 | 54cd044c64d5408b83f843d63624e0d8     |
    +---------------------------+--------------------------------------+
Like a physical network, a virtual network requires a subnet assigned to it. The external network shares the same subnet and gateway associated with the physical network connected to the external interface on the network node. You should specify an exclusive slice of this subnet for router and floating IP addresses to prevent interference with other devices on the external network.
Replace FLOATING_IP_START and FLOATING_IP_END with the first and last IP addresses of the range that you want to allocate for floating IP addresses. Replace EXTERNAL_NETWORK_CIDR with the subnet associated with the physical network. Replace EXTERNAL_NETWORK_GATEWAY with the gateway associated with the physical network, typically the ".1" IP address. You should disable DHCP on this subnet because instances do not connect directly to the external network and floating IP addresses require manual assignment.
 
To create a subnet on the external network
  • Create the subnet:
    $ neutron subnet-create ext-net --name ext-subnet \
      --allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END \
      --disable-dhcp --gateway EXTERNAL_NETWORK_GATEWAY EXTERNAL_NETWORK_CIDR
    For example, using 203.0.113.0/24 with floating IP address range 203.0.113.101 to 203.0.113.200:
    $ neutron subnet-create ext-net --name ext-subnet \
      --allocation-pool start=203.0.113.101,end=203.0.113.200 \
      --disable-dhcp --gateway 203.0.113.1 203.0.113.0/24
    Created a new subnet:
    +-------------------+------------------------------------------------------+
    | Field             | Value                                                |
    +-------------------+------------------------------------------------------+
    | allocation_pools  | {"start": "203.0.113.101", "end": "203.0.113.200"}   |
    | cidr              | 203.0.113.0/24                                       |
    | dns_nameservers   |                                                      |
    | enable_dhcp       | False                                                |
    | gateway_ip        | 203.0.113.1                                          |
    | host_routes       |                                                      |
    | id                | 9159f0dc-2b63-41cf-bd7a-289309da1391                 |
    | ip_version        | 4                                                    |
    | ipv6_address_mode |                                                      |
    | ipv6_ra_mode      |                                                      |
    | name              | ext-subnet                                           |
    | network_id        | 893aebb9-1c1e-48be-8908-6b947f3237b3                 |
    | tenant_id         | 54cd044c64d5408b83f843d63624e0d8                     |
    +-------------------+------------------------------------------------------+
 Tenant network
The tenant network provides internal network access for instances. The architecture isolates this type of network from other tenants. The demo tenant owns this network because it only provides network access for instances within it.
[Note] Note
Perform these commands on the controller node.
 
To create the tenant network
  1. Source the demo tenant credentials:
    $ source demo-openrc.sh
  2. Create the network:
    $ neutron net-create demo-net
    Created a new network:
    +----------------+--------------------------------------+
    | Field          | Value                                |
    +----------------+--------------------------------------+
    | admin_state_up | True                                 |
    | id             | ac108952-6096-4243-adf4-bb6615b3de28 |
    | name           | demo-net                             |
    | shared         | False                                |
    | status         | ACTIVE                               |
    | subnets        |                                      |
    | tenant_id      | cdef0071a0194d19ac6bb63802dc9bae     |
    +----------------+--------------------------------------+
Like the external network, your tenant network also requires a subnet attached to it. You can specify any valid subnet because the architecture isolates tenant networks. Replace TENANT_NETWORK_CIDR with the subnet you want to associate with the tenant network. Replace TENANT_NETWORK_GATEWAY with the gateway you want to associate with this network, typically the ".1" IP address. By default, this subnet will use DHCP so your instances can obtain IP addresses.
 
To create a subnet on the tenant network
  • Create the subnet:
    $ neutron subnet-create demo-net --name demo-subnet \
      --gateway TENANT_NETWORK_GATEWAY TENANT_NETWORK_CIDR
    Example using 192.168.1.0/24:
    $ neutron subnet-create demo-net --name demo-subnet \
      --gateway 192.168.1.1 192.168.1.0/24
    Created a new subnet:
    +-------------------+------------------------------------------------------+
    | Field             | Value                                                |
    +-------------------+------------------------------------------------------+
    | allocation_pools  | {"start": "192.168.1.2", "end": "192.168.1.254"}     |
    | cidr              | 192.168.1.0/24                                       |
    | dns_nameservers   |                                                      |
    | enable_dhcp       | True                                                 |
    | gateway_ip        | 192.168.1.1                                          |
    | host_routes       |                                                      |
    | id                | 69d38773-794a-4e49-b887-6de6734e792d                 |
    | ip_version        | 4                                                    |
    | ipv6_address_mode |                                                      |
    | ipv6_ra_mode      |                                                      |
    | name              | demo-subnet                                          |
    | network_id        | ac108952-6096-4243-adf4-bb6615b3de28                 |
    | tenant_id         | cdef0071a0194d19ac6bb63802dc9bae                     |
    +-------------------+------------------------------------------------------+
A virtual router passes network traffic between two or more virtual networks. Each router requires one or more interfaces and/or gateways that provide access to specific networks. In this case, you will create a router and attach your tenant and external networks to it.
 
To create a router on the tenant network and attach the external and tenant networks to it
  1. Create the router:
    $ neutron router-create demo-router
    Created a new router:
    +-----------------------+--------------------------------------+
    | Field                 | Value                                |
    +-----------------------+--------------------------------------+
    | admin_state_up        | True                                 |
    | external_gateway_info |                                      |
    | id                    | 635660ae-a254-4feb-8993-295aa9ec6418 |
    | name                  | demo-router                          |
    | status                | ACTIVE                               |
    | tenant_id             | cdef0071a0194d19ac6bb63802dc9bae     |
    +-----------------------+--------------------------------------+
  2. Attach the router to the demo tenant subnet:
    $ neutron router-interface-add demo-router demo-subnet
    Added interface b1a894fd-aee8-475c-9262-4342afdc1b58 to router demo-router.
  3. Attach the router to the external network by setting it as the gateway:
    $ neutron router-gateway-set demo-router ext-net
    Set gateway for router demo-router
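To see which IP address the router's gateway port received (it should be the lowest address in the floating IP pool, 203.0.113.101 in this example), you can list the router's ports:
    $ neutron router-port-list demo-router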
 Verify connectivity
We recommend that you verify network connectivity and resolve any issues before proceeding further. Following the external network subnet example using 203.0.113.0/24, the tenant router gateway should occupy the lowest IP address in the floating IP address range, 203.0.113.101. If you configured your external physical network and virtual networks correctly, you should be able to ping this IP address from any host on your external physical network.
[Note] Note
If you are building your OpenStack nodes as virtual machines, you must configure the hypervisor to permit promiscuous mode on the external network.
 
To verify network connectivity
  • Ping the tenant router gateway:
    $ ping -c 4 203.0.113.101
    PING 203.0.113.101 (203.0.113.101) 56(84) bytes of data.
    64 bytes from 203.0.113.101: icmp_req=1 ttl=64 time=0.619 ms
    64 bytes from 203.0.113.101: icmp_req=2 ttl=64 time=0.189 ms
    64 bytes from 203.0.113.101: icmp_req=3 ttl=64 time=0.165 ms
    64 bytes from 203.0.113.101: icmp_req=4 ttl=64 time=0.216 ms
    
    --- 203.0.113.101 ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 2999ms
    rtt min/avg/max/mdev = 0.165/0.297/0.619/0.187 ms
