(Author's note: it is easier to type NEUTRON_DBPASS, NEUTRON_PASS, and similar placeholders literally instead of replacing them.)
7. Add a networking service
OpenStack Networking (neutron)
Modular Layer 2 (ML2) plug-in
Configure controller node
Prerequisites
Before you configure OpenStack Networking (neutron), you must create a database and Identity service credentials, including a user and service.
-
Connect to the database as the root user, create the neutron database, and grant the proper access to it:
Replace NEUTRON_DBPASS with a suitable password. (Author's note: typing it as-is, without changing it, is simpler.)
$ mysql -u root -p
mysql> CREATE DATABASE neutron;
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
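As a quick sanity check (not part of the original guide), you can confirm that the new account reaches its database; after you enter NEUTRON_DBPASS at the prompt, the output should include the neutron database:
$ mysql -u neutron -p -e "SHOW DATABASES;"
-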
Create Identity service credentials for Networking:
-
Create the neutron user:
Replace NEUTRON_PASS with a suitable password and neutron@example.com with a suitable e-mail address. (Author's note: typing them as-is is simpler.)
$ keystone user-create --name neutron --pass NEUTRON_PASS --email neutron@example.com
-
Link the neutron user to the service tenant and admin role:
$ keystone user-role-add --user neutron --tenant service --role admin
-
Create the neutron service:
$ keystone service-create --name neutron --type network --description "OpenStack Networking"
-
Create the service endpoint:
$ keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ network / {print $2}') \
  --publicurl http://controller:9696 \
  --adminurl http://controller:9696 \
  --internalurl http://controller:9696
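Optionally, you can confirm the registration; the following command lists all registered endpoints, which should now include the three URLs above:
$ keystone endpoint-list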
To install the Networking components
-
# yum install openstack-neutron openstack-neutron-ml2 python-neutronclient
To configure the Networking server component
The Networking server component configuration includes the database, authentication mechanism, message broker, topology change notifier, and plug-in.
-
Configure Networking to use the database:
Replace NEUTRON_DBPASS with a suitable password. (Author's note: typing it as-is is simpler.)
# openstack-config --set /etc/neutron/neutron.conf database connection \
  mysql://neutron:NEUTRON_DBPASS@controller/neutron
-
Configure Networking to use the Identity service for authentication:
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  auth_strategy keystone
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_uri http://controller:5000
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_host controller
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_protocol http
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_port 35357
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_tenant_name service
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_user neutron
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_password NEUTRON_PASS
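If the commands succeed, the [keystone_authtoken] section of /etc/neutron/neutron.conf should end up looking roughly like this sketch (option order may differ):
[keystone_authtoken]
auth_uri = http://controller:5000
auth_host = controller
auth_protocol = http
auth_port = 35357
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS
-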
Configure Networking to use the message broker:
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rpc_backend neutron.openstack.common.rpc.impl_qpid
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_hostname controller
-
Configure Networking to notify Compute about network topology changes:
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  notify_nova_on_port_status_changes True
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  notify_nova_on_port_data_changes True
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  nova_url http://controller:8774/v2
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  nova_admin_username nova
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  nova_admin_tenant_id $(keystone tenant-list | awk '/ service / { print $2 }')
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  nova_admin_password NOVA_PASS
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  nova_admin_auth_url http://controller:35357/v2.0
-
Configure Networking to use the Modular Layer 2 (ML2) plug-in and associated services:
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  core_plugin ml2
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  service_plugins router
Note: We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
-
Comment out any lines in the [service_providers] section.
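One way to comment those lines out non-interactively (a sketch, not from the original guide; adjust the pattern if your file differs) is:
# sed -i '/^service_provider/s/^/#/' /etc/neutron/neutron.conf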
To configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances. However, the controller node does not need the OVS agent or service because it does not handle instance network traffic.
-
Run the following commands:
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
  type_drivers gre
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
  tenant_network_types gre
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
  mechanism_drivers openvswitch
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre \
  tunnel_id_ranges 1:1000
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
  firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
  enable_security_group True
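After these commands, /etc/neutron/plugins/ml2/ml2_conf.ini should contain sections roughly like the following sketch:
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True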
To configure Compute to use Networking
By default, most distributions configure Compute to use legacy networking. You must reconfigure Compute to manage networks through Networking.
-
Run the following commands:
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service. (Author's note: typing it as-is is simpler.)
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  network_api_class nova.network.neutronv2.api.API
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_url http://controller:9696
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_auth_strategy keystone
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_tenant_name service
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_username neutron
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_password NEUTRON_PASS
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_auth_url http://controller:35357/v2.0
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  firewall_driver nova.virt.firewall.NoopFirewallDriver
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  security_group_api neutron
Note: By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
To finalize installation
-
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the configuration file associated with your chosen plug-in. Using ML2, for example, the symbolic link must point to /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following command:
# ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
-
Restart the Compute services:
# service openstack-nova-api restart
# service openstack-nova-scheduler restart
# service openstack-nova-conductor restart
-
Start the Networking service and configure it to start when the system boots:
# service neutron-server start
# chkconfig neutron-server on
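To confirm that neutron-server answers API requests, you can list the loaded extensions after sourcing admin credentials (for example, the admin-openrc.sh file used later in this guide); a table of extension aliases indicates the service is working:
$ source admin-openrc.sh
$ neutron ext-list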
Configure network node
Prerequisites
Before you configure OpenStack Networking, you must enable certain kernel networking functions.
-
Edit /etc/sysctl.conf to contain the following:
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
-
Implement the changes:
# sysctl -p
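You can verify that the settings took effect by querying them directly; each key should echo the value configured above:
# sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter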
To install the Networking components
-
# yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-openvswitch
To configure the Networking common components
The Networking common component configuration includes the authentication mechanism, message broker, and plug-in.
-
Configure Networking to use the Identity service for authentication:
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service. (Author's note: typing it as-is is simpler.)
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  auth_strategy keystone
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_uri http://controller:5000
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_host controller
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_protocol http
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_port 35357
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_tenant_name service
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_user neutron
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_password NEUTRON_PASS
-
Configure Networking to use the message broker:
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rpc_backend neutron.openstack.common.rpc.impl_qpid
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_hostname controller
-
Configure Networking to use the Modular Layer 2 (ML2) plug-in and associated services:
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  core_plugin ml2
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  service_plugins router
Note: We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
-
Comment out any lines in the [service_providers] section.
To configure the Layer-3 (L3) agent
The Layer-3 (L3) agent provides routing services for instance virtual networks.
-
Run the following commands:
# openstack-config --set /etc/neutron/l3_agent.ini DEFAULT \
  interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
# openstack-config --set /etc/neutron/l3_agent.ini DEFAULT \
  use_namespaces True
Note: We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/l3_agent.ini to assist with troubleshooting.
To configure the DHCP agent
The DHCP agent provides DHCP services for instance virtual networks.
-
Run the following commands:
# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \
  interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \
  dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \
  use_namespaces True
Note: We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/dhcp_agent.ini to assist with troubleshooting.
To configure the metadata agent
The metadata agent provides configuration information such as credentials for remote access to instances.
-
Run the following commands:
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service. Replace METADATA_SECRET with a suitable secret for the metadata proxy. (Author's note: typing them as-is is simpler.)
# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
  auth_url http://controller:5000/v2.0
# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
  auth_region regionOne
# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
  admin_tenant_name service
# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
  admin_user neutron
# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
  admin_password NEUTRON_PASS
# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
  nova_metadata_ip controller
# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
  metadata_proxy_shared_secret METADATA_SECRET
Note: We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/metadata_agent.ini to assist with troubleshooting.
-
Note: Perform the next two steps on the controller node.
-
On the controller node, configure Compute to use the metadata service:
Replace METADATA_SECRET with the secret you chose for the metadata proxy. (Author's note: typing it as-is is simpler.)
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  service_neutron_metadata_proxy true
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_metadata_proxy_shared_secret METADATA_SECRET
-
On the controller node, restart the Compute API service:
# service openstack-nova-api restart
To configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances.
-
Run the following commands:
Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your network node. This guide uses 10.0.1.21 for the IP address of the instance tunnels network interface on the network node.
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
  type_drivers gre
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
  tenant_network_types gre
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
  mechanism_drivers openvswitch
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre \
  tunnel_id_ranges 1:1000
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
  local_ip INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
  tunnel_type gre
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
  enable_tunneling True
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
  firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
  enable_security_group True
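After these commands, the [ovs] section of /etc/neutron/plugins/ml2/ml2_conf.ini should look roughly like this sketch (with your actual tunnel interface address, for example 10.0.1.21):
[ovs]
local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
tunnel_type = gre
enable_tunneling = True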
To configure the Open vSwitch (OVS) service
The OVS service provides the underlying virtual networking framework for instances. The integration bridge br-int handles internal instance network traffic within OVS. The external bridge br-ex handles external instance network traffic within OVS. The external bridge requires a port on the physical external network interface to provide instances with external network access. In essence, this port bridges the virtual and physical external networks in your environment.
-
Start the OVS service and configure it to start when the system boots:
# service openvswitch start
# chkconfig openvswitch on
-
Add the integration bridge:
# ovs-vsctl add-br br-int
-
Add the external bridge:
# ovs-vsctl add-br br-ex
-
Add a port to the external bridge that connects to the physical external network interface:
Replace INTERFACE_NAME with the actual interface name. For example, eth2 or ens256.
# ovs-vsctl add-port br-ex INTERFACE_NAME
Note: Depending on your network interface driver, you may need to disable Generic Receive Offload (GRO) to achieve suitable throughput between your instances and the external network.
To temporarily disable GRO on the external network interface while testing your environment:
# ethtool -K INTERFACE_NAME gro off
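To check the current GRO state, you can query the offload settings; after the command above, expect "generic-receive-offload: off":
# ethtool -k INTERFACE_NAME | grep generic-receive-offload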
To finalize the installation
-
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the configuration file associated with your chosen plug-in. Using the ML2 plug-in, for example, the symbolic link must point to /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following command:
# ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Due to a packaging bug, the Open vSwitch agent initialization script explicitly looks for the Open vSwitch plug-in configuration file rather than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file. Run the following commands to resolve this issue:
# cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
-
Start the Networking services and configure them to start when the system boots:
# service neutron-openvswitch-agent start
# service neutron-l3-agent start
# service neutron-dhcp-agent start
# service neutron-metadata-agent start
# chkconfig neutron-openvswitch-agent on
# chkconfig neutron-l3-agent on
# chkconfig neutron-dhcp-agent on
# chkconfig neutron-metadata-agent on
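To verify that the agents registered with the Networking server, you can run the following from the controller node with admin credentials; each agent on the network node should appear with a smiley (:-)) in the alive column:
$ neutron agent-list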
Configure compute node
Prerequisites
Before you configure OpenStack Networking, you must enable certain kernel networking functions.
-
Edit /etc/sysctl.conf to contain the following:
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
-
Implement the changes:
# sysctl -p
To install the Networking components
-
# yum install openstack-neutron-ml2 openstack-neutron-openvswitch
To configure the Networking common components
The Networking common component configuration includes the authentication mechanism, message broker, and plug-in.
-
Configure Networking to use the Identity service for authentication:
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service. (Author's note: typing it as-is is simpler.)
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  auth_strategy keystone
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_uri http://controller:5000
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_host controller
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_protocol http
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_port 35357
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_tenant_name service
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_user neutron
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_password NEUTRON_PASS
-
Configure Networking to use the message broker:
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rpc_backend neutron.openstack.common.rpc.impl_qpid
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_hostname controller
-
Configure Networking to use the Modular Layer 2 (ML2) plug-in and associated services:
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  core_plugin ml2
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  service_plugins router
Note: We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
-
Comment out any lines in the [service_providers] section.
To configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances.
-
Run the following commands:
Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your compute node. This guide uses 10.0.1.31 for the IP address of the instance tunnels network interface on the first compute node.
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
  type_drivers gre
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
  tenant_network_types gre
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
  mechanism_drivers openvswitch
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre \
  tunnel_id_ranges 1:1000
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
  local_ip INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
  tunnel_type gre
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
  enable_tunneling True
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
  firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
  enable_security_group True
To configure the Open vSwitch (OVS) service
The OVS service provides the underlying virtual networking framework for instances. The integration bridge br-int handles internal instance network traffic within OVS.
-
Start the OVS service and configure it to start when the system boots:
# service openvswitch start
# chkconfig openvswitch on
-
Add the integration bridge:
# ovs-vsctl add-br br-int
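You can confirm the bridge exists with the following command, which should list br-int among the bridges:
# ovs-vsctl show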
To configure Compute to use Networking
By default, most distributions configure Compute to use legacy networking. You must reconfigure Compute to manage networks through Networking.
-
Run the following commands:
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service. (Author's note: typing it as-is is simpler.)
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  network_api_class nova.network.neutronv2.api.API
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_url http://controller:9696
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_auth_strategy keystone
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_tenant_name service
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_username neutron
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_password NEUTRON_PASS
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_auth_url http://controller:35357/v2.0
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  firewall_driver nova.virt.firewall.NoopFirewallDriver
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  security_group_api neutron
Note: By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
To finalize the installation
-
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the configuration file associated with your chosen plug-in. Using the ML2 plug-in, for example, the symbolic link must point to /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following command:
# ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Due to a packaging bug, the Open vSwitch agent initialization script explicitly looks for the Open vSwitch plug-in configuration file rather than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file. Run the following commands to resolve this issue:
# cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
-
Restart the Compute service:
# service openstack-nova-compute restart
-
Start the Open vSwitch (OVS) agent and configure it to start when the system boots:
# service neutron-openvswitch-agent start
# chkconfig neutron-openvswitch-agent on
The external network typically provides internet access for your instances. By default, this network only allows internet access from instances using Network Address Translation (NAT). You can enable internet access to individual instances using a floating IP address and suitable security group rules. The admin tenant owns this network because it provides external network access for multiple tenants. You must also enable sharing to allow access by those tenants.
Note: Perform these commands on the controller node.
To create the external network
-
Source the admin tenant credentials:
$ source admin-openrc.sh
-
Create the network:
$ neutron net-create ext-net --shared --router:external=True
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 893aebb9-1c1e-48be-8908-6b947f3237b3 |
| name                      | ext-net                              |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 1                                    |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 54cd044c64d5408b83f843d63624e0d8     |
+---------------------------+--------------------------------------+
Like a physical network, a virtual network requires a subnet assigned to it. The external network shares the same subnet and gateway associated with the physical network connected to the external interface on the network node. You should specify an exclusive slice of this subnet for router and floating IP addresses to prevent interference with other devices on the external network.
To create a subnet on the external network
-
Create the subnet:
$ neutron subnet-create ext-net --name ext-subnet \
  --allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END \
  --disable-dhcp --gateway EXTERNAL_NETWORK_GATEWAY EXTERNAL_NETWORK_CIDR
Replace FLOATING_IP_START and FLOATING_IP_END with the first and last IP addresses of the range that you want to allocate for floating IP addresses. Replace EXTERNAL_NETWORK_CIDR with the subnet associated with the physical network. Replace EXTERNAL_NETWORK_GATEWAY with the gateway associated with the physical network, typically the ".1" IP address. You should disable DHCP on this subnet because instances do not connect directly to the external network and floating IP addresses require manual assignment.
For example, using 203.0.113.0/24 with floating IP address range 203.0.113.101 to 203.0.113.200:
$ neutron subnet-create ext-net --name ext-subnet \
  --allocation-pool start=203.0.113.101,end=203.0.113.200 \
  --disable-dhcp --gateway 203.0.113.1 203.0.113.0/24
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field             | Value                                                |
+-------------------+------------------------------------------------------+
| allocation_pools  | {"start": "203.0.113.101", "end": "203.0.113.200"}   |
| cidr              | 203.0.113.0/24                                       |
| dns_nameservers   |                                                      |
| enable_dhcp       | False                                                |
| gateway_ip        | 203.0.113.1                                          |
| host_routes       |                                                      |
| id                | 9159f0dc-2b63-41cf-bd7a-289309da1391                 |
| ip_version        | 4                                                    |
| ipv6_address_mode |                                                      |
| ipv6_ra_mode      |                                                      |
| name              | ext-subnet                                           |
| network_id        | 893aebb9-1c1e-48be-8908-6b947f3237b3                 |
| tenant_id         | 54cd044c64d5408b83f843d63624e0d8                     |
+-------------------+------------------------------------------------------+
The tenant network provides internal network access for instances. The architecture isolates this type of network from other tenants. The demo tenant owns this network because it only provides network access for instances within it.
Note: Perform these commands on the controller node.
To create the tenant network
-
Source the demo tenant credentials:
$ source demo-openrc.sh
-
Create the network:
$ neutron net-create demo-net
Created a new network:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| admin_state_up | True                                 |
| id             | ac108952-6096-4243-adf4-bb6615b3de28 |
| name           | demo-net                             |
| shared         | False                                |
| status         | ACTIVE                               |
| subnets        |                                      |
| tenant_id      | cdef0071a0194d19ac6bb63802dc9bae     |
+----------------+--------------------------------------+
Like the external network, your tenant network also requires a subnet attached to it. You can specify any valid subnet because the architecture isolates tenant networks.
To create a subnet on the tenant network
-
Create the subnet:
$ neutron subnet-create demo-net --name demo-subnet \
  --gateway TENANT_NETWORK_GATEWAY TENANT_NETWORK_CIDR
Replace TENANT_NETWORK_CIDR with the subnet you want to associate with the tenant network. Replace TENANT_NETWORK_GATEWAY with the gateway you want to associate with this network, typically the ".1" IP address. By default, this subnet will use DHCP so your instances can obtain IP addresses.
Example using 192.168.1.0/24:
$ neutron subnet-create demo-net --name demo-subnet \
  --gateway 192.168.1.1 192.168.1.0/24
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field             | Value                                                |
+-------------------+------------------------------------------------------+
| allocation_pools  | {"start": "192.168.1.2", "end": "192.168.1.254"}     |
| cidr              | 192.168.1.0/24                                       |
| dns_nameservers   |                                                      |
| enable_dhcp       | True                                                 |
| gateway_ip        | 192.168.1.1                                          |
| host_routes       |                                                      |
| id                | 69d38773-794a-4e49-b887-6de6734e792d                 |
| ip_version        | 4                                                    |
| ipv6_address_mode |                                                      |
| ipv6_ra_mode      |                                                      |
| name              | demo-subnet                                          |
| network_id        | ac108952-6096-4243-adf4-bb6615b3de28                 |
| tenant_id         | cdef0071a0194d19ac6bb63802dc9bae                     |
+-------------------+------------------------------------------------------+
A virtual router passes network traffic between two or more virtual networks. Each router requires one or more interfaces and/or gateways that provide access to specific networks. In this case, you will create a router and attach your tenant and external networks to it.
To create a router on the tenant network and attach the external and tenant networks to it
-
Create the router:
$ neutron router-create demo-router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 635660ae-a254-4feb-8993-295aa9ec6418 |
| name                  | demo-router                          |
| status                | ACTIVE                               |
| tenant_id             | cdef0071a0194d19ac6bb63802dc9bae     |
+-----------------------+--------------------------------------+
-
Attach the router to the demo tenant subnet:
$ neutron router-interface-add demo-router demo-subnet
Added interface b1a894fd-aee8-475c-9262-4342afdc1b58 to router demo-router.
-
Attach the router to the external network by setting it as the gateway:
$ neutron router-gateway-set demo-router ext-net
Set gateway for router demo-router
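Optionally, you can list the router's ports; the gateway port should hold the lowest address in the floating IP range (203.0.113.101 in this example):
$ neutron router-port-list demo-router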
We recommend that you verify network connectivity and resolve any issues before proceeding further. Following the external network subnet example using 203.0.113.0/24, the tenant router gateway should occupy the lowest IP address in the floating IP address range, 203.0.113.101. If you configured your external physical network and virtual networks correctly, you should be able to ping this IP address from any host on your external physical network.
Note: If you are building your OpenStack nodes as virtual machines, you must configure the hypervisor to permit promiscuous mode on the external network.
To verify network connectivity
-
Ping the tenant router gateway:
$ ping -c 4 203.0.113.101
PING 203.0.113.101 (203.0.113.101) 56(84) bytes of data.
64 bytes from 203.0.113.101: icmp_req=1 ttl=64 time=0.619 ms
64 bytes from 203.0.113.101: icmp_req=2 ttl=64 time=0.189 ms
64 bytes from 203.0.113.101: icmp_req=3 ttl=64 time=0.165 ms
64 bytes from 203.0.113.101: icmp_req=4 ttl=64 time=0.216 ms

--- 203.0.113.101 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.165/0.297/0.619/0.187 ms