
Building OpenStack Yoga on the domestically developed openEuler 22.09
A tutorial better suited to the younger generation: simple, quick, clear, and so detailed it could double as prenatal education!!!
Building an OpenStack platform on openEuler
This walkthrough covers building an OpenStack platform on the domestically developed openEuler system. With software localization in full swing, we need to keep pace with the times to seize the opportunity. OpenStack is in very wide use today.
Preparation
1.1 Node planning
Create three virtual machines and configure their IP addresses for your own network segment. Setting the same root password on all three machines is recommended. The node plan is shown in Table 1-1.
Table 1-1 Node plan
| IP (NAT NIC) | Hostname | Role |
| --- | --- | --- |
| 192.168.200.150 | controller | Controller node |
| 192.168.200.151 | compute | Compute node |
| 192.168.200.152 | storage | Storage node |
1.2 Basic preparation
Install the three virtual machines. Give the first machine a somewhat higher spec: mine has 4 vCPUs, 10 GB RAM, and a 100 GB disk, while the other machines have 2 vCPUs, 4 GB RAM, and a 100 GB disk each. These numbers suit my 16 GB laptop; adjust them to your own hardware.
Take snapshots! Disable the firewall!
Wherever an IP appears below, substitute the one you configured (the IPs in this document are mine)!
Implementation
- Basic configuration
2.1 Configuring the yum repository
Open /etc/yum.repos.d/openEuler.repo and replace its contents with the following:
[root@controller ~]# vi /etc/yum.repos.d/openEuler.repo
#generic-repos is licensed under the Mulan PSL v2.
#You can use this software according to the terms and conditions of the Mulan PSL v2.
#You may obtain a copy of Mulan PSL v2 at:
# http://license.coscl.org.cn/MulanPSL2
#THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR
#IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT, MERCHANTABILITY OR FIT FOR A PARTICULAR
#PURPOSE.
#See the Mulan PSL v2 for more details.
[OS]
name=OS
baseurl=https://archives.openeuler.openatom.cn/openEuler-22.09/OS/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://archives.openeuler.openatom.cn/openEuler-22.09/OS/$basearch/RPM-GPG-KEY-openEuler
[everything]
name=everything
baseurl=https://archives.openeuler.openatom.cn/openEuler-22.09/everything/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://archives.openeuler.openatom.cn/openEuler-22.09/everything/$basearch/RPM-GPG-KEY-openEuler
[EPOL]
name=EPOL
baseurl=https://archives.openeuler.openatom.cn/openEuler-22.09/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://archives.openeuler.openatom.cn/openEuler-22.09/OS/$basearch/RPM-GPG-KEY-openEuler
[debuginfo]
name=debuginfo
baseurl=https://archives.openeuler.openatom.cn/openEuler-22.09/debuginfo/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://archives.openeuler.openatom.cn/openEuler-22.09/debuginfo/$basearch/RPM-GPG-KEY-openEuler
[source]
name=source
baseurl=https://archives.openeuler.openatom.cn/openEuler-22.09/source/
enabled=1
gpgcheck=1
gpgkey=https://archives.openeuler.openatom.cn/openEuler-22.09/source/RPM-GPG-KEY-openEuler
[update]
name=update
baseurl=https://archives.openeuler.openatom.cn/openEuler-22.09/update/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://archives.openeuler.openatom.cn/openEuler-22.09/OS/$basearch/RPM-GPG-KEY-openEuler
After changing the configuration file, refresh the yum metadata:
[root@controller ~]# yum clean all
[root@controller ~]# yum makecache
[root@controller ~]# yum update
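To confirm the new repositories are active before proceeding, you can list them (an optional quick check; the exact output depends on your mirror):
[root@controller ~]# dnf repolist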
2.2 Setting the hostname and hosts file
Next, set each node's hostname to its planned name; controller is shown as the example:
[root@localhost ~]# hostnamectl set-hostname controller
[root@localhost ~]# bash
Then add the following entries to /etc/hosts on every node:
[root@storage ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.200.150 controller
192.168.200.151 compute
192.168.200.152 storage
2.3 Time synchronization
The cluster requires every node to keep the same time, which is normally guaranteed by a clock synchronization daemon; chrony is used here.
Controller node:
(1) Disable the firewall and set SELinux to permissive (run on all three machines):
[root@controller ~]# systemctl stop firewalld
[root@controller ~]# systemctl disable firewalld
[root@controller ~]# setenforce 0
(2) Install the service:
[root@controller ~]# dnf install chrony
(3) Edit /etc/chrony.conf and add one line:
allow 192.168.200.0/24 # which IPs are allowed to sync time from this node
(4) Restart the service:
[root@controller ~]# systemctl restart chronyd
Other nodes
- Install the service:
[root@compute ~]# dnf install chrony
- Edit /etc/chrony.conf and add two lines:
allow 192.168.200.0/24
server 192.168.200.150 iburst
Also comment out the line pool pool.ntp.org iburst, so the node does not sync time from the public internet.
- Restart the service:
systemctl restart chronyd
After configuring, check the result: on each non-controller node run chronyc sources; output similar to the following indicates the node is successfully syncing time from the controller.
[root@compute ~]# chronyc sources
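A source line beginning with ^* means the controller has been selected as the current synchronization source. For a more detailed optional check of offset and stratum, chrony also offers a tracking subcommand:
[root@compute ~]# chronyc tracking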
2.4 Installing the database
The database runs on the controller node; MariaDB is recommended.
(1) Install the packages:
[root@controller ~]# dnf install mysql-config mariadb mariadb-server python3-PyMySQL
(2) Create the configuration file /etc/my.cnf.d/openstack.cnf with the following contents:
[root@controller ~]# cat /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.200.150
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
(3) Start the server:
[root@controller ~]# systemctl start mariadb
(4) Initialize the database, following the prompts:
[root@controller ~]# mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!
In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
haven't set the root password yet, you should just press enter here.
Enter current password for root (enter for none):
# Enter the current root password; since this is a fresh install, just press Enter.
OK, successfully used password, moving on...
Setting the root password or using the unix_socket ensures that nobody
can log into the MariaDB root user without the proper authorisation.
You already have your root account protected, so you can safely answer 'n'.
Switch to unix_socket authentication [Y/n] n # enter n
... skipping.
You already have your root account protected, so you can safely answer 'n'.
Change the root password? [Y/n] y # enter y (set the database password; 000000 is suggested)
New password: your password
Re-enter new password: repeat your password
Password updated successfully!
Reloading privilege tables..
... Success!
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.
Remove anonymous users? [Y/n] y # enter y to remove anonymous users
... Success!
Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? [Y/n] y # enter y to disable remote root login
... Success!
By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n] y # enter y to drop the test database
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? [Y/n] y # enter y to reload the privilege tables
... Success!
Cleaning up...
All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.
Thanks for using MariaDB!
(5) Verify: using the password set in step 4, check that you can log in to MariaDB:
[root@controller ~]# mysql -uroot -p
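As an optional extra check, you can list the databases non-interactively (assuming the root password 000000 set above), and enable MariaDB at boot, since the step above only starts it:
[root@controller ~]# mysql -uroot -p000000 -e "SHOW DATABASES;"
[root@controller ~]# systemctl enable mariadb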
2.5 Installing the message queue
The message queue runs on the controller node; RabbitMQ is recommended.
- Install the package:
[root@controller ~]# dnf install rabbitmq-server
- Start the service:
[root@controller ~]# systemctl start rabbitmq-server
- Create the openstack user. 000000 here is the password the OpenStack services use to log in to the message queue; it must match the transport_url settings in each service's configuration later.
[root@controller ~]# rabbitmqctl add_user openstack 000000
Adding user "openstack" ...
Done. Don't forget to grant the user permissions to some virtual hosts! See 'rabbitmqctl help set_permissions' to learn more.
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
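Optionally, confirm the user and its permissions, and enable the service at boot since the step above only starts it:
[root@controller ~]# rabbitmqctl list_users
[root@controller ~]# rabbitmqctl list_permissions
[root@controller ~]# systemctl enable rabbitmq-server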
2.6 Installing the cache service
The cache service runs on the controller node; Memcached is recommended.
- Install the packages:
[root@controller ~]# dnf install memcached python3-memcached
- Edit the configuration file /etc/sysconfig/memcached:
[root@controller ~]# cat /etc/sysconfig/memcached
OPTIONS="-l 127.0.0.1,::1,controller"
- Start the service:
[root@controller ~]# systemctl start memcached
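Optionally, verify that Memcached is listening on port 11211 and enable it at boot (ss comes from the iproute package, which openEuler ships by default):
[root@controller ~]# ss -tlnp | grep 11211
[root@controller ~]# systemctl enable memcached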
Deploying the services
3.1 Installing Keystone
Keystone is OpenStack's identity service and the entry point to the whole platform, providing tenant isolation, user authentication, and service discovery. It must be installed.
- Create the keystone database and grant privileges:
[root@controller ~]# mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '000000';
MariaDB [(none)]> exit
Note: 000000 here is the password you set for this database user. Pick whatever you like, but this deployment involves many passwords, so using the same one everywhere is recommended.
- Install the packages:
[root@controller ~]# dnf install openstack-keystone httpd mod_wsgi
- Configure Keystone:
[root@controller ~]# vi /etc/keystone/keystone.conf
[database]
# database entry point
connection = mysql+pymysql://keystone:000000@controller/keystone
[token]
# token provider
provider = fernet
- Synchronize the database:
[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
- Initialize the Fernet key repositories:
[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
- Bootstrap the Identity service:
[root@controller ~]# keystone-manage bootstrap --bootstrap-password 000000 \
> --bootstrap-admin-url http://controller:5000/v3/ \
> --bootstrap-internal-url http://controller:5000/v3/ \
> --bootstrap-public-url http://controller:5000/v3/ \
> --bootstrap-region-id RegionOne
- Configure the Apache HTTP server
Edit the configuration file /etc/httpd/conf/httpd.conf:
[root@controller ~]# vi /etc/httpd/conf/httpd.conf
#Check whether the following line exists and add it if not
ServerName controller
Create a symbolic link so the WSGI configuration takes effect:
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
- Start the Apache HTTP service:
[root@controller ~]# systemctl enable httpd.service
[root@controller ~]# systemctl start httpd.service
[root@controller ~]# systemctl status httpd.service
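At this point the Identity API should answer on port 5000; an optional quick check with curl, which should return a JSON version document:
[root@controller ~]# curl http://controller:5000/v3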
- Create the environment variable file:
[root@controller ~]# cat << EOF >> ~/.admin-openrc
> export OS_PROJECT_DOMAIN_NAME=Default
> export OS_USER_DOMAIN_NAME=Default
> export OS_PROJECT_NAME=admin
> export OS_USERNAME=admin
> export OS_PASSWORD=000000
> export OS_AUTH_URL=http://controller:5000/v3
> export OS_IDENTITY_API_VERSION=3
> export OS_IMAGE_API_VERSION=2
> EOF
000000 here is the admin password set earlier by keystone-manage bootstrap.
(10) Create domains, projects, users, and roles in turn.
python3-openstackclient must be installed first:
[root@controller ~]# dnf install python3-openstackclient
Load the environment variables and verify:
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# env | grep OS_
Create the service project (the default domain was already created by keystone-manage bootstrap):
[root@controller ~]# openstack domain create --description "An Example Domain" example
[root@controller ~]# openstack project create --domain default --description "Service Project" service
Create a non-admin project myproject, user myuser, and role myrole, then grant myrole on myproject to myuser.
[root@controller ~]# openstack project create --domain default --description "Demo Project" myproject
Create the user (you will be prompted for a password; I used 000000):
[root@controller ~]# openstack user create --domain default --password-prompt myuser
[root@controller ~]# openstack role create myrole
Assign the role myrole to user myuser on project myproject, then verify that the role was assigned successfully:
[root@controller ~]# openstack role add --project myproject --user myuser myrole
[root@controller ~]# openstack role assignment list --project myproject --user myuser
- Verification
Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# unset OS_AUTH_URL OS_PASSWORD
Request a token for the admin user:
[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \
> --os-project-domain-name Default --os-user-domain-name Default \
> --os-project-name admin --os-username admin token issue
The password prompted for here is the admin password (000000) set during bootstrap.
Request a token for the myuser user:
[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \
> --os-project-domain-name Default --os-user-domain-name Default \
> --os-project-name myproject --os-username myuser token issue
The password here is the one set for myuser.
3.2 Installing Glance
Glance is OpenStack's Image Service, responsible for managing and storing virtual machine images. It lets users upload, download, delete, and query images, and supports multiple image formats (QCOW2, RAW, VMDK, and more). Glance is a core dependency of the Compute service (Nova), providing the boot images for virtual machines, and must be installed!!!
- Create the glance database and grant privileges:
[root@controller ~]# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
-> IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
-> IDENTIFIED BY '000000';
MariaDB [(none)]> exit
- Initialize the glance Keystone objects. Load the environment variables and check:
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# env | grep OS_
- Create the user. The CLI will prompt for a password; enter one of your choosing and use the same value wherever it appears below (this guide uses 000000):
[root@controller ~]# openstack user create --domain default --password-prompt glance
- Add the glance user to the service project with the admin role:
[root@controller ~]# openstack role add --project service --user glance admin
- Create the glance service entity:
[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image
- Create the glance API endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
- Install the package:
[root@controller ~]# dnf install openstack-glance
- Edit the glance configuration file:
[root@controller ~]# vi /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:000000@controller/glance
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 000000
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Explanation:
The [database] section configures the database entry point.
The [keystone_authtoken] and [paste_deploy] sections configure the identity service entry points.
The [glance_store] section configures local filesystem storage and the image file location.
- Synchronize the database:
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
- Start the service:
[root@controller ~]# systemctl enable openstack-glance-api.service
[root@controller ~]# systemctl start openstack-glance-api.service
[root@controller ~]# systemctl status openstack-glance-api.service
- Verification
Load the environment variables and verify:
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# env | grep OS_
Download an image.
x86 image:
[root@controller ~]# wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
arm image:
[root@controller ~]# wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-aarch64-disk.img
Upload the image to the Image service:
[root@controller ~]# openstack image create --disk-format qcow2 --container-format bare \
> --file cirros-0.4.0-x86_64-disk.img --public cirros
Confirm the upload and verify the image attributes:
[root@controller ~]# openstack image list
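Optionally, inspect the image in detail; its status field should read active (assuming the image name cirros used above):
[root@controller ~]# openstack image show cirros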
3.3 Installing Placement
Placement is a core OpenStack service responsible for resource tracking and allocation. It is an essential part of the Compute service (Nova), managing compute node resources (CPU, memory, storage, and so on) to ensure effective utilization and balanced scheduling.
Before installing and configuring the Placement service, create its database, service credentials, and API endpoints.
- Create the database
Access the database service as root:
[root@controller ~]# mysql -u root -p
Create the placement database:
MariaDB [(none)]> CREATE DATABASE placement;
Grant database access (000000 is the database password):
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
-> IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
-> IDENTIFIED BY '000000';
Exit the database client:
MariaDB [(none)]> exit
- Configure the user and endpoints
Source the admin credentials to gain admin CLI access:
[root@controller ~]# source ~/.admin-openrc
Create the placement user and set its password:
[root@controller ~]# openstack user create --domain default --password-prompt placement
User Password:000000
Repeat User Password:000000
Add the placement user to the service project with the admin role:
[root@controller ~]# openstack role add --project service --user placement admin
Create the placement service entity:
[root@controller ~]# openstack service create --name placement \
> --description "Placement API" placement
Create the Placement API endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne \
> placement public http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne \
> placement internal http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne \
> placement admin http://controller:8778
- Install and configure the components
Install the package:
[root@controller ~]# dnf install openstack-placement-api
Edit the /etc/placement/placement.conf configuration file as follows:
[root@controller ~]# vi /etc/placement/placement.conf
# Add this under the [placement_database] section (around line 514) to configure the database entry point
[placement_database]
connection = mysql+pymysql://placement:000000@controller/placement
In the [api] and [keystone_authtoken] sections, configure the identity service entry points (around lines 191 and 241 respectively):
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = 000000 # the placement user's password
Synchronize the database, populating the Placement schema:
[root@controller ~]# su -s /bin/sh -c "placement-manage db sync" placement
- Restart the service
Restart httpd:
[root@controller ~]# systemctl restart httpd
- Verification
Source the admin credentials to gain admin CLI access:
[root@controller ~]# source ~/.admin-openrc
Run the status check:
[root@controller ~]# placement-status upgrade check
The Policy File JSON to YAML Migration check here reports Failure. This is because JSON-format policy files have been deprecated in Placement since the Wallaby release. As the hint suggests, use the oslopolicy-convert-json-to-yaml tool to convert the existing JSON policy file to YAML:
[root@controller ~]# oslopolicy-convert-json-to-yaml --namespace placement \
> --policy-file /etc/placement/policy.json \
> --output-file /etc/placement/policy.yaml
[root@controller ~]# mv /etc/placement/policy.json{,.bak}
Note: in the current environment this issue can be ignored; it does not affect operation.
Run commands against the placement API.
Install the osc-placement plugin:
[root@controller ~]# dnf install python3-osc-placement
List the available resource classes and traits:
[root@controller ~]# openstack --os-placement-api-version 1.2 resource class list --sort-column name
[root@controller ~]# openstack --os-placement-api-version 1.6 trait list --sort-column name
3.4 Installing Nova
Nova is one of OpenStack's core components, managing the lifecycle of virtual machine instances: creating, scheduling, starting, stopping, rebooting, and deleting them. Nova relies on other OpenStack components (Keystone for authentication, Glance for image management, Neutron for networking, and so on) to do its work.
- Create the databases
Access the database service as root:
[root@controller ~]# mysql -u root -p
Create the nova_api, nova, and nova_cell0 databases:
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
Grant database access:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
-> IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.005 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
-> IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.002 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
-> IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.006 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
-> IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.006 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
-> IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.004 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
-> IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.005 sec)
000000 here is the database password.
Exit the database client:
MariaDB [(none)]> exit
(2) Configure the user and endpoints
Source the admin credentials to gain admin CLI access:
[root@controller ~]# source ~/.admin-openrc
Create the nova user and set its password:
[root@controller ~]# openstack user create --domain default --password-prompt nova
User Password:000000
Repeat User Password:000000
Add the nova user to the service project with the admin role:
[root@controller ~]# openstack role add --project service --user nova admin
Create the nova service entity:
[root@controller ~]# openstack service create --name nova \
> --description "OpenStack Compute" compute
Create the Nova API endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne \
> compute public http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne \
> compute internal http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne \
> compute admin http://controller:8774/v2.1
(3) Install and configure the components
Install the packages:
[root@controller ~]# dnf install openstack-nova-api openstack-nova-conductor \
> openstack-nova-novncproxy openstack-nova-scheduler
Edit the /etc/nova/nova.conf configuration file as follows.
In the [DEFAULT] section, enable the compute and metadata APIs, configure the RabbitMQ message queue entry point, set my_ip to the controller node's management IP, and define log_dir explicitly:
[root@controller ~]# vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:000000@controller:5672/
my_ip = 192.168.200.150
log_dir = /var/log/nova
000000 here is the password of the RabbitMQ openstack account, and my_ip is this host's IP address.
In the [api_database] and [database] sections, configure the database entry points (around lines 1088 and 1821):
[api_database]
connection = mysql+pymysql://nova:000000@controller/nova_api
[database]
connection = mysql+pymysql://nova:000000@controller/nova
000000 is the password for the nova databases.
In the [api] and [keystone_authtoken] sections, configure the identity service entry points (around lines 881 and 2759):
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 000000
000000 is the nova user's password.
In the [vnc] section, enable and configure the remote console (around line 5424):
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
In the [glance] section, configure the image service API address (around line 2120):
[glance]
api_servers = http://controller:9292
In the [oslo_concurrency] section, configure the lock path (around line 3818):
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
In the [placement] section, configure the placement service entry point (around line 4387):
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 000000 # the placement user's password
Synchronize the databases.
Populate the nova-api database:
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
Register the cell0 database:
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell:
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
Populate the nova database:
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
Verify that cell0 and cell1 are registered correctly:
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
- Start the services
[root@controller ~]# systemctl enable \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
[root@controller ~]# systemctl start \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
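Before moving on to the compute node, it does no harm to confirm that the four services are active:
[root@controller ~]# systemctl status openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy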
Compute node deployment
Run the following on the compute node.
- Install the package:
[root@compute ~]# dnf install openstack-nova-compute
(2) Edit the /etc/nova/nova.conf configuration file.
In the [DEFAULT] section, enable the compute and metadata APIs, configure the RabbitMQ message queue entry point, set my_ip to the compute node's management IP, and explicitly define compute_driver, instances_path, and log_dir:
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:000000@controller:5672/
my_ip = 192.168.200.151
compute_driver = libvirt.LibvirtDriver
instances_path = /var/lib/nova/instances
log_dir = /var/log/nova
In the [api] and [keystone_authtoken] sections, configure the identity service entry points (around lines 883 and 2759):
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 000000
In the [vnc] section, enable and configure the remote console (around line 5423):
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
In the [glance] section, configure the image service API address (around line 2120):
[glance]
api_servers = http://controller:9292
In the [oslo_concurrency] section, configure the lock path (around line 3818):
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
In the [placement] section, configure the placement service entry point (around line 4387):
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 000000
000000 is the placement user's password.
(3) Check whether the compute node supports hardware acceleration (x86_64)
On an x86_64 processor, run the following to check for hardware acceleration support:
[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
If the command returns 0, hardware acceleration is not supported, and libvirt must be configured to use QEMU instead of the default KVM. Edit the [libvirt] section of /etc/nova/nova.conf (around line 2929):
[libvirt]
virt_type = qemu
If the command returns 1 or more, hardware acceleration is supported and no extra configuration is needed.
(4) Check whether the compute node supports hardware acceleration (arm64)
On an arm64 processor, run the following to check for hardware acceleration support:
[root@compute ~]# virt-host-validate
# this command is provided by libvirt, which was already installed as a dependency of openstack-nova-compute
If the output shows FAIL, hardware acceleration is not supported, and libvirt must be configured to use QEMU instead of the default KVM.
Edit the [libvirt] section of /etc/nova/nova.conf (around line 2929):
[libvirt]
virt_type = qemu
If the output shows PASS, hardware acceleration is supported and no extra configuration is needed, for example:
QEMU: Checking if device /dev/kvm exists: PASS
(5) Configure qemu (arm64 only)
Perform this step only on arm64 processors.
Edit /etc/libvirt/qemu.conf:
[root@compute ~]# vi /etc/libvirt/qemu.conf
nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2/aarch64/vars-template-pflash.raw"]
Edit /etc/qemu/firmware/edk2-aarch64.json:
[root@compute ~]# vi /etc/qemu/firmware/edk2-aarch64.json
{
"description": "UEFI firmware for ARM64 virtual machines",
"interface-types": [
"uefi"
],
"mapping": {
"device": "flash",
"executable": {
"filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
"format": "raw"
},
"nvram-template": {
"filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
"format": "raw"
}
},
"targets": [
{
"architecture": "aarch64",
"machines": [
"virt-*"
]
}
],
"features": [
],
"tags": [
]
}
- Start the services
[root@compute ~]# systemctl enable libvirtd.service openstack-nova-compute.service
[root@compute ~]# systemctl start libvirtd.service openstack-nova-compute.service
Controller node
Run the following on the controller node to add the compute node to the OpenStack cluster.
(1) Source the admin credentials to gain admin CLI access:
[root@controller ~]# source ~/.admin-openrc
Confirm that the nova-compute service is registered in the database:
[root@controller ~]# openstack compute service list --service nova-compute
Discover the compute node and add it to the cell database:
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
- Verification
List the service components to verify that each process started and registered successfully:
[root@controller ~]# openstack compute service list
List the API endpoints in the identity service to verify connectivity with it:
[root@controller ~]# openstack catalog list
List the images in the image service to verify connectivity with it:
[root@controller ~]# openstack image list
Check that the cells and placement API are working, and that the other prerequisites are in place:
[root@controller ~]# nova-status upgrade check
3.5 Installing Neutron
Controller node
Neutron is OpenStack's networking component, providing network connectivity and IP address management for the OpenStack environment. It lets users create and manage virtual networks, subnets, routers, security groups, and other network resources, giving virtual machines their network functionality.
(1) Create the neutron database and grant privileges:
[root@controller ~]# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '000000';
MariaDB [(none)]> exit
(2) Set the environment variables:
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# env | grep OS_
(3) Create the user and service; remember the password you enter for the neutron user:
[root@controller ~]# openstack user create --domain default --password-prompt neutron
User Password:000000
Repeat User Password:000000
[root@controller ~]# openstack role add --project service --user neutron admin
[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network
Create the Neutron API endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696
(4) Install the packages:
[root@controller ~]# dnf install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset openstack-neutron-ml2
(5) Configure Neutron
Edit /etc/neutron/neutron.conf:
[root@controller ~]# vi /etc/neutron/neutron.conf
[database] # around line 468
connection = mysql+pymysql://neutron:000000@controller/neutron
[DEFAULT] # line 1
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[keystone_authtoken] # around line 598
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = 000000
[nova] # this section is not present; add it around line 772
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = 000000
[oslo_concurrency] # around line 770
lock_path = /var/lib/neutron/tmp
Configure ML2. The ML2 settings can be adjusted to your needs; this guide uses a provider network with linuxbridge.
Edit /etc/neutron/plugins/ml2/ml2_conf.ini (these sections are not present; add them at the top):
[root@controller ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini:
[root@controller ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens33
[vxlan]
enable_vxlan = true
local_ip = 192.168.200.150
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the Layer-3 agent.
Edit /etc/neutron/l3_agent.ini:
[root@controller ~]# vi /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
Configure the DHCP agent. Edit /etc/neutron/dhcp_agent.ini:
[root@controller ~]# vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Configure the metadata agent.
Edit /etc/neutron/metadata_agent.ini:
[root@controller ~]# vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
Configure the nova service to use neutron. Edit /etc/nova/nova.conf:
[root@controller ~]# vi /etc/nova/nova.conf
[neutron] # around line 3581
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
Create a symbolic link for /etc/neutron/plugin.ini:
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Synchronize the database:
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the nova API service:
[root@controller ~]# systemctl restart openstack-nova-api
Start the networking services:
[root@controller ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service \
> neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
[root@controller ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service \
> neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
Compute node:
- Install the packages:
[root@compute ~]# dnf install openstack-neutron-linuxbridge ebtables ipset -y
(2) Configure Neutron
Edit /etc/neutron/neutron.conf:
[root@compute ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
[keystone_authtoken] # around line 592
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = 000000
[oslo_concurrency] # around line 763
lock_path = /var/lib/neutron/tmp
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini (the sections are not present; add them directly). Replace PROVIDER_INTERFACE_NAME with the compute node's provider NIC (ens33 in this guide) and OVERLAY_INTERFACE_IP_ADDRESS with its management IP (192.168.200.151 here):
[root@compute ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the nova compute service to use neutron. Edit /etc/nova/nova.conf:
[root@compute ~]# vi /etc/nova/nova.conf
[neutron] # around line 3581
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
Restart the nova-compute service:
[root@compute ~]# systemctl restart openstack-nova-compute.service
Start the Neutron linuxbridge agent service:
[root@compute ~]# systemctl enable neutron-linuxbridge-agent
[root@compute ~]# systemctl start neutron-linuxbridge-agent
[root@compute ~]# systemctl status neutron-linuxbridge-agent
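Back on the controller, you can optionally confirm that all agents registered; with this topology you would expect a linuxbridge agent on both nodes plus the DHCP, metadata, and L3 agents on the controller:
[root@controller ~]# openstack network agent list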
3.6 Installing Cinder
Cinder is OpenStack's block storage service, providing creation, provisioning, and backup of block devices.
Controller node:
(1) Initialize the database (000000 is the password I set for cinder):
[root@controller ~]# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '000000';
MariaDB [(none)]> exit
- Initialize the Keystone objects
[root@controller ~]# source ~/.admin-openrc
Create the user:
[root@controller ~]# openstack user create --domain default --password-prompt cinder
User Password:000000
Repeat User Password:000000
[root@controller ~]# openstack role add --project service --user cinder admin
[root@controller ~]# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
- Install the packages:
[root@controller ~]# dnf install openstack-cinder-api openstack-cinder-scheduler
(4) Edit the cinder configuration file /etc/cinder/cinder.conf:
[root@controller ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
my_ip = 192.168.200.150
[database] # around line 420
connection = mysql+pymysql://cinder:000000@controller/cinder
[keystone_authtoken] # around line 627
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = 000000
[oslo_concurrency] # around line 798
lock_path = /var/lib/cinder/tmp
(5) Synchronize the database:
[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
(6) Edit the nova configuration /etc/nova/nova.conf:
[root@controller ~]# vi /etc/nova/nova.conf
[cinder] # around line 1439
os_region_name = RegionOne
(7) Start the services:
[root@controller ~]# systemctl restart openstack-nova-api
[root@controller ~]# systemctl start openstack-cinder-api openstack-cinder-scheduler
[root@controller ~]# systemctl status openstack-cinder-api openstack-cinder-scheduler
Storage node:
The storage node needs at least one spare disk prepared in advance as cinder's storage backend. The steps below assume the storage node already has an unused disk named /dev/sda; substitute the real device name from your environment.
Cinder supports many kinds of backend storage; this guide uses the simplest, LVM, as a reference. To use another backend such as Ceph, configure it yourself.
- Install the packages:
[root@storage ~]# dnf install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils openstack-cinder-volume openstack-cinder-backup
(2) Configure the LVM volume group:
[root@storage ~]# pvcreate /dev/sda
[root@storage ~]# vgcreate cinder-volumes /dev/sda
(3) Edit the cinder configuration /etc/cinder/cinder.conf:
[root@storage ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
my_ip = 192.168.200.152
enabled_backends = lvm
glance_api_servers = http://controller:9292
[keystone_authtoken] # around line 628
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 000000
[database] # around line 422
connection = mysql+pymysql://cinder:000000@controller/cinder
[lvm] # this section is not present; add it around line 800
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm
[oslo_concurrency] # around line 800
lock_path = /var/lib/cinder/tmp
(4) Configure cinder backup (optional)
cinder-backup is an optional backup service. Cinder likewise supports many backup backends; this guide uses swift storage. To use another backend such as NFS, configure it yourself, for example by following the OpenStack documentation on NFS.
(5) Edit /etc/cinder/cinder.conf and add to [DEFAULT]:
[root@storage ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
backup_swift_url = SWIFT_URL
SWIFT_URL is the URL of the swift service in your environment; after swift is deployed, obtain it with openstack catalog show object-store.
(6) Start the services:
[root@storage ~]# systemctl start openstack-cinder-volume target
[root@storage ~]# systemctl start openstack-cinder-backup (optional)
(7) Cinder deployment is now complete; run a simple verification from the controller:
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# openstack volume service list
[root@controller ~]# openstack volume list
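As a further optional smoke test, create a small volume and watch it reach the available state (the 1 GB size and the name test-vol are arbitrary choices, not from the original guide):
[root@controller ~]# openstack volume create --size 1 test-vol
[root@controller ~]# openstack volume list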
3.7 Installing Horizon
Horizon is OpenStack's web front end, letting users control the OpenStack cluster with a browser and mouse instead of verbose CLI commands. Horizon is normally deployed on the controller node.
Controller node:
(1) Install the package:
[root@controller ~]# dnf install openstack-dashboard
(2) Edit the configuration file /etc/openstack-dashboard/local_settings:
[root@controller ~]# vi /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
OPENSTACK_KEYSTONE_URL = "http://controller:5000/v3"
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
WEBROOT = '/dashboard'
POLICY_FILES_PATH = "/etc/openstack-dashboard"
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 3,
}
(3) Restart the service:
[root@controller ~]# systemctl restart httpd
Horizon deployment is now complete. Open a browser and go to http://192.168.200.150/dashboard to reach the Horizon login page.
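If the page does not load, checking whether httpd serves the dashboard path locally can help separate a service problem from a network problem (an optional check; -L follows the redirect to the login page):
[root@controller ~]# curl -sL http://controller/dashboard | head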
3.8 Installing Trove
Trove is OpenStack's database service. It is recommended if users will consume databases provided by OpenStack; otherwise it need not be installed.
Controller node:
(1) Create the database.
The database service stores its information in a database of its own; create a trove database that the trove user can access:
[root@controller ~]# mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' IDENTIFIED BY '000000';
MariaDB [(none)]> exit
(2) Create the service credentials and API endpoints.
Create the service credentials.
# Create the trove user
[root@controller ~]# openstack user create --domain default --password-prompt trove
User Password:000000
Repeat User Password:000000
# Add the admin role
[root@controller ~]# openstack role add --project service --user trove admin
# Create the database service
[root@controller ~]# openstack service create --name trove --description "Database service" database
Create the API endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
(3) Install Trove:
[root@controller ~]# dnf install openstack-trove python-troveclient
(4) Edit the configuration files.
Edit /etc/trove/trove.conf:
[root@controller ~]# vi /etc/trove/trove.conf
[DEFAULT]
bind_host=192.168.200.150
log_dir = /var/log/trove
network_driver = trove.network.neutron.NeutronDriver
network_label_regex=.*
management_security_groups = <manage security group>
nova_keypair = trove-mgmt
default_datastore = mysql
taskmanager_manager = trove.taskmanager.manager.Manager
trove_api_workers = 5
transport_url = rabbit://openstack:000000@controller:5672/
reboot_time_out = 300
usage_timeout = 900
agent_call_high_timeout = 1200
use_syslog = False
debug = True
[database] # around line 854
connection = mysql+pymysql://trove:000000@controller/trove
[keystone_authtoken] # around line 931
auth_url = http://controller:5000/v3/
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = trove
password = 000000
[service_credentials] # around line 2025
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
project_domain_name = Default
user_domain_name = Default
username = trove
password = 000000
[mariadb] # around line 1207
tcp_ports = 3306,4444,4567,4568
[mysql] # around line 1310
tcp_ports = 3306
[postgresql] # around line 1931
tcp_ports = 5432
(5) Edit /etc/trove/trove-guestagent.conf:
[root@controller ~]# vi /etc/trove/trove-guestagent.conf
[DEFAULT]
log_file = trove-guestagent.log
log_dir = /var/log/trove/
ignore_users = os_admin
control_exchange = trove
transport_url = rabbit://openstack:000000@controller:5672/
rpc_backend = rabbit
command_process_timeout = 60
use_syslog = False
debug = True
[service_credentials]
auth_url = http://controller:5000/v3/
region_name = RegionOne
project_name = service
password = 000000
project_domain_name = Default
user_domain_name = Default
username = trove
[mysql]
docker_image = your-registry/your-repo/mysql
backup_docker_image = your-registry/your-repo/db-backup-mysql:1.1.0
(6) Synchronize the database:
[root@controller ~]# su -s /bin/sh -c "trove-manage db_sync" trove
(7) Enable the services at boot:
[root@controller ~]# systemctl enable openstack-trove-api.service openstack-trove-taskmanager.service \
> openstack-trove-conductor.service
(8) Start the services:
[root@controller ~]# systemctl start openstack-trove-api.service openstack-trove-taskmanager.service \
> openstack-trove-conductor.service
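Since python-troveclient was installed above, the API can be smoke-tested; an empty list is the expected result on a fresh deployment:
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# openstack database instance list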
3.9 Installing Cyborg
Cyborg provides accelerator support for OpenStack, covering GPUs, FPGAs, ASICs, NPs, SoCs, NVMe/NOF SSDs, ODP, DPDK/SPDK, and more.
Controller node:
- Initialize the corresponding database:
[root@controller ~]# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE cyborg;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY '000000';
MariaDB [(none)]> exit
- Create the user and service; remember the password you enter for the cyborg user, which is used as CYBORG_PASS (000000 in this guide):
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# openstack user create --domain default --password-prompt cyborg
User Password:000000
Repeat User Password:000000
[root@controller ~]# openstack role add --project service --user cyborg admin
[root@controller ~]# openstack service create --name cyborg --description "Acceleration Service" accelerator
Deploy the Cyborg API service with uwsgi, creating its endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne accelerator public http://controller/accelerator/v2
[root@controller ~]# openstack endpoint create --region RegionOne accelerator internal http://controller/accelerator/v2
[root@controller ~]# openstack endpoint create --region RegionOne accelerator admin http://controller/accelerator/v2
(3) Install Cyborg:
[root@controller ~]# dnf install openstack-cyborg
(4) Configure Cyborg
Edit /etc/cyborg/cyborg.conf:
[root@controller ~]# vi /etc/cyborg/cyborg.conf
[DEFAULT]
transport_url = rabbit://openstack:000000@controller:5672/
use_syslog = False
state_path = /var/lib/cyborg
debug = True
[api] # around line 318
host_ip = 0.0.0.0
[database] # around line 361
connection = mysql+pymysql://cyborg:000000@controller/cyborg
[service_catalog] # this section is not present; add it around line 1692
cafile = /opt/stack/data/ca-bundle.pem
project_domain_id = default
user_domain_id = default
project_name = service
password = 000000
username = cyborg
auth_url = http://controller:5000/v3/
auth_type = password
[placement] # around line 1662
project_domain_name = Default
project_name = service
user_domain_name = Default
password = 000000 # the placement user's password
username = placement
auth_url = http://controller:5000/v3/
auth_type = password
auth_section = keystone_authtoken
[nova] # around line 974
project_domain_name = Default
project_name = service
user_domain_name = Default
password = 000000
username = nova
auth_url = http://controller:5000/v3/
auth_type = password
auth_section = keystone_authtoken
[keystone_authtoken] # around line 778
memcached_servers = localhost:11211
signing_dir = /var/cache/cyborg/api
cafile = /opt/stack/data/ca-bundle.pem
project_domain_name = Default
project_name = service
user_domain_name = Default
password = 000000
username = cyborg
auth_url = http://controller:5000/v3/
auth_type = password
(5) Synchronize the database tables:
[root@controller ~]# cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
(6) Start the Cyborg services:
[root@controller ~]# systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
[root@controller ~]# systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
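A quick optional status check on the three services:
[root@controller ~]# systemctl status openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent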
3.10 Installing Aodh
Aodh creates alarms from the monitoring data collected by Ceilometer or Gnocchi and applies trigger rules to them.
Controller node:
- Create the database:
[root@controller ~]# mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE aodh;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY '000000';
MariaDB [(none)]> exit
(2) Create the service credentials and API endpoints.
Create the service credentials:
[root@controller ~]# openstack user create --domain default --password-prompt aodh
User Password:000000
Repeat User Password:000000
[root@controller ~]# openstack role add --project service --user aodh admin
[root@controller ~]# openstack service create --name aodh --description "Telemetry" alarming
(3) Create the API endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne alarming public http://controller:8042
[root@controller ~]# openstack endpoint create --region RegionOne alarming internal http://controller:8042
[root@controller ~]# openstack endpoint create --region RegionOne alarming admin http://controller:8042
(4) Install Aodh:
[root@controller ~]# dnf install openstack-aodh-api openstack-aodh-evaluator \
> openstack-aodh-notifier openstack-aodh-listener \
> openstack-aodh-expirer python3-aodhclient
(5) Edit the configuration file:
[root@controller ~]# vi /etc/aodh/aodh.conf
[database] # around line 335
connection = mysql+pymysql://aodh:000000@controller/aodh
[DEFAULT]
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
[keystone_authtoken] # around line 491
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = 000000
[service_credentials] # around line 1223
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = aodh
password = 000000
interface = internalURL
region_name = RegionOne
(6) Synchronize the database:
[root@controller ~]# aodh-dbsync
(7) Complete the installation.
# Enable the services at boot
[root@controller ~]# systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service \
> openstack-aodh-notifier.service openstack-aodh-listener.service
# Start the services
[root@controller ~]# systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service \
> openstack-aodh-notifier.service openstack-aodh-listener.service
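Since python3-aodhclient was installed above, the alarm API can be smoke-tested; an empty list is the expected result on a fresh deployment:
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# aodh alarm list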
3.11 Installing Gnocchi
Gnocchi is an open-source time-series database that integrates with Ceilometer.
Controller node:
- Create the database:
[root@controller ~]# mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE gnocchi;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY '000000';
MariaDB [(none)]> exit
(2) Create the service credentials and API endpoints.
Create the service credentials:
[root@controller ~]# openstack user create --domain default --password-prompt gnocchi
User Password:000000
Repeat User Password:000000
[root@controller ~]# openstack role add --project service --user gnocchi admin
[root@controller ~]# openstack service create --name gnocchi --description "Metric Service" metric
Create the API endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne metric public http://controller:8041
[root@controller ~]# openstack endpoint create --region RegionOne metric internal http://controller:8041
[root@controller ~]# openstack endpoint create --region RegionOne metric admin http://controller:8041
(3) Install Gnocchi:
[root@controller ~]# dnf install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
(4) Edit the configuration file:
[root@controller ~]# vi /etc/gnocchi/gnocchi.conf
[api] # around line 67
auth_mode = keystone
port = 8041
uwsgi_mode = http-socket
[keystone_authtoken] # this section is not present; add it at the end
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = 000000
interface = internalURL
region_name = RegionOne
[indexer] # around line 347
url = mysql+pymysql://gnocchi:000000@controller/gnocchi
[storage] # around line 479
# coordination_url is not required but specifying one will improve
# performance with better workload division across workers.
# coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file
(5) Synchronize the database:
[root@controller ~]# gnocchi-upgrade
(6) Complete the installation.
# Enable the services at boot
[root@controller ~]# systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
# Start the services
[root@controller ~]# systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
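Since python3-gnocchiclient was installed above, the metric API can be smoke-tested; an empty list is expected until Ceilometer starts publishing data:
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# gnocchi metric list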
3.12 Installing Ceilometer
Ceilometer is the OpenStack service responsible for data collection.
Controller node
(1) Create the service credentials:
[root@controller ~]# openstack user create --domain default --password-prompt ceilometer
User Password:000000
Repeat User Password:000000
[root@controller ~]# openstack role add --project service --user ceilometer admin
[root@controller ~]# openstack service create --name ceilometer --description "Telemetry" metering
(2) Install the Ceilometer packages:
[root@controller ~]# dnf install openstack-ceilometer-notification openstack-ceilometer-central
(3) Edit the configuration file /etc/ceilometer/pipeline.yaml:
[root@controller ~]# vi /etc/ceilometer/pipeline.yaml
publishers:
# set address of Gnocchi
# + filter out Gnocchi-related activity meters (Swift driver)
# + set default archive policy
- gnocchi://?filter_project=service&archive_policy=low
(4) Edit the configuration file /etc/ceilometer/ceilometer.conf:
[root@controller ~]# vi /etc/ceilometer/ceilometer.conf
[DEFAULT]
transport_url = rabbit://openstack:000000@controller
[service_credentials] # around line 1133
auth_type = password
auth_url = http://controller:5000/v3
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = 000000
interface = internalURL
region_name = RegionOne
(5) Synchronize the database:
[root@controller ~]# ceilometer-upgrade
(6) Complete the controller-node Ceilometer installation.
# Enable the services at boot
[root@controller ~]# systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
# Start the services
[root@controller ~]# systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
Compute node:
(1) Install the Ceilometer packages:
[root@compute ~]# dnf install openstack-ceilometer-compute
[root@compute ~]# dnf install openstack-ceilometer-ipmi
(2) Edit the configuration file /etc/ceilometer/ceilometer.conf:
[root@compute ~]# vi /etc/ceilometer/ceilometer.conf
[DEFAULT]
transport_url = rabbit://openstack:000000@controller
[service_credentials] # around line 1133
auth_url = http://controller:5000
project_domain_id = default
user_domain_id = default
auth_type = password
username = ceilometer
project_name = service
password = 000000
interface = internalURL
region_name = RegionOne
(3) Edit the configuration file /etc/nova/nova.conf:
[root@compute ~]# vi /etc/nova/nova.conf
[DEFAULT]
instance_usage_audit = True
instance_usage_audit_period = hour
[notifications] # around line 3769
notify_on_state_change = vm_and_task_state
[oslo_messaging_notifications] # around line 4114
driver = messagingv2
(4) Complete the installation:
[root@compute ~]# systemctl enable openstack-ceilometer-compute.service
[root@compute ~]# systemctl enable openstack-ceilometer-ipmi.service
[root@compute ~]# systemctl start openstack-ceilometer-compute.service
[root@compute ~]# systemctl start openstack-ceilometer-ipmi.service
# Restart the nova-compute service
[root@compute ~]# systemctl restart openstack-nova-compute.service
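After a few minutes, meters for the compute resources should start appearing in Gnocchi; an optional check with the gnocchi client installed earlier:
[root@controller ~]# source ~/.admin-openrc
[root@controller ~]# gnocchi resource list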
OpenStack is now fully installed.
Visit http://192.168.200.150/dashboard and log in with account admin, password 000000, and domain default.
Click Log In and you are done.