Preface:
Early on, Jenkins handled all of the CI/CD in Kubernetes (see the earlier post on Jenkins Pipeline evolution). Here I plan to split the CD (continuous delivery) part out to Spinnaker!
Of course, the normal approach would be to unify the Jenkins and Spinnaker user accounts first by integrating both with LDAP. Spinnaker's account system is already integrated with LDAP, and I have experimented with Jenkins and LDAP before, so I will skip the Jenkins LDAP integration steps here. After all, the goal is the hands-on work of splitting out the pipeline, and unifying the account systems is not that urgent yet! That said, I think the first step is still missing image scanning, so let's tackle image scanning first. Security comes first, after all.
Image scanning in the Jenkins pipeline
Note: Harbor is used as the image registry.
Trivy
Harbor's default image scanner is Trivy. If I remember correctly, it was Clair in earlier versions?
A look at the Harbor API (it cannot feed a scan report back into the pipeline)
I took a quick look at the Harbor API. It can trigger a scan directly:
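For example, something along these lines triggers a scan of a single artifact (the project, repository, and tag below are examples from my setup, and <harbor-user>/<harbor-password> are placeholders; the path follows the Harbor v2.0 API):

curl -s -X POST -H "accept: application/json" -u "<harbor-user>:<harbor-password>" \
  "https://harbor.xxxx.com/api/v2.0/projects/spinnaker/repositories/spinnaker-nginx-demo/artifacts/202111192008/scan"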
But there is a drawback: I want the report shown directly in the Jenkins pipeline, and the GET endpoint only returns the scan log. I can hardly have the Jenkins pipeline trigger the automatic scan in Harbor and then still log in to Harbor to confirm whether the image has vulnerabilities, can I? So as an external integration this feature is rather half-baked. Still, in the spirit of learning I gave automatic image scanning in the Jenkins pipeline a try, starting from Zeyang's image auto-cleanup example:
import groovy.json.JsonSlurper
//Docker image registry info
registryServer = "harbor.layame.com"
projectName = "${JOB_NAME}".split('-')[0]
repoName = "${JOB_NAME}"
imageName = "${registryServer}/${projectName}/${repoName}"
// NOTE: 'data' is used below as the image tag but was not defined in the original snippet;
// it is assumed to be a build timestamp such as 202111192008.
data = new Date().format("yyyyMMddHHmm")
harborAPI = ""
//pipeline
pipeline{
agent { node { label "build01"}}
//configure the build trigger
triggers {
GenericTrigger( causeString: 'Generic Cause',
genericVariables: [[defaultValue: '', key: 'branchName', regexpFilter: '', value: '$.ref']],
printContributedVariables: true,
printPostContent: true,
regexpFilterExpression: '',
regexpFilterText: '',
silentResponse: true,
token: 'spinnaker-nginx-demo')
}
stages{
stage("CheckOut"){
steps{
script{
srcUrl = "https://gitlab.xxxx.com/zhangpeng/spinnaker-nginx-demo.git"
branchName = branchName - "refs/heads/"
currentBuild.description = "Trigger by ${branchName}"
println("${branchName}")
checkout([$class: 'GitSCM',
branches: [[name: "${branchName}"]],
doGenerateSubmoduleConfigurations: false,
extensions: [],
submoduleCfg: [],
userRemoteConfigs: [[credentialsId: 'gitlab-admin-user',
url: "${srcUrl}"]]])
}
}
}
stage("Push Image "){
steps{
script{
withCredentials([usernamePassword(credentialsId: 'harbor-admin-user', passwordVariable: 'password', usernameVariable: 'username')]) {
sh """
sed -i -- "s/VER/${branchName}/g" app/index.html
docker login -u ${username} -p ${password} ${registryServer}
docker build -t ${imageName}:${data} .
docker push ${imageName}:${data}
docker rmi ${imageName}:${data}
"""
}
}
}
}
stage("Trigger File"){
steps {
script{
sh """
echo IMAGE=${imageName}:${data} >trigger.properties
echo ACTION=DEPLOY >> trigger.properties
cat trigger.properties
"""
archiveArtifacts allowEmptyArchive: true, artifacts: 'trigger.properties', followSymlinks: false
}
}
}
}
}
Reworking the spinnaker-nginx-demo pipeline
As before, I use my spinnaker-nginx-demo example for verification (see the earlier Jenkins configuration for spinnaker-nginx-demo). The modified pipeline is as follows:
//Docker image registry info
registryServer = "harbor.xxxx.com"
projectName = "${JOB_NAME}".split('-')[0]
repoName = "${JOB_NAME}"
imageName = "${registryServer}/${projectName}/${repoName}"
data = new Date().format("yyyyMMddHHmm")  // assumed build-timestamp tag, see the note in the first pipeline
//pipeline
pipeline{
agent { node { label "build01"}}
//configure the build trigger
triggers {
GenericTrigger( causeString: 'Generic Cause',
genericVariables: [[defaultValue: '', key: 'branchName', regexpFilter: '', value: '$.ref']],
printContributedVariables: true,
printPostContent: true,
regexpFilterExpression: '',
regexpFilterText: '',
silentResponse: true,
token: 'spinnaker-nginx-demo')
}
stages{
stage("CheckOut"){
steps{
script{
srcUrl = "https://gitlab.xxxx.com/zhangpeng/spinnaker-nginx-demo.git"
branchName = branchName - "refs/heads/"
currentBuild.description = "Trigger by ${branchName}"
println("${branchName}")
checkout([$class: 'GitSCM',
branches: [[name: "${branchName}"]],
doGenerateSubmoduleConfigurations: false,
extensions: [],
submoduleCfg: [],
userRemoteConfigs: [[credentialsId: 'gitlab-admin-user',
url: "${srcUrl}"]]])
}
}
}
stage("Push Image "){
steps{
script{
withCredentials([usernamePassword(credentialsId: 'harbor-admin-user', passwordVariable: 'password', usernameVariable: 'username')]) {
sh """
sed -i -- "s/VER/${branchName}/g" app/index.html
docker login -u ${username} -p ${password} ${registryServer}
docker build -t ${imageName}:${data} .
docker push ${imageName}:${data}
docker rmi ${imageName}:${data}
"""
}
}
}
}
stage("scan Image "){
steps{
script{
withCredentials([usernamePassword(credentialsId: 'harbor-admin-user', passwordVariable: 'password', usernameVariable: 'username')]) {
harborAPI = "https://harbor.xxxx.com/api/v2.0/projects/${projectName}/repositories/${repoName}"
apiURL = "artifacts/${data}/scan"
sh """ curl -X POST "${harborAPI}/${apiURL}" -H "accept: application/json" -u ${username}:${password} """
}
}
}
}
stage("Trigger File"){
steps {
script{
sh """
echo IMAGE=${imageName}:${data} >trigger.properties
echo ACTION=DEPLOY >> trigger.properties
cat trigger.properties
"""
archiveArtifacts allowEmptyArchive: true, artifacts: 'trigger.properties', followSymlinks: false
}
}
}
}
}
I took Yangming's image-cleanup pipeline script, modified it, and added a scan Image stage! It all follows the Harbor API docs; for more detail, refer to Harbor's official API reference.
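If a severity summary is enough (it is not the full report I wanted inside Jenkins), the artifact API can reportedly return a scan overview as well; a hedged sketch, assuming the with_scan_overview query parameter from the Harbor 2.x API docs:

curl -s -H "accept: application/json" -u "<harbor-user>:<harbor-password>" \
  "https://harbor.xxxx.com/api/v2.0/projects/spinnaker/repositories/spinnaker-nginx-demo/artifacts/202111192008?with_scan_overview=true"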
Triggering the Jenkins build
The spinnaker-nginx-demo pipeline is triggered from GitLab; updating any file on the master branch of the GitLab repository kicks off a Jenkins build:
Log in to the Harbor registry to verify:
OK, verified. If you have other needs, consult the Harbor API docs (provided Harbor actually supports the feature...). Not being able to surface the scan report in Jenkins is what made me give up on Harbor's built-in Trivy. Admittedly, it may simply be that I am not familiar with Trivy; I never read the Trivy docs in depth, I only looked at Harbor's API...
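For the record, Trivy can also be run as a standalone CLI on the build node, and that does print a report straight into the Jenkins console log. A minimal sketch, assuming the trivy binary is installed on the agent and that the node is already logged in to Harbor (as the pipeline above does with docker login); the tag is just an example from this post:

# fail the step when HIGH or CRITICAL vulnerabilities are found
trivy image --severity HIGH,CRITICAL --exit-code 1 harbor.xxxx.com/spinnaker/spinnaker-nginx-demo:202111192008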
anchore-engine
Installing anchore-engine with Helm
I ran into anchore-engine by accident while searching for keywords like jenkins scan image: https://cloud.tencent.com/developer/article/1666535 is a very good article. Then I looked at the official site, which provides a Helm installation method: https://engine.anchore.io/docs/install/helm/. Let's install it and give it a try.
[root@k8s-master-01 anchore-engine]# helm repo add anchore https://charts.anchore.io
[root@k8s-master-01 anchore-engine]# helm repo list
Note: ha, full disclosure, I had been through this once before, so the Helm repo was already added and I had also installed version 1.14.6. But I never got it integrated with Jenkins, so I wanted to try the latest version... Reality seemed to beat me again; maybe the Jenkins plugin is too old? (Working through it step by step below, it turned out I just had not dug deep enough; it does work.) A good excuse to review the helm commands along the way!
[root@k8s-master-01 anchore-engine]# helm search repo anchore/anchore-engine
[root@k8s-master-01 anchore-engine]# helm repo update
[root@k8s-master-01 anchore-engine]# helm search repo anchore/anchore-engine
[root@k8s-master-01 anchore-engine]# helm fetch anchore/anchore-engine
[root@k8s-master-01 anchore-engine]# ls
[root@k8s-master-01 anchore-engine]# tar zxvf anchore-engine-1.15.1.tgz
[root@k8s-master-01 anchore-engine]# cd anchore-engine
vim values.yaml
I only adjusted the storage size and set the password and email!
helm install anchore-engine -f values.yaml . -n anchore-engine
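For reference, the relevant keys can be located in the chart's default values before editing; the grep patterns and the commented --set key names below are assumptions that depend on the chart version, so verify them against your own values.yaml:

helm show values anchore/anchore-engine > default-values.yaml
grep -nE 'defaultAdminPassword|defaultAdminEmail|size' default-values.yaml
# an equivalent --set style install (assumed key names, double-check them first):
# helm install anchore-engine anchore/anchore-engine -n anchore-engine \
#   --set anchoreGlobal.defaultAdminPassword='xxxx' \
#   --set anchoreGlobal.defaultAdminEmail='admin@xxxx.com' \
#   --set postgresql.persistence.size=50Gi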
Then I hit something annoying... why does everything assume the Kubernetes cluster domain is cluster.local by default? I went through the configuration files and could not find where to change it...
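A quick way to double-check what domain the cluster actually uses (mine is not cluster.local) is to look at the kubelet configuration and the CoreDNS Corefile; note the kubelet ConfigMap name varies by kubeadm version:

kubectl get cm -n kube-system -o name | grep kubelet-config
kubectl get cm -n kube-system kubelet-config -o yaml | grep -i clusterDomain
kubectl get cm -n kube-system coredns -o jsonpath='{.data.Corefile}' | grep -i kubernetes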
Jenkins configuration
First, install the plugin in Jenkins (the Anchore Container Image Scanner plugin).
Configuration:
Manage Jenkins -> Configure System:
Build pipeline:
Since this is just a test, I first ran the demo from "Using Anchore Engine to complete the DevSecOps toolchain" (changing the build node, the GitHub repository, and the Docker Hub credentials):
Create a new Jenkins pipeline job named anchore-enchore:
pipeline {
agent { node { label "build01"}}
environment {
registry = "duiniwukenaihe/spinnaker-cd" // registry/repository the image is pushed to; change to match your environment
registryCredential = 'duiniwukenaihe' // Jenkins credential used to log in to the registry; change to match your environment
}
stages {
//Jenkins checks the code out of the repository
stage('Cloning Git') {
steps {
git 'https://github.com.cnpmjs.org/duiniwukenaihe/docker-dvwa.git'
}
}
//build the image
stage('Build Image') {
steps {
script {
app = docker.build(registry+ ":$BUILD_NUMBER")
}
}
}
//push the image to the registry
stage('Push Image') {
steps {
script {
docker.withRegistry('', registryCredential ) {
app.push()
}
}
}
}
//image scan
stage('Container Security Scan') {
steps {
sh 'echo "'+registry+':$BUILD_NUMBER `pwd`/Dockerfile" > anchore_images'
anchore engineRetries: "240", name: 'anchore_images'
}
}
stage('Cleanup') {
steps {
sh script: "docker rmi " + registry+ ":$BUILD_NUMBER"
}
}
}
}
Note: github.com is changed to github.com.cnpmjs.org purely for acceleration... behind the wall the code simply cannot be pulled directly.
Running the pipeline job
Anyway, I ran it several times and every attempt ended in failure... Without digging too deep at first, I slowly peeled things apart to locate the problem...
Installing anchore-engine with docker-compose
Following the tutorial "Using Anchore Engine to complete the DevSecOps toolchain", I set up a docker-compose deployment:
Note: my cluster's default CRI is containerd; the k8s-node-06 node runs Docker as its runtime and does not take part in scheduling, so anchore-engine will be installed on that server. Internal IP: 10.0.4.18.
Prerequisite: install docker-compose:
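A minimal sketch of installing docker-compose on the node (1.29.2 is just an example release; pick whatever version you need):

curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
docker-compose --version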
docker-compose up -d
I used the default yaml file as-is with no extra changes; at this early stage it is only a test.
# curl https://docs.anchore.com/current/docs/engine/quickstart/docker-compose.yaml > docker-compose.yaml
# docker-compose up -d
# This is a docker-compose file for development purposes. It references unstable developer builds from the HEAD of master branch in https://github.com/anchore/anchore-engine
# For a compose file intended for use with a released version, see https://engine.anchore.io/docs/quickstart/
#
---
version: '2.1'
volumes:
  anchore-db-volume:
    # Set this to 'true' to use an external volume. In which case, it must be created manually with "docker volume create anchore-db-volume"
    external: false
services:
  # The primary API endpoint service
  api:
    image: anchore/anchore-engine:v1.0.0
    depends_on:
      - db
      - catalog
    ports:
      - "8228:8228"
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=api
      - ANCHORE_ADMIN_PASSWORD=foobar
      - ANCHORE_DB_HOST=db
      - ANCHORE_DB_PASSWORD=mysecretpassword
    command: ["anchore-manager", "service", "start", "apiext"]
  # Catalog is the primary persistence and state manager of the system
  catalog:
    image: anchore/anchore-engine:v1.0.0
    depends_on:
      - db
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    expose:
      - 8228
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=catalog
      - ANCHORE_ADMIN_PASSWORD=foobar
      - ANCHORE_DB_HOST=db
      - ANCHORE_DB_PASSWORD=mysecretpassword
    command: ["anchore-manager", "service", "start", "catalog"]
  queue:
    image: anchore/anchore-engine:v1.0.0
    depends_on:
      - db
      - catalog
    expose:
      - 8228
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=queue
      - ANCHORE_ADMIN_PASSWORD=foobar
      - ANCHORE_DB_HOST=db
      - ANCHORE_DB_PASSWORD=mysecretpassword
    command: ["anchore-manager", "service", "start", "simplequeue"]
  policy-engine:
    image: anchore/anchore-engine:v1.0.0
    depends_on:
      - db
      - catalog
    expose:
      - 8228
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=policy-engine
      - ANCHORE_ADMIN_PASSWORD=foobar
      - ANCHORE_DB_HOST=db
      - ANCHORE_DB_PASSWORD=mysecretpassword
      - ANCHORE_VULNERABILITIES_PROVIDER=grype
    command: ["anchore-manager", "service", "start", "policy_engine"]
  analyzer:
    image: anchore/anchore-engine:v1.0.0
    depends_on:
      - db
      - catalog
    expose:
      - 8228
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=analyzer
      - ANCHORE_ADMIN_PASSWORD=foobar
      - ANCHORE_DB_HOST=db
      - ANCHORE_DB_PASSWORD=mysecretpassword
    volumes:
      - /analysis_scratch
    command: ["anchore-manager", "service", "start", "analyzer"]
  db:
    image: "postgres:9"
    volumes:
      - anchore-db-volume:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=mysecretpassword
    expose:
      - 5432
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
  # # Uncomment this section to add a prometheus instance to gather metrics. This is mostly for quickstart to demonstrate prometheus metrics exported
  # prometheus:
  #   image: docker.io/prom/prometheus:latest
  #   depends_on:
  #     - api
  #   volumes:
  #     - ./anchore-prometheus.yml:/etc/prometheus/prometheus.yml:z
  #   logging:
  #     driver: "json-file"
  #     options:
  #       max-size: 100m
  #   ports:
  #     - "9090:9090"
  #
  # # Uncomment this section to run a swagger UI service, for inspecting and interacting with the anchore engine API via a browser (http://localhost:8080 by default, change if needed in both sections below)
  # swagger-ui-nginx:
  #   image: docker.io/nginx:latest
  #   depends_on:
  #     - api
  #     - swagger-ui
  #   ports:
  #     - "8080:8080"
  #   volumes:
  #     - ./anchore-swaggerui-nginx.conf:/etc/nginx/nginx.conf:z
  #   logging:
  #     driver: "json-file"
  #     options:
  #       max-size: 100m
  # swagger-ui:
  #   image: docker.io/swaggerapi/swagger-ui
  #   environment:
  #     - URL=http://localhost:8080/v1/swagger.json
  #   logging:
  #     driver: "json-file"
  #     options:
  #       max-size: 100m
  #
[root@k8s-node-06 anchore]# docker-compose ps
Name Command State Ports
------------------------------------------------------------------------------------------------
anchore_analyzer_1 /docker-entrypoint.sh anch ... Up (healthy) 8228/tcp
anchore_api_1 /docker-entrypoint.sh anch ... Up (healthy) 0.0.0.0:8228->8228/tcp
anchore_catalog_1 /docker-entrypoint.sh anch ... Up (healthy) 8228/tcp
anchore_db_1 docker-entrypoint.sh postgres Up (healthy) 5432/tcp
anchore_policy-engine_1 /docker-entrypoint.sh anch ... Up (healthy) 8228/tcp
anchore_queue_1 /docker-entrypoint.sh anch ... Up (healthy) 8228/tcp
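With the containers up, a quick sanity check can be run through the anchore-cli client that ships inside the engine image (admin/foobar matches the passwords in the compose file above):

docker-compose exec api anchore-cli --url http://localhost:8228/v1 --u admin --p foobar system status
docker-compose exec api anchore-cli --url http://localhost:8228/v1 --u admin --p foobar system feeds list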
Updating the Jenkins configuration
Pipeline Test:
A wrong guess:
My initial guess for why the Helm deployment did not work: is it because my runtime is containerd rather than Docker? Or is the server-side version too new?
The problem that followed:
How do I scan images in a private registry?
And then the next problem arrived: the anchore-enchore pipeline assumes Docker Hub as the image registry, while mine is a private Harbor registry, so the spinnaker-nginx-demo application pipeline would not even run after I added the scan stage...
//Docker image registry info
registryServer = "harbor.xxxx.com"
projectName = "${JOB_NAME}".split('-')[0]
repoName = "${JOB_NAME}"
imageName = "${registryServer}/${projectName}/${repoName}"
data = new Date().format("yyyyMMddHHmm")  // assumed build-timestamp tag, see the note in the first pipeline
//pipeline
pipeline{
agent { node { label "build01"}}
//configure the build trigger
triggers {
GenericTrigger( causeString: 'Generic Cause',
genericVariables: [[defaultValue: '', key: 'branchName', regexpFilter: '', value: '$.ref']],
printContributedVariables: true,
printPostContent: true,
regexpFilterExpression: '',
regexpFilterText: '',
silentResponse: true,
token: 'spinnaker-nginx-demo')
}
stages{
stage("CheckOut"){
steps{
script{
srcUrl = "https://gitlab.xxxx.com/zhangpeng/spinnaker-nginx-demo.git"
branchName = branchName - "refs/heads/"
currentBuild.description = "Trigger by ${branchName}"
println("${branchName}")
checkout([$class: 'GitSCM',
branches: [[name: "${branchName}"]],
doGenerateSubmoduleConfigurations: false,
extensions: [],
submoduleCfg: [],
userRemoteConfigs: [[credentialsId: 'gitlab-admin-user',
url: "${srcUrl}"]]])
}
}
}
stage("Push Image "){
steps{
script{
withCredentials([usernamePassword(credentialsId: 'harbor-admin-user', passwordVariable: 'password', usernameVariable: 'username')]) {
sh """
sed -i -- "s/VER/${branchName}/g" app/index.html
docker login -u ${username} -p ${password} ${registryServer}
docker build -t ${imageName}:${data} .
docker push ${imageName}:${data}
docker rmi ${imageName}:${data}
"""
}
}
}
}
stage('Container Security Scan') {
steps {
script{
sh """
echo "Starting image scan"
echo "${imageName}:${data} ${WORKSPACE}/Dockerfile" > anchore_images
"""
anchore engineRetries: "360",forceAnalyze: true, name: 'anchore_images'
}
}
}
stage("Trigger File"){
steps {
script{
sh """
echo IMAGE=${imageName}:${data} >trigger.properties
echo ACTION=DEPLOY >> trigger.properties
cat trigger.properties
"""
archiveArtifacts allowEmptyArchive: true, artifacts: 'trigger.properties', followSymlinks: false
}
}
}
}
}
Inspiration found in a GitHub issue:
What was going on? Perseverance pays off: after going through the issues in the anchore GitHub repository, https://github.com/anchore/anchore-engine/issues/438, I found the solution...
Adding the private registry configuration
[root@k8s-node-06 anchore]# docker exec -it d21c8ed1064d bash
[anchore@d21c8ed1064d anchore-engine]$ anchore-cli registry add harbor.xxxx.com zhangpeng xxxxxx
[anchore@d21c8ed1064d anchore-engine]$ anchore-cli --url http://10.0.4.18:8228/v1/ --u admin --p foobar --debug image add harbor.layame.com/spinnaker/spinnaker-nginx-demo:202111192008
Hmm, I added my Harbor registry, re-ran my Jenkins job, and it looks like my pipelines can all produce a report now.
Log in to the anchore_api_1 container to verify:
[anchore@d21c8ed1064d anchore-engine]$ anchore-cli image list
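Two more anchore-cli subcommands are handy at this point: waiting for the analysis to finish and dumping the vulnerability list for the image that was just added (same URL and tag as above):

anchore-cli --url http://10.0.4.18:8228/v1/ --u admin --p foobar image wait harbor.layame.com/spinnaker/spinnaker-nginx-demo:202111192008
anchore-cli --url http://10.0.4.18:8228/v1/ --u admin --p foobar image vuln harbor.layame.com/spinnaker/spinnaker-nginx-demo:202111192008 all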
Fixing the Helm-deployed anchore-engine as well
By the same logic, I now suspect my Helm-deployed anchore-engine hit the exact same error (so my earlier guesses were off the mark). Let's make the same change and try!
[root@k8s-master-01 anchore-engine]# kubectl get pods -n anchore-engine
NAME READY STATUS RESTARTS AGE
anchore-engine-anchore-engine-analyzer-fcf9ffcc8-dv955 1/1 Running 0 10h
anchore-engine-anchore-engine-api-7f98dc568-j6tsz 1/1 Running 0 10h
anchore-engine-anchore-engine-catalog-754b996b75-q5hqg 1/1 Running 0 10h
anchore-engine-anchore-engine-policy-745b6778f7-hbsvx 1/1 Running 0 10h
anchore-engine-anchore-engine-simplequeue-695df4498-wgss4 1/1 Running 0 10h
anchore-engine-postgresql-9cdbb5f7f-4dcnk 1/1 Running 0 10h
[root@k8s-master-01 anchore-engine]# kubectl get svc -n anchore-engine
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
anchore-engine-anchore-engine-api ClusterIP 172.19.255.231 <none> 8228/TCP 10h
anchore-engine-anchore-engine-catalog ClusterIP 172.19.254.163 <none> 8082/TCP 10h
anchore-engine-anchore-engine-policy ClusterIP 172.19.254.91 <none> 8087/TCP 10h
anchore-engine-anchore-engine-simplequeue ClusterIP 172.19.253.141 <none> 8083/TCP 10h
anchore-engine-postgresql ClusterIP 172.19.252.126 <none> 5432/TCP 10h
[root@k8s-master-01 anchore-engine]# kubectl run -i --tty anchore-cli --restart=Always --image anchore/engine-cli --env ANCHORE_CLI_USER=admin --env ANCHORE_CLI_PASS=xxxxxx --env ANCHORE_CLI_URL=http://172.19.255.231:8228/v1
[anchore@anchore-cli anchore-cli]$ anchore-cli registry add harbor.xxxx.com zhangpeng xxxxxxxx
[anchore@anchore-cli anchore-cli]$ anchore-cli --url http://172.19.255.231:8228/v1/ --u admin --p xxxx --debug image add harbor.xxxx.com/spinnaker/spinnaker-nginx-demo:202111192008
Looks like that worked too! So much for my runtime-or-version hypothesis.
I changed the Jenkins configuration again to point at the API address of the Helm-deployed anchore-engine. Because of the cluster.local issue I dislike so much, I used the in-cluster Service ClusterIP address directly:
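The ClusterIP to plug into the Anchore plugin's engine URL can be read straight off the Service shown earlier:

kubectl get svc -n anchore-engine anchore-engine-anchore-engine-api -o jsonpath='{.spec.clusterIP}'
# -> 172.19.255.231, so the engine URL becomes http://172.19.255.231:8228/v1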
Run the Jenkins spinnaker-nginx-demo pipeline job
As before, I modified a file in GitLab to trigger the pipeline job. Sadly, the high-severity vulnerability check ended in FAIL, hahaha, but at least the pipeline finally ran all the way through:
Comparing Trivy and anchore-engine
I used the spinnaker-nginx-demo build #107 artifact image for the comparison, tagged harbor.xxxx.com/spinnaker/spinnaker-nginx-demo:202111201116:
The anchore-engine report:
What the heck, why does Harbor's Trivy scan show no vulnerabilities?
And just like that I fell into the obsessive loop of having to fix every vulnerability...
To sum up:
- Harbor's built-in image scanner is Trivy; Clair can also be chosen, and it can apparently be hooked up to anchore-engine as well
- anchore-engine needs the private registry added with registry add; for the Helm install, remember the assembled addresses use cluster.local, so adjust them if your cluster uses a custom domain
- anchore-engine's scanning is stricter than Trivy's
- Make good use of --help: anchore-cli --help