HPE Takes Two Roads Into the Hyper-converged Market

For a brief history of the HP hyper-converged product family, with commentary, this post reprints "HC 250? HC 380? What are they actually for?".

Repost: "HC 250? HC 380? What are they actually for?" http://www.theregister.co.uk/2016/12/05/the_state_of_hpes_hyperconverged_play/

The progression of the HP/HPE hyper-converged product lines is shown in the figure below.

+Comment

The positioning of these two hyper-converged systems is confusing. We observe that the physically larger and vSphere-only HC 380, with its brand number larger than the HC 250, is for use by operators requiring operational simplicity, whereas the physically smaller and denser HC 250 is for more complicated operations in both vSphere and Hyper-V environments, yet it covers ROBO needs where customers don’t have skilled IT staff.

Getting some ProLiant DL380 brand goodness in the HC 380 name, Apollo supercomputer brand messages in the HC 250 name, and not yet providing HCOE v2 to the HC 250, has provided confusing brand positioning.

It also seems that both systems should support Hyper-V and HCOE v2. That would make customers’ lives simpler. The HC 380 should have its scale-out limit go past 16 nodes. Thirty-two would be good and then it would match the non-standard and non-recommended 32-node limit of the HC 250. ®

Nutanix Exec: Cisco, HPE, VMware Can’t Provide AWS-Like Experience For Hyper-Converged Market

http://www.crn.com/news/data-center/300083025/nutanix-exec-cisco-hpe-vmware-cant-provide-aws-like-experience-for-hyper-converged-market.htm

One of Nutanix’s top executives said Cisco, Hewlett Packard Enterprise and VMware cannot provide a best-in-class experience for their hyper-converged products since they don’t own the entire software stack.

“Everybody and their dog is in this space right now,” said Sunil Potti, Nutanix’s chief product and development officer. “But unless you are owning most of the stack, you can’t provide that full experience with a single click.”

Potti told attendees of the Raymond James Technology Investors Conference on Monday that up until 18 months ago, much of the market felt that hyper-converged infrastructure – which combines compute, storage, networking and virtualization on server hardware - could be a good product, but questioned whether it would support all use cases.

But as end users became more adamant about having hyper-converged architectures deployed for their new workloads, Potti said the space has gone from being the exclusive domain of smaller IT vendors to attracting some of the biggest names in IT.

And while Potti said the big guys bring a lot of go-to-market muscle to the table, some of them don’t seem to grasp the importance of providing totally seamless functionality.

“Some of them are very formidable, but some of them are also sleeping giants because they are growing old,” Potti said at the conference at the Westin New York Grand Central. “You can’t just take a Nokia phone, slap on Windows and say ‘I won the smartphone war.’ There’s a reason why those things didn’t work.”

Potti pointed to the example of Cisco’s hyper-converged offering, which he said competes with VMware’s products, but is also dependent on VMware to properly function.

“None of them have their own virtualization experience,” Potti said. “When you go to Amazon, you don’t go buy VMWare for Amazon. It’s built in.”

A Cisco spokesman said its customers prefer having a hyperconverged product that integrates with existing converged infrastructure and traditional storage rather than having to create another infrastructure silo. More than 600 organizations have adopted Cisco’s HyperFlex since it was launched a few months ago, the spokesman said.

Paul Miller, VP of marketing for HPE's converged data center infrastructure group, said HPE's new hyperconverged offering is as easy to use as public cloud, allowing users to adjust VMs from their cell phones with just a few clicks. And unlike the hyperconverged-only vendors, Miller said HPE's hyperconverged offering won't become an IT island, since it can be managed across a large footprint of infrastructure.

VMware did not respond to a request for comment.

Potti said Nutanix, in contrast, evolved beyond simply being a storage technology company four years ago when it started building its own hypervisor and virtualization tools. As a result, Potti said, Nutanix has completely re-imagined how the software stack is built from the ground up.

“Nutanix is a platform player,” Potti said. “It’s not a product player … If you don’t provide the full stack, you can’t provide that Amazon-like experience.”

Nutanix’s prime objective is providing an AWS-like experience in the data center, Potti said, and the company isn’t trying to replicate the hyper-converged offerings Cisco and VMware are bringing to market.

The rising adoption of AWS and other public cloud services over the past five or six years has been tremendously helpful for Nutanix since many end users have already worked through departmental feuds and have grown accustomed to consuming a one-click service across the entire company, Potti said.

“It has emotionally built a cognitive bias toward an architecture like this,” Potti said. “We are looking to provide an Amazon-like experience for the global enterprise.”

Nutanix CE All In One

Nutanix CE is short for Nutanix Community Edition. It is a feature-reduced subset of the Nutanix enterprise product and a very convenient way to experience and test Nutanix technology.

An Introduction to Nutanix Community Edition

Nutanix CE

The product currently lives at https://www.nutanix.com/products/community-edition/; that page has not yet been localized into Chinese, so here is a brief introduction.

  • Feature-rich software
  • Broad hardware support, and available on demand online
  • Zero cost

Three steps to trying out hyper-converged technology with Nutanix CE:

  1. Register: join the Nutanix community and download the installation image
  2. Deploy: install on your own server, or spin it up online on Ravello; the official installation video is here
  3. Play: once installed, have fun; for questions, head over to the Community Edition forum

A few notes on installing and experimenting on physical machines:

  • Physical installs support 1-, 3-, or 4-node deployments. At least 32 GB of RAM is recommended: since the self-service portal was added, the CVM in release CE 2016.12.22 requires 24 GB of memory. SSDs are recommended, ideally mixed with a few ordinary hard disks.
  • The first boot after installation needs Internet access, or the CVM will fail to start; once the first boot succeeds, the connection is no longer required
  • When installing inside a VM, mind both the host's memory and the memory allocated to the VM; scripts that relax the memory and CPU checks can be found online

The product's documentation page in the community: here

Reference Configuration

One current-generation Intel NUC: an i7 CPU, two 16 GB DIMMs, and two 512 GB drives. Its advantage is portability; memory is still limited, however, so it cannot run many VMs.

Related Documents

A Trip to San Francisco on the US West Coast

The destination of this trip was San Jose, in the heart of Silicon Valley. After a week of work, I spent the weekend doing some light sightseeing in San Francisco.

Getting Around

With Hainan Airlines' direct flight, getting from Beijing to Silicon Valley (San Jose) is quite convenient, and off-season fares are good too. If your destination is near the city of San Jose, this flight is without doubt the best choice.

IMG_2754

I left Beijing on a Sunday; after more than 11 hours in the air, the plane landed in San Jose on the morning of the same day. The airport was so close to my hotel that, for the first time in my life, I walked from the airport to check-in.

I stayed in San Jose for about five days. The office and the hotel were very close together, and the food at the office was good enough to make me forget about hunting for restaurants; this was the first business trip on which I cared so little about meals.

One thing worth mentioning: chatting with a Chinese colleague one day, I learned there was an outlet mall near his home, and asked on the spot for a ride when he drove home from work. The next day I climbed into his BMW as agreed and experienced a Silicon Valley commute. Unexpectedly, before 5 p.m. the road from San Jose toward the East Bay was already badly jammed, and a drive that normally takes 40 minutes took almost two hours. Fortunately, my half year or so of startup experience came in very handy and gave me plenty to talk about, which this Silicon Valley veteran also found quite interesting.

Taking Uber instead of renting a car was the smartest choice of this trip. With UberPool, if someone shares your route, the fare is very cheap. All my necessary rides were handled with Uber. Only on the last day, heading home, did things go wrong: an Uber update the night before forced a password login, and I had not brought my backup phone, so in the end I asked the host to call me a regular Yellow Cab.

IMG_2837

The Uber ride from San Jose to downtown San Francisco let me experience Silicon Valley traffic once again on a Friday afternoon; it took a good three hours. Since then I have stopped thinking Beijing is the only city in the world with terrible congestion.

IMG_3106

While in San Francisco I got around on the electric buses, which are cheap and convenient, though I also partly relied on the red tour bus; at $45 for two days it was not exactly cheap. I bought the ticket mainly because, after trekking all the way to the north end of the Golden Gate Bridge, I felt I simply could not walk back, and there was no convenient public transport nearby, so I hopped on. It did add a lot to the first day's city tour, quickly giving me a rough feel for all of San Francisco, which was a great help in planning the second day.

IMG_3099

The photo above is San Francisco's famous cable car. It is really just a streetcar, but because the nostalgic carriages were kept, it has become a must-do for tourists. Had I not bought the city-tour ticket, I would certainly have tried it too.

The Company

Nutanix is the youngest company I have joined in over a decade of working. It went public in September 2016 and is only six or seven years old, and the distinctive market space of hyper-converged architecture can fairly be said to be its creation.

IMG_2757

The company is still housed in a very ordinary office building; with headcount growing quickly, it has also moved into several floors of the building to the left. The culture retains a strong startup feel, which is notable because most of the middle and senior management are now professional managers from VMware, EMC, NetApp, Dell, HP and other large companies, and execution is thoroughly in the hands of these professionals. Over several days of training, everyone seemed full of drive, with a sense of the market and the opportunity unlike any company we had worked at before. Among the colleagues on this trip, some already had one or two years of Nutanix product experience and some had joined about half a year earlier; by comparison, I was in my second week, so during the training I mostly had no choice but to keep quiet.

IMG_2790 Two big refrigerators; to the left there are actually another fridge and a freezer, all stocked with drinks and food. IMG_2791 The small shelves hold snacks and fruit. Someone looks after this room and restocks anything that runs low. The training covered two meals a day, and with these snacks and supplies, skipping dinner was perfectly fine. At quitting time you could see people matter-of-factly loading things from the shelves and fridges into their backpacks; that I mostly skipped going out for dinner on the last two days had a lot to do with quickly learning this habit. The other reason was jet lag, the worst I have ever experienced: almost every day I was nearly fainting with drowsiness by around 6 p.m., and reliably wide awake by about 3 a.m. The food certainly cannot compare with the front-rank Internet companies, but it surpasses every traditional company I had worked at. And that is where I will stop on work-related matters.

Sightseeing

I stayed in a very quiet private home in the Mission District, a room rented through Airbnb. The exterior is shown below.

IMG_3234

The house was very tidy, spacious, and cozy. A nearby bus reaches North Beach, where Fisherman's Wharf is, in 30 minutes. My route on the first day was roughly: walk from Fisherman's Wharf along the waterfront to the foot of the Golden Gate Bridge, go up, walk across to the north end, take the red double-decker City Tour bus back into town, cut through part of Golden Gate Park, and return to Union Square. Afterwards I walked to San Francisco's Chinatown for dinner. After a full day of trekking, my impression of the city was actually very good. A few highlights:

  • Plenty of distinctive sights; no need to list them one by one, as they are all household names
  • The terrain rises and falls, and the buildings each have their own character, well arranged, with nothing cookie-cutter about them
  • The city is rich in color, especially those marvelous wall murals

Golden Gate Bridge

A few casual snapshots below.

IMG_2992

IMG_2979

IMG_2939

Quite a few tourists come specially to walk the bridge. For a trekking type like me, the distance was no real effort. Thankfully the heavens took pity: in a gloomy, cold winter, after a week that had left me despairing of the weather, my spirits were revived. For the remaining days, it was bright sun and cloudless skies every day.

Other Sights

Fisherman's Wharf was smaller than I had imagined, and since it was not the fishing season, and a cold winter besides, it really felt like little more than ticking a box; I saw nothing out of the ordinary. IMG_3116

The wall murals in Chinatown do not look like casual doodles, and murals of this quality can be found all over San Francisco.

IMG_3085

Golden Gate Park is a must-visit for a running enthusiast like me. Not being in my best shape this time, I did not run in the park, but trekked it from the east gate to the west gate. The park is about the size of New York's Central Park, roughly 5 km from east to west.

The park is divided into many different areas; it being the weekend, dog walkers, model-aircraft flyers, runners, frisbee players and other hobbyists were each whiling away the weekend in their own patch.

IMG_3200

Reaching the ocean on San Francisco's west side, the spot facing the sea is not far from the famous Cliff House restaurant; I could clearly see the restaurant's building.

IMG_3211

Stanford

Famous universities hold a deep attraction for me; consider how important and how close the ties are between this university and the Silicon Valley IT industry.

IMG_2819

The photo above is the famous church on campus, a beautiful and classical building and the heart of the campus. Visitors should enter from the front on the south side, walk straight to the church, and then see the other parts.

Food

Sourdough bread counts as one of the culinary attractions of Fisherman's Wharf, and is excellent value. IMG_3125

The bread comes from the shop below. IMG_3174

For the last proper meal, I chose a North Beach restaurant with a sea view called Dinner, mainly for the view; the food was fish and chips. IMG_3233

The finale was this breakfast, pointedly named "8 AM", ranked fifth in San Francisco's breakfast category on the review site. Toast, three ways. IMG_2847

The term DevOps first gripped me at last year's Red Hat global user conference, although at that event the concept of Docker containers was even hotter than DevOps; the Docker/OpenShift sessions were routinely packed. From that point on, I gradually came to feel the power and the pull of open-source container technology.

Ever since Red Hat, the OpenShift exam has been a piece of unfinished business left over after I completed the RHCA (Red Hat Certified Architect); it remains undone to this day, but it is an itch I will scratch sooner or later. The main reason is that OpenShift is the combination of Docker + Kubernetes, which is today's mainstream technical route for enterprise PaaS container platforms. In any case, leaving Red Hat was so hasty that, frankly, it is no small regret of my career. At the time I did feel that the kubernetes command line was not very convenient, and OpenShift did not lower that bar; that is, OpenShift still demands a fair amount of work and skill in writing Kubernetes YAML files. On this point, even after mastering Rancher, I found that writing compose files is equally hard to avoid. Pushing further, most Docker PaaS platforms are like this: many products simply offer a text box in the UI where you paste in the contents of the container service definition file.

Over the past six months, all my technical research has centered on Docker and its service orchestration. I have held technical exchanges and PoC tests with many users, and some deals have closed. Summing it up, a few findings struck me. Chinese enterprises, regardless of size or industry, genuinely welcome home-grown startups, because those companies offer domestic software and technical services. In the red-hot Docker space there are already more than 20 domestic startups, and I suspect every one of them has felt this benefit. Foreign software, by contrast, typically comes across as: not domestic (do not underestimate Chinese companies' appetite for domestic software), English-only interfaces and documentation, a possible poor fit for local conditions, steep license and service fees, and, if the technology is very new, a vendor that may lack sufficient technical strength and service capacity.

After working through a number of Docker container projects, it is safe to say that the container market's popularity is directly tied to the technology's advantages. Containerized applications can be deployed and updated quickly through orchestration tools, and can scale and consume resources elastically, remedying several weaknesses of traditional application operations. Containers' lightweight, just-enough isolation makes resource pools simpler to manage and sharply raises utilization, a real improvement for development teams' environment management that makes the CI process more efficient and economical. Docker's support for microservices also deeply tempts every developer: when implementing microservices, the technology most developers think of is containers.

These advantages and characteristics of containers have made such domestic projects even more likely to land, and many have moved far faster than expected. In my many years of experience, it is perfectly normal for a customer to deliberate for six months to a year or more over a software-technology project, yet among the projects I have observed, closing within two to four months is commonplace, probably also because domestic enterprises embrace local software companies and chase fashionable technology. Some pilot DevOps consulting projects have landed quickly as well.

By different routes, all these projects point to the same keyword: DevOps. That compelled me, starting last year, to follow and study this set of best practices. Naturally, I am very optimistic about DevOps's future, so when I heard that a related certification exam had appeared in the industry, I signed up without hesitation. After more than two months of careful preparation, I was fortunate enough to pass on the first attempt, earning two certificates.

exin-devops-master-cert-martin-liu DevOps Master

DevOps Master certified freelance trainer

I attended the train-the-trainer (TTT) certification course, and I am delighted to be among EXIN's first five certified people in China. While preparing for the exam I studied several books; two of them I am still reading in depth.

14678659256890

I have finished my reading notes for this black-cover book. Regrettably, I discovered that the latest edition has switched to a white cover, so I can no longer call it the little black book. I have read it at least twice, and I am currently debugging the code in it, which is highly nutritious; the plan is to get all the code running as soon as possible, and thereby keep my oft-repeated promise of an online session sharing this book.

14749866044733

This is the book I call the CD red book. It was published back in 2010, years before Docker. What impressed me most is that on almost every page the author seems to be laying out the principles and rules for doing this work, and I am not exaggerating: his introduction to Continuous Delivery is delivered through a series of lessons drawn from real projects. For experience of that calibre, and a way of writing books of that calibre, one word will do: respect. The book is dense, and I still have not digested it all. It showed me the ultimate answer to release and change risk: there is no deploy/configure/release tool that solves the problem in one shot, only a continuous deployment pipeline drilled and polished thousands of times; I have a vague feeling that enterprises that have not yet started down this road will gradually catch up.

The above is my interim summary of DevOps, spanning a good half year, during which I gradually saw my main interests clearly. Setting aside every other topic, what remains is: cloud computing and DevOps. On one hand, age spares no one, and I can no longer match the young in energy, stamina, and creativity; on the other, my background and experience tell me I still have many years of accumulated technique and insight in these two areas. Cloud computing (public plus private) is where enterprise IT infrastructure is heading; DevOps is, as far as I can see, the right operating practice. One leans technical, the other managerial, which together cover my experience nicely; and within their corresponding open-source branches, I believe both still hold many projects waiting to be explored and researched.

Test Environment

My laptop environment is described below.

OS

MacBook Pro (2011), 2.3 GHz Intel Core i5, 8 GB DDR3, 256 GB SSD. OS X El Capitan version 10.11.5

Docker

Docker for Mac Version 1.12.0-rc2-beta17 (build: 9779)

$ docker version
Client:
 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   906eacd
 Built:        Fri Jun 17 20:35:33 2016
 OS/Arch:      darwin/amd64
 Experimental: true

Server:
 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   a7119de
 Built:        Wed Jun 29 10:03:33 2016
 OS/Arch:      linux/amd64
 Experimental: true
 
  $ docker-machine version
docker-machine version 0.8.0-rc1, build fffa6c9

martin@localhost ~/Documents                                                                                  [9:38:31]
 $ docker-compose version
docker-compose version 1.8.0-rc1, build 9bf6bc6
docker-py version: 1.8.1
CPython version: 2.7.9
OpenSSL version: OpenSSL 1.0.2h  3 May 2016


VirtualBox version 5.0.22r108108

Docker Images Downloaded Locally

/Users/martin/Downloads/1.12.0-rc2/boot2docker.iso

~/Downloads/rancher-all/rancher-agent-v1.0.1.tar

~/Downloads/rancher-all/rancher-agent-instance-v0.8.1.tar

~/Downloads/habitat-docker-registry.bintray.io-studio.tar

~/Downloads/rancher-all/rancher-server-stable.tar

My machine also runs a Docker Registry VM, which stores the images I want to keep around for later; just imagine where you would pull images from while on a plane :)
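As a sketch of how such a registry VM gets stocked for offline use (the registry address 192.168.99.20:5000 matches the one used later in this post; busybox here is just an illustrative image):

```shell
# Pull once while online, re-tag the image against the local
# registry VM, then push so it can be pulled later with no Internet.
docker pull busybox:latest
docker tag busybox:latest 192.168.99.20:5000/busybox:latest
docker push 192.168.99.20:5000/busybox:latest
```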

Code Downloaded Locally

https://github.com/habitat-sh/habitat-example-plans https://github.com/janeczku/habitat-plans https://github.com/chrisurwin/may2016-demo https://github.com/docker/example-voting-app

Note that the code above may need modification before it will run successfully on your machine.

Create the Rancher Server

Create the VM

Create the rancher server with docker-machine.

docker-machine create rancher --driver virtualbox \
    --virtualbox-cpu-count "1" \
    --virtualbox-disk-size "8000" \
    --virtualbox-memory "1024" \
    --virtualbox-boot2docker-url=/Users/martin/Downloads/1.12.0-rc2/boot2docker.iso \
  && eval $(docker-machine env rancher)

Load the Rancher Server Image

docker-machine ls should show the rancher node marked with an asterisk; otherwise the docker commands below will fail or act on the wrong host.

docker load < ~/Downloads/rancher-all/rancher-server-stable.tar
docker run -d --restart=always --name rancher-srv -p 8080:8080 rancher/server:stable 
docker logs -f rancher-srv

Look up the rancher server's IP address: docker-machine ip rancher

Open the Rancher server's login page in a browser: open http://Rancher_Server_IP:8080

Below is some code for keeping the VM on a fixed IP and persisting the data stored by the rancher container. I did not get it to work; it is left here for us to figure out together, and if you succeed, leave me a reply. There is also some code for setting up Jenkins and a mirror registry.

echo "ifconfig eth1 192.168.99.60 netmask 255.255.255.0 broadcast 192.168.99.255 up" | docker-machine ssh node1 sudo tee /var/lib/boot2docker/bootsync.sh > /dev/null
docker-machine regenerate-certs node1 -f
docker-machine ssh node1
sudo mkdir /mnt/sda1/var/lib/rancher 


docker@node1:/mnt/sda1/var/lib/boot2docker$ cat bootsync.sh
ifconfig eth1 192.168.99.60 netmask 255.255.255.0 broadcast 192.168.99.255 up

sudo mkdir /mnt/sda1/var/lib/rancher/
sudo ln -s /mnt/sda1/var/lib/rancher/ /var/lib/



docker load < ~/Downloads/rancher-all/jenkins.tar

docker run -d --name jenkins --privileged -p 9001:8080 -v /var/lib/docker/:/var/lib/docker/ -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker  -v /lib64/libdevmapper.so.1.02:/usr/lib/libdevmapper.so.1.02  --label io.rancher.container.network=true jenkins


docker-machine create mirror --driver virtualbox \
    --virtualbox-cpu-count "1" \
    --virtualbox-disk-size "8000" \
    --virtualbox-memory "512" \
    --virtualbox-boot2docker-url=/Users/martin/Downloads/boot2docker.iso

docker load < ~/Downloads/rancher-all/registry.tar
docker run -d -p 80:5000 --restart=always --name registry registry:2

Container Run Nodes (Rancher Agent Nodes)

Create the node1 VM

Create the container run nodes with the docker-machine command.

docker-machine create node1 --driver virtualbox \
    --engine-insecure-registry 192.168.99.20:5000 \
    --virtualbox-cpu-count "1" \
    --virtualbox-disk-size "80000" \
    --virtualbox-memory "1024" \
    --virtualbox-boot2docker-url=/Users/martin/Downloads/1.12.0-rc2/boot2docker.iso

docker-machine create node2 --driver virtualbox \
    --engine-insecure-registry 192.168.99.20:5000 \
    --virtualbox-cpu-count "1" \
    --virtualbox-disk-size "80000" \
    --virtualbox-memory "1024" \
    --virtualbox-boot2docker-url=/Users/martin/Downloads/1.12.0-rc2/boot2docker.iso

Test-run a container on node1 or node2, using the busybox image from the mirror registry. If your laptop has less than 8 GB of RAM, skip node2; one node is enough.

docker pull 192.168.99.20:5000/busybox:latest 

docker run the image to verify that node1 is working properly.
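A minimal check might look like this (assuming the busybox image was pulled from the mirror as above):

```shell
# Start the mirrored busybox once and remove the container on exit;
# seeing the echoed line confirms the docker engine on node1 works.
docker run --rm 192.168.99.20:5000/busybox:latest echo "node1 OK"
```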

Load the Rancher Agent Images

Make sure node1 is the entry marked with an asterisk in docker-machine ls.

docker load < ~/Downloads/rancher-all/rancher-agent-v1.0.1.tar
docker load < ~/Downloads/rancher-all/rancher-agent-instance-v0.8.1.tar
docker load < ~/Downloads/habitat-docker-registry.bintray.io-studio.tar

The habitat-docker-registry.bintray.io/studio image you downloaded from behind the firewall may need re-tagging, or the hab command will fail later on. First check with docker images that every image's tag information is correct.

docker tag fc27342e5e0e habitat-docker-registry.bintray.io/studio:latest

Add the node1 node to the Rancher Server.

docker run -d --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.0.1 http://192.168.99.100:8080/v1/scripts/33B68ED65CEF18F6D7BD:1466694000000:lug2KswoXOOQV4d09ZNMGTphVs0

Note: the command above must be copied from your own Rancher Server's web page; otherwise every parameter will be wrong.

The newly created node should now appear on the Hosts page. Create a minimal container (such as busybox) in the UI to pull up the Network Agent container.

Debugging the Habitat Sample Application

Reference documentation: https://www.habitat.sh/tutorials/ Be sure to read that series of articles in full before debugging the code; without the basic concepts, you will not know what to do next whether things go right or wrong.

Prerequisite: download the habitat studio docker image (you may need to get around the firewall) and load it onto node1; all the tests below were done on node1.

Install hab

habitat ships as a single executable, currently available for mac and linux. Download it from https://www.habitat.sh/docs/get-habitat — it is just a tar archive; unpack it, put it on your shell's PATH, and the install is done.
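Sketched out, the install is just the following (the archive name below is an example; substitute the current release from the download page):

```shell
# Unpack the single `hab` binary and put it on the PATH.
tar -xzf hab-0.7.0-x86_64-darwin.tar.gz
sudo mv hab /usr/local/bin/
hab --version   # confirm the shell can find it
```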

Configure the hab CLI

Run hab setup

martin@localhost ~/Documents                                                                                  $ hab setup

Habitat CLI Setup
=================

  Welcome to hab setup. Let's get started.

Set up a default origin

  Every package in Habitat belongs to an origin, which indicates the
  person or organization responsible for maintaining that package. Each
  origin also has a key used to cryptographically sign packages in that
  origin.

  Selecting a default origin tells package building operations such as
  'hab pkg build' what key should be used to sign the packages produced.
  If you do not set a default origin now, you will have to tell package
  building commands each time what origin to use.

  For more information on origins and how they are used in building
  packages, please consult the docs at
  https://www.habitat.sh/docs/build-packages-overview/

Set up a default origin? [Yes/no/quit] yes    <- enter yes here

  Enter the name of your origin. If you plan to publish your packages
  publicly, we recommend that you select one that is not already in use on
  the Habitat build service found at https://app.habitat.sh/.

  You already have a default origin set up as `martin', but feel free to
  change it if you wish.

Default origin name: [default: martin]    <- this identifier is used to sign and encrypt Habitat packages, and is referenced in the code.

  You already have an origin key for martin created and installed. Great
  work!

GitHub Access Token

  While you can build and run Habitat packages without sharing them on the
  public depot, doing so allows you to collaborate with the Habitat
  community. In addition, it is how you can perform continuous deployment
  with Habitat.

  The depot uses GitHub authentication with an access token
  (https://help.github.com/articles/creating-an-access-token-for-command-line-use/).

  If you would like to share your packages on the depot, please enter your
  GitHub access token. Otherwise, just enter No.

  For more information on sharing packages on the depot, please read the
  documentation at https://www.habitat.sh/docs/share-packages-overview/

Set up a default GitHub access token? [Yes/no/quit] yes    <- choose yes here

  Enter your GitHub access token.

  You already have a default auth token set up, but feel free to change it
  if you wish.

GitHub access token: [default: martin-github-token]    <- generate this token yourself in GitHub

Analytics

  The `hab` command-line tool will optionally send anonymous usage data to
  Habitat's Google Analytics account. This is a strictly opt-in activity
  and no tracking will occur unless you respond affirmatively to the
  question below.

  We collect this data to help improve Habitat's user experience. For
  example, we would like to know the category of tasks users are
  performing, and which ones they are having trouble with (e.g. mistyping
  command line arguments).

  To see what kinds of data are sent and how they are anonymized, please
  read more about our analytics here:
  https://www.habitat.sh/docs/about-analytics/

Enable analytics? [yes/No/quit] no    <- choose no here
» Opting out of analytics
☑ Creating /Users/martin/.hab/cache/analytics/OPTED_OUT
★ Analytics opted out, we salute you just the same!

CLI Setup Complete

  That's all for now. Thanks for using Habitat!


martin@localhost ~/Documents                                                                                 $

Everything worth noting is annotated in the transcript above. The resulting configuration lives here:

$ cat ~/.hab/etc/cli.toml
auth_token = "martin-github-token"
origin = "martin"

Debugging the Habitat Demo App

git clone https://github.com/habitat-sh/habitat-example-plans

Go into the mytutorialapp directory and edit the second line of plan.sh; after my change the line reads

pkg_origin=martin

martin is the origin I configured in the hab CLI.
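For context, the header of a Habitat plan.sh is just a set of bash variables read by hab-plan-build; a sketch of what the mytutorialapp header looks like after the edit (fields other than pkg_origin are taken from the tutorial app and should be treated as illustrative):

```shell
# plan.sh header: package metadata consumed by hab-plan-build.
pkg_origin=martin        # must match the origin configured via `hab setup`
pkg_name=mytutorialapp
pkg_version=0.1.0
pkg_deps=(core/node)     # runtime dependency resolved during the build
```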

The test below really runs only two hab commands, both executed in the hab studio shell, which is in fact just the shell of a studio container.

Before running the commands below, make sure you are communicating properly with node1; in the demo below I used the default node. (The demo environment has since been deleted.)

$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
default   -        virtualbox   Running   tcp://192.168.99.100:2376           v1.11.1

$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES


"Properly" means that every docker command runs without errors.

Build the habitat demo code

In the directory containing plan.sh, run the hab studio enter command.

martin@localhost ~/Documents/GitHub/habitat-example-plans/mytutorialapp                                      
 $ hab studio enter                                                                                       [±master ●●]
   hab-studio: Creating Studio at /hab/studios/src (default)
   hab-studio: Importing martin secret origin key
» Importing origin key from standard input
★ Imported secret origin key martin-20160630040241.
   hab-studio: Entering Studio at /hab/studios/src (default)
   hab-studio: Exported: HAB_ORIGIN=martin

[1][default:/src:0]# build
   : Loading /src/plan.sh
   mytutorialapp: Plan loaded
   mytutorialapp: hab-plan-build setup
   mytutorialapp: Using HAB_BIN=/hab/pkgs/core/hab/0.7.0/20160614230104/bin/hab for installs, signing, and hashing
   mytutorialapp: Resolving dependencies
» Installing core/node
→ Using core/gcc-libs/5.2.0/20160612075020
→ Using core/glibc/2.22/20160612063629
→ Using core/linux-headers/4.3/20160612063537
↓ Downloading core/node/4.2.6/20160612143531
    6.44 MB / 6.44 MB \ [=======================================================================] 100.00 % 457.82 KB/s  ↓ Downloading core-20160612031944 public origin key
    75 B / 75 B | [=============================================================================] 100.00 % 575.13 KB/s  ☑ Cached core-20160612031944 public origin key
✓ Installed core/node/4.2.6/20160612143531
★ Install of core/node complete with 4 packages installed.
   mytutorialapp: Resolved dependency 'core/node' to /hab/pkgs/core/node/4.2.6/20160612143531
   mytutorialapp: Setting PATH=/hab/pkgs/core/node/4.2.6/20160612143531/bin:/hab/pkgs/core/hab-plan-build/0.7.0/20160614232259/bin:/hab/pkgs/core/bash/4.3.42/20160612075613/bin:/hab/pkgs/core/binutils/2.25.1/20160612064534/bin:/hab/pkgs/core/bzip2/1.0.6/20160612075040/bin:/hab/pkgs/core/coreutils/8.24/20160612075329/bin:/hab/pkgs/core/file/5.24/20160612064523/bin:/hab/pkgs/core/findutils/4.4.2/20160612080341/bin:/hab/pkgs/core/gawk/4.1.3/20160612075739/bin:/hab/pkgs/core/grep/2.22/20160612075540/bin:/hab/pkgs/core/gzip/1.6/20160612080637/bin:/hab/pkgs/core/hab/0.7.0/20160614230104/bin:/hab/pkgs/core/sed/4.2.2/20160612075228/bin:/hab/pkgs/core/tar/1.28/20160612075701/bin:/hab/pkgs/core/unzip/6.0/20160612081414/bin:/hab/pkgs/core/wget/1.16.3/20160612081342/bin:/hab/pkgs/core/xz/5.2.2/20160612080402/bin:/hab/pkgs/core/acl/2.2.52/20160612075215/bin:/hab/pkgs/core/attr/2.4.47/20160612075207/bin:/hab/pkgs/core/glibc/2.22/20160612063629/bin:/hab/pkgs/core/less/481/20160612080021/bin:/hab/pkgs/core/libcap/2.24/20160612075226/bin:/hab/pkgs/core/libidn/1.32/20160612081104/bin:/hab/pkgs/core/ncurses/6.0/20160612075116/bin:/hab/pkgs/core/openssl/1.0.2h/20160612081127/bin:/hab/pkgs/core/pcre/8.38/20160612075520/bin
mkdir: created directory '/hab/cache/src'
   mytutorialapp: Downloading 'https://s3-us-west-2.amazonaws.com/mytutorialapp/mytutorialapp-0.1.0.tar.gz' to 'mytutorialapp-0.1.0.tar.gz'
--2016-07-01 02:27:51--  https://s3-us-west-2.amazonaws.com/mytutorialapp/mytutorialapp-0.1.0.tar.gz
Resolving s3-us-west-2.amazonaws.com (s3-us-west-2.amazonaws.com)... 54.231.184.216
Connecting to s3-us-west-2.amazonaws.com (s3-us-west-2.amazonaws.com)|54.231.184.216|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1041 (1.0K) [application/x-gzip]
Saving to: 'mytutorialapp-0.1.0.tar.gz'

mytutorialapp-0.1.0.tar.gz    100%[===================================================>]   1.02K  --.-KB/s   in 0.03s

2016-07-01 02:28:08 (32.1 KB/s) - 'mytutorialapp-0.1.0.tar.gz' saved [1041/1041]

   mytutorialapp: Downloaded 'mytutorialapp-0.1.0.tar.gz'
   mytutorialapp: Verifying mytutorialapp-0.1.0.tar.gz
   mytutorialapp: Checksum verified for mytutorialapp-0.1.0.tar.gz
   mytutorialapp: Clean the cache
   mytutorialapp: Unpacking mytutorialapp-0.1.0.tar.gz
   mytutorialapp: Setting build environment
   mytutorialapp: Setting PREFIX=/hab/pkgs/martin/mytutorialapp/0.1.0/20160701022725
   mytutorialapp: Setting LD_RUN_PATH=/hab/pkgs/core/node/4.2.6/20160612143531/lib
   mytutorialapp: Setting CFLAGS=-I/hab/pkgs/core/node/4.2.6/20160612143531/include
   mytutorialapp: Setting LDFLAGS=-L/hab/pkgs/core/node/4.2.6/20160612143531/lib
   mytutorialapp: Preparing to build
   mytutorialapp: Building
npm WARN package.json mytutorialapp@0.1.0 No repository field.
npm WARN package.json mytutorialapp@0.1.0 No README data
nconf@0.8.4 node_modules/nconf
├── ini@1.3.4
├── secure-keys@1.0.0
├── async@1.5.2
└── yargs@3.32.0 (decamelize@1.2.0, camelcase@2.1.1, window-size@0.1.4, y18n@3.2.1, os-locale@1.4.0, cliui@3.2.0, string-width@1.0.1)
   mytutorialapp: Installing
'node_modules/nconf' -> '/hab/pkgs/martin/mytutorialapp/0.1.0/20160701022725/node_modules/nconf'

(several hundred lines of output omitted)

'node_modules/nconf/node_modules/secure-keys/test/simple-test.js' -> '/hab/pkgs/martin/mytutorialapp/0.1.0/20160701022725/node_modules/nconf/node_modules/secure-keys/test/simple-test.js'
'node_modules/nconf/node_modules/secure-keys/test/test.secret.key' -> '/hab/pkgs/martin/mytutorialapp/0.1.0/20160701022725/node_modules/nconf/node_modules/secure-keys/test/test.secret.key'
'node_modules/nconf/node_modules/secure-keys/package.json' -> '/hab/pkgs/martin/mytutorialapp/0.1.0/20160701022725/node_modules/nconf/node_modules/secure-keys/package.json'
   mytutorialapp: Writing configuration
   mytutorialapp: Writing service management scripts
   mytutorialapp: Stripping unneeded symbols from binaries and libraries
   mytutorialapp: Creating manifest
   mytutorialapp: Building package metadata
   mytutorialapp: Generating blake2b hashes of all files in the package
   mytutorialapp: Generating signed metadata FILES
» Signing mytutorialapp_blake2bsums
☛ Signing mytutorialapp_blake2bsums with martin-20160630040241 to create /hab/pkgs/martin/mytutorialapp/0.1.0/20160701022725/FILES
★ Signed artifact /hab/pkgs/martin/mytutorialapp/0.1.0/20160701022725/FILES.
   mytutorialapp: Generating package artifact
/hab/pkgs/core/tar/1.28/20160612075701/bin/tar: Removing leading `/' from member names
/hab/cache/artifacts/.martin-mytutorialapp-0.1.0-20160701022725-x86_64-linux.tar (1/1)
  100 %       121.4 KiB / 900.0 KiB = 0.135
» Signing /hab/cache/artifacts/.martin-mytutorialapp-0.1.0-20160701022725-x86_64-linux.tar.xz
☛ Signing /hab/cache/artifacts/.martin-mytutorialapp-0.1.0-20160701022725-x86_64-linux.tar.xz with martin-20160630040241 to create /hab/cache/artifacts/martin-mytutorialapp-0.1.0-20160701022725-x86_64-linux.hart
★ Signed artifact /hab/cache/artifacts/martin-mytutorialapp-0.1.0-20160701022725-x86_64-linux.hart.
'/hab/cache/artifacts/martin-mytutorialapp-0.1.0-20160701022725-x86_64-linux.hart' -> '/src/results/martin-mytutorialapp-0.1.0-20160701022725-x86_64-linux.hart'
   mytutorialapp: hab-plan-build cleanup
   mytutorialapp:
   mytutorialapp: Source Cache: /hab/cache/src/mytutorialapp-0.1.0
   mytutorialapp: Installed Path: /hab/pkgs/martin/mytutorialapp/0.1.0/20160701022725
   mytutorialapp: Artifact: /src/results/martin-mytutorialapp-0.1.0-20160701022725-x86_64-linux.hart
   mytutorialapp: Build Report: /src/results/last_build.env
   mytutorialapp: SHA256 Checksum: d4bfb3a44989b8a5b1295eac2600d75f42dd2be6f537344312c8917cba47d05d
   mytutorialapp: Blake2b Checksum: fbff257eb36fffa61e6cbf5ec89fa3f507095f80f5cca610c2bb72685d758706
   mytutorialapp:
   mytutorialapp: I love it when a plan.sh comes together.
   mytutorialapp:
   mytutorialapp: Build time: 1m3s
[2][default:/src:0]#

Check the results: a results directory now appears in the code directory; keep an eye on it, and pay particular attention to the last section of the build command's output.



The climax of last night's sharing session: exporting a docker image from habitat

It really is just one command, run inside the Habitat Studio: hab pkg export docker martin/mytutorialapp

Exporting a docker image, like the build, can fail, because it needs to download the required code and habitat packages from the network. Packages prefixed core/ are Habitat's own core packages; the idea, essentially, is to wrap every module you might need in their own pkg format, and then have the Habitat services resolve, deploy, and run them.
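Once the export completes, the image lands in the local docker daemon and runs like any other container; a minimal sketch (mapping port 8080 is an assumption about the tutorial app's configured listen port):

```shell
# The exported image bundles the hab supervisor as its entrypoint,
# so starting the container starts the mytutorialapp service.
docker run -it -p 8080:8080 martin/mytutorialapp
```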

[4][default:/src:0]#  hab pkg export docker martin/mytutorialapp
   hab-studio: Creating Studio at /tmp/hab-pkg-dockerize-XxsS/rootfs (baseimage)
 Using local package for martin/mytutorialapp
 Using local package for core/gcc-libs/5.2.0/20160612075020 via martin/mytutorialapp
 Using local package for core/glibc/2.22/20160612063629 via martin/mytutorialapp
 Using local package for core/linux-headers/4.3/20160612063537 via martin/mytutorialapp
 Using local package for core/node/4.2.6/20160612143531 via martin/mytutorialapp
» Installing core/hab
↓ Downloading core/hab/0.7.0/20160614230104
    2.23 MB / 2.23 MB / [=======================================================================] 100.00 % 500.60 KB/s  ↓ Downloading core-20160612031944 public origin key
    75 B / 75 B | [=============================================================================] 100.00 % 378.76 KB/s  ☑ Cached core-20160612031944 public origin key
✓ Installed core/hab/0.7.0/20160614230104
★ Install of core/hab complete with 1 packages installed.
» Installing core/hab-sup
↓ Downloading core/busybox-static/1.24.2/20160612081725
    510.89 KB / 510.89 KB | [====================================================================] 100.00 % 89.61 KB/s  ✓ Installed core/busybox-static/1.24.2/20160612081725
↓ Downloading core/bzip2/1.0.6/20160612075040
    141.05 KB / 141.05 KB - [===================================================================] 100.00 % 349.94 KB/s  ✓ Installed core/bzip2/1.0.6/20160612075040
↓ Downloading core/cacerts/2016.04.20/20160612081125
    132.32 KB / 132.32 KB | [===================================================================] 100.00 % 370.21 KB/s  ✓ Installed core/cacerts/2016.04.20/20160612081125
→ Using core/gcc-libs/5.2.0/20160612075020
→ Using core/glibc/2.22/20160612063629
↓ Downloading core/libarchive/3.2.0/20160612140528
    584.98 KB / 584.98 KB | [===================================================================] 100.00 % 340.75 KB/s  ✓ Installed core/libarchive/3.2.0/20160612140528
↓ Downloading core/libsodium/1.0.8/20160612140317
    187.96 KB / 187.96 KB \ [===================================================================] 100.00 % 200.27 KB/s  ✓ Installed core/libsodium/1.0.8/20160612140317
→ Using core/linux-headers/4.3/20160612063537
↓ Downloading core/openssl/1.0.2h/20160612081127
    2.10 MB / 2.10 MB | [=======================================================================] 100.00 % 518.78 KB/s  ✓ Installed core/openssl/1.0.2h/20160612081127
↓ Downloading core/xz/5.2.2/20160612080402
    247.38 KB / 247.38 KB \ [===================================================================] 100.00 % 468.42 KB/s  ✓ Installed core/xz/5.2.2/20160612080402
↓ Downloading core/zlib/1.2.8/20160612064520
    73.06 KB / 73.06 KB / [=====================================================================] 100.00 % 315.44 KB/s  ✓ Installed core/zlib/1.2.8/20160612064520
↓ Downloading core/hab-sup/0.7.0/20160614232939
    1.54 MB / 1.54 MB | [=======================================================================] 100.00 % 563.90 KB/s  ✓ Installed core/hab-sup/0.7.0/20160614232939
★ Install of core/hab-sup complete with 12 packages installed.
» Symlinking hab from core/hab into /tmp/hab-pkg-dockerize-XxsS/rootfs/hab/bin
★ Binary hab from core/hab/0.7.0/20160614230104 symlinked to /tmp/hab-pkg-dockerize-XxsS/rootfs/hab/bin/hab
» Symlinking bash from core/busybox-static into /tmp/hab-pkg-dockerize-XxsS/rootfs/bin
★ Binary bash from core/busybox-static/1.24.2/20160612081725 symlinked to /tmp/hab-pkg-dockerize-XxsS/rootfs/bin/bash
» Symlinking sh from core/busybox-static into /tmp/hab-pkg-dockerize-XxsS/rootfs/bin
★ Binary sh from core/busybox-static/1.24.2/20160612081725 symlinked to /tmp/hab-pkg-dockerize-XxsS/rootfs/bin/sh
Sending build context to Docker daemon 194.3 MB
Step 1 : FROM scratch
 --->
Step 2 : ENV export PATH=:/hab/pkgs/core/glibc/2.22/20160612063629/bin:/hab/pkgs/core/node/4.2.6/20160612143531/bin:/hab/pkgs/core/hab-sup/0.7.0/20160614232939/bin:/hab/pkgs/core/busybox-static/1.24.2/20160612081725/bin:/hab/pkgs/core/bzip2/1.0.6/20160612075040/bin:/hab/pkgs/core/glibc/2.22/20160612063629/bin:/hab/pkgs/core/openssl/1.0.2h/20160612081127/bin:/hab/pkgs/core/xz/5.2.2/20160612080402/bin:/hab/pkgs/core/busybox-static/1.24.2/20160612081725/bin:/hab/bin
 ---> Running in 117e90c151e7
 ---> 7f33585a25ae
Removing intermediate container 117e90c151e7
Step 3 : WORKDIR /
 ---> Running in 952257966d96
 ---> c0cf3715cbcf
Removing intermediate container 952257966d96
Step 4 : ADD rootfs /
 ---> d2691da93ccf
Removing intermediate container 4aa80e97ea57
Step 5 : VOLUME /hab/svc/mytutorialapp/data /hab/svc/mytutorialapp/config
 ---> Running in f1edcb653432
 ---> bd8888453939
Removing intermediate container f1edcb653432
Step 6 : EXPOSE 9631 8080
 ---> Running in 9ca7725ed13e
 ---> 256a04cd0fe2
Removing intermediate container 9ca7725ed13e
Step 7 : ENTRYPOINT /init.sh
 ---> Running in 81930dff8f4e
 ---> d7bdec08530e
Removing intermediate container 81930dff8f4e
Step 8 : CMD start martin/mytutorialapp
 ---> Running in c8cc53d92bc8
 ---> 8d5e0fe85395
Removing intermediate container c8cc53d92bc8
Successfully built 8d5e0fe85395
[5][default:/src:0]#

Check that the Docker image now exists: exit the studio container and run docker images.

[5][default:/src:0]# exit
logout

martin@localhost ~/Documents/GitHub/habitat-example-plans/mytutorialapp $ docker images
REPOSITORY                                  TAG                    IMAGE ID            CREATED             SIZE
martin/mytutorialapp                        0.1.0-20160701024401   8d5e0fe85395        3 minutes ago       187.7 MB
martin/mytutorialapp                        latest                 8d5e0fe85395        3 minutes ago       187.7 MB


Run the demo

Run the following on the command line:

$ docker run -it -p 8080:8080 martin/mytutorialapp

Open node1's IP on port 8080 in a browser; you should see the hello world page.

Add the test in the Rancher web UI

Publish this demo in the Rancher catalog.

https://github.com/martinliu/hab-catalog The code above is a work in progress; help finishing it is welcome.

Test Rancher's official Redis demo

Reference article: http://rancher.com/using-habitat-to-create-rancher-catalog-templates/ Its demo and the published catalog entry both deploy without obvious errors, but the service itself does not come up: it reports a hostname error, and the Redis nodes never form a cluster.

If you manage to fix it, please reply with a link to your code.

Bonus: debugging Docker's official voting app

Download the voting example application.

git clone https://github.com/martinliu/example-voting-app.git

Enter the application's directory and change the base image in every Dockerfile and compose file to point at the local mirror server:

1. result/tests/Dockerfile -> FROM 192.168.99.20:5000/node
2. result/Dockerfile -> FROM 192.168.99.20:5000/node:5.11.0-slim
3. vote/Dockerfile -> FROM 192.168.99.20:5000/python:2.7-alpine
4. worker/Dockerfile -> FROM 192.168.99.20:5000/microsoft/dotnet:1.0.0-preview1
5. docker-compose.yml -> image: 192.168.99.20:5000/redis:alpine
6. docker-compose.yml -> image: 192.168.99.20:5000/postgres:9.4
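
Instead of editing each file by hand, the FROM lines can be rewritten with sed. A minimal sketch, shown on a scratch file so you can try it safely (for real use, point it at result/Dockerfile, vote/Dockerfile, worker/Dockerfile, and so on; the mirror address is the one assumed throughout this walkthrough):

```shell
# Prefix a Dockerfile's FROM image with the local mirror registry.
MIRROR=192.168.99.20:5000

# Demonstrate on a scratch copy rather than a real project file.
tmp=$(mktemp)
printf 'FROM python:2.7-alpine\n' > "$tmp"

# Portable in-place edit (works on both GNU and BSD sed).
sed "s|^FROM |FROM ${MIRROR}/|" "$tmp" > "$tmp.new" && mv "$tmp.new" "$tmp"

cat "$tmp"   # FROM 192.168.99.20:5000/python:2.7-alpine
```

This only handles FROM lines without an existing registry prefix, which matches the upstream voting-app Dockerfiles.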

Because the build installs a variety of packages online, first make sure you have a sufficiently stable connection to the Internet outside China (ideally through a US proxy), and only then run the project's build command.

ping facebook.com
64 bytes from 173.252.90.132: icmp_seq=0 ttl=79 time=4187.066 ms
64 bytes from 173.252.90.132: icmp_seq=1 ttl=79 time=3186.904 ms
64 bytes from 173.252.90.132: icmp_seq=2 ttl=79 time=2515.415 ms
64 bytes from 173.252.90.132: icmp_seq=6 ttl=79 time=296.457 ms
64 bytes from 173.252.90.132: icmp_seq=7 ttl=79 time=410.215 ms
^C
--- facebook.com ping statistics ---
8 packets transmitted, 5 packets received, 37.5% packet loss
round-trip min/avg/max/stddev = 296.457/2119.211/4187.066/1537.275 ms

docker-compose build


The output above shows the proxy is working, but the connection quality is poor: both latency and packet loss are severe, which may cause package downloads to fail during the build.
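
The loss figure in the summary follows directly from the counts: 8 packets sent, 5 received, so (8-5)/8 of the traffic was lost:

```shell
# Packet-loss arithmetic from the ping summary above:
# 8 transmitted, 5 received -> (8-5)/8 * 100 = 37.5 % loss.
awk 'BEGIN { printf "%.1f%%\n", (8 - 5) / 8 * 100 }'   # 37.5%
```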

After the build finishes, check whether the target image files were generated. If the output looks like the following, the local integration build succeeded.

$ docker images                                                                                                                            
REPOSITORY                            TAG                 IMAGE ID            CREATED             SIZE
examplevotingapp_result               latest              9bb4126b0905        5 minutes ago       225.8 MB
examplevotingapp_worker               latest              292396a5aba4        6 minutes ago       644.1 MB
examplevotingapp_vote                 latest              28052191beea        10 minutes ago      68.31 MB
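
The same check can be scripted. A sketch that counts the freshly built images; here the captured listing above stands in for the live command (in practice, pipe docker images itself into the grep):

```shell
# Count examplevotingapp_* images in a `docker images` listing.
# Real usage: count=$(docker images | grep -c '^examplevotingapp_')
count=$(printf '%s\n' \
  'examplevotingapp_result  latest  9bb4126b0905' \
  'examplevotingapp_worker  latest  292396a5aba4' \
  'examplevotingapp_vote    latest  28052191beea' |
  grep -c '^examplevotingapp_')
echo "$count images built"   # 3 images built
```

A count of 3 matches the three images the build is expected to produce.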


Run a local functional test of the integration result on node1 by starting the project with docker-compose. First inspect the compose file, then run up.

$ docker-compose config                                                                                                                      
networks: {}
services:
  db:
    image: 192.168.99.20:5000/postgres:9.4
  redis:
    image: 192.168.99.20:5000/redis:alpine
    ports:
    - '6379'
  result:
    build:
      context: /Users/martin/Documents/GitHub/example-voting-app/result
    command: nodemon --debug server.js
    ports:
    - 5001:80
    - 5858:5858
    volumes:
    - /Users/martin/Documents/GitHub/example-voting-app/result:/app:rw
  vote:
    build:
      context: /Users/martin/Documents/GitHub/example-voting-app/vote
    command: python app.py
    ports:
    - 5000:80
    volumes:
    - /Users/martin/Documents/GitHub/example-voting-app/vote:/app:rw
  worker:
    build:
      context: /Users/martin/Documents/GitHub/example-voting-app/worker
version: '2.0'
volumes: {}

$ docker-compose up                                                                                                                            
Recreating examplevotingapp_vote_1
Recreating examplevotingapp_worker_1
Starting examplevotingapp_db_1
Starting examplevotingapp_redis_1
Recreating examplevotingapp_result_1
Attaching to examplevotingapp_db_1, examplevotingapp_redis_1, examplevotingapp_worker_1, examplevotingapp_result_1, examplevotingapp_vote_1
redis_1   |                 _._
redis_1   |            _.-``__ ''-._
redis_1   |       _.-``    `.  `_.  ''-._           Redis 3.2.1 (00000000/0) 64 bit
redis_1   |   .-`` .-```.  ```\/    _.,_ ''-._
db_1      | LOG:  database system was shut down at 2016-06-20 09:58:19 UTC
redis_1   |  (    '      ,       .-`  | `,    )     Running in standalone mode
redis_1   |  |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
redis_1   |  |    `-._   `._    /     _.-'    |     PID: 1
redis_1   |   `-._    `-._  `-./  _.-'    _.-'
redis_1   |  |`-._`-._    `-.__.-'    _.-'_.-'|
redis_1   |  |    `-._`-._        _.-'_.-'    |           http://redis.io
redis_1   |   `-._    `-._`-.__.-'_.-'    _.-'
redis_1   |  |`-._`-._    `-.__.-'    _.-'_.-'|
db_1      | LOG:  MultiXact member wraparound protections are now enabled
redis_1   |  |    `-._`-._        _.-'_.-'    |
redis_1   |   `-._    `-._`-.__.-'_.-'    _.-'
db_1      | LOG:  database system is ready to accept connections
redis_1   |       `-._    `-.__.-'    _.-'
redis_1   |           `-._        _.-'
redis_1   |               `-.__.-'
redis_1   |
redis_1   | 1:M 20 Jun 10:13:36.216 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1   | 1:M 20 Jun 10:13:36.216 # Server started, Redis version 3.2.1
redis_1   | 1:M 20 Jun 10:13:36.216 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
db_1      | LOG:  autovacuum launcher started
redis_1   | 1:M 20 Jun 10:13:36.216 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1   | 1:M 20 Jun 10:13:36.216 * The server is now ready to accept connections on port 6379
vote_1    |  * Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
vote_1    |  * Restarting with stat
result_1  | [nodemon] 1.9.2
result_1  | [nodemon] to restart at any time, enter `rs`
result_1  | [nodemon] watching: *.*
result_1  | [nodemon] starting `node --debug server.js`
result_1  | Debugger listening on port 5858
vote_1    |  * Debugger is active!
vote_1    |  * Debugger pin code: 139-254-286
worker_1  | Found redis at 172.19.0.2
result_1  | Mon, 20 Jun 2016 10:13:40 GMT body-parser deprecated bodyParser: use individual json/urlencoded middlewares at server.js:67:9
result_1  | Mon, 20 Jun 2016 10:13:40 GMT body-parser deprecated undefined extended: provide extended option at ../node_modules/body-parser/index.js:105:29
result_1  | App running on port 80
result_1  | Connected to db
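
The Redis warnings in the log above each name their own fix. A persistent version of the kernel-tunable fixes, as an /etc/sysctl.conf fragment on the Docker host (exactly the values the warnings suggest):

```
# /etc/sysctl.conf additions, mirroring the Redis startup warnings
net.core.somaxconn = 511
vm.overcommit_memory = 1
```

Apply with sysctl -p (or sysctl -w for a one-off change). Disabling Transparent Huge Pages additionally needs the echo never > /sys/kernel/mm/transparent_hugepage/enabled command the log itself prints, run as root. None of this is required for the demo to function; it only silences the warnings.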


Open a browser to test the vote application.

open http://192.168.99.114:5000


When working correctly, it looks like the figure below: voting

Open a browser to test the result application.

open http://192.168.99.114:5001


When working correctly, it looks like the figure below: result

You should also see these running containers on the hosts page in Rancher. voting-in-ranche

This completes the application build and functional-testing walkthrough; press Ctrl+C to stop docker-compose up.

^CGracefully stopping... (press Ctrl+C again to force)
Stopping examplevotingapp_worker_1 ... done
Stopping examplevotingapp_result_1 ... done
Stopping examplevotingapp_vote_1 ... done
Stopping examplevotingapp_db_1 ... done
Stopping examplevotingapp_redis_1 ... done