searchData.json
[{"title":"aria2c 下载神器","url":"/2021/02/22/aria2c下载神器/","content":"\n## 安装\n\n- rew install aria2\n\n### 查看使用方法\n\n- aria2c -h\n\n### 多线程下载\n\n- aria2c -s 10\t\t使用10个线程下载,默认5个\n\n### bt种子下载(torrent)\n\n- aria2c -T 指定bt文件\n\n### 还有其他选项\n\n- 用户名密码鉴权下载等等,大家可以自己琢磨使用\n\n"},{"title":"使用operator-sdk笔记","url":"/2021/01/18/使用operator-sdk笔记/","content":"\n## brew安装operator-sdk\n\n- brew install operator-sdk\n\n![brew安装operator-sdk](使用operator-sdk笔记/brew安装operator-sdk.png)\n\n## 初始化项目\n\n1. 创建项目名\n\n 1. ```sh\n mkdir memcached-operator\n cd memcached-operator\n ```\n\n2. 新建项目\n \n 1. operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator\n\n![初始化项目](使用operator-sdk笔记/初始化项目.png)\n\n 2. 目录结构\n\n ![目录结构1](使用operator-sdk笔记/目录结构1.png)\n\n - config下获得启动配置\n\n ```sh\n .\n ├── certmanager\n ├── crd\n ├── default\n ├── manager\n ├── prometheus\n ├── rbac\n ├── samples\n ├── scorecard\n └── webhook\n ```\n\n 1. Default 包含kustomize基础,在标准配置中启动控制器\n 2. manager控制集群pod启动\n\n - main.go启动文件\n\n - PROJECT新组件的元数据\n\n## 创建扩展式API\n\n1. 创建api\n\n 1. ```sh\n operator-sdk create api --group=cache --version v1 --kind Memcached --resource=true --controller=true\n ```\n\n![创建扩展api](使用operator-sdk笔记/创建扩展api.png)\n\n## 编译推送operator镜像\n\n1. ```sh\n make docker-build docker-push IMG=<some-registry>/<project-name>:<tag>\n \n ---上面是模版\n \n make docker-build docker-push IMG=cainiaohui/memcached-operator:v0.1\n ```\n\n## 运行operator\n\n1. ```sh\n make install\n make deploy IMG=<some-registry>/<project-name>:<tag>\n ```\n\n2.创建custom resource\n\n```sh\nkubectl apply -f config/samples/cache_v1_memcached.yaml\n```\n\n3. 日志查看\n\n ```sh\n kubectl logs deployment.apps/memcached-operator-controller-manager -n memcached-operator-system -c manager\n ```\n\n4.清除CR\n\n```sh\nkubectl delete -f config/samples/cache_v1_memcached.yaml\n```\n\n5.卸载operator和CRDs\n\n```sh\nkustomize build config/default | kubectl delete -f -\n```\n\n# 详细分析\n\n## 从main函数开始\n\n1. 通过newManager封装指mgr变量中\n\n ```go\n mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{\n Scheme: scheme,\n MetricsBindAddress: metricsAddr,\n Port: 9443,\n LeaderElection: enableLeaderElection,\n LeaderElectionID: \"f1c5ece8.example.com\",\n })\n ```\n\n2. 
给自定义的controller分配client参数、log和scheme,并且建立manager\n\n ```go\n if err = (&controllers.MemcachedReconciler{\n Client: mgr.GetClient(),\n Log: ctrl.Log.WithName(\"controllers\").WithName(\"Memcached\"),\n Scheme: mgr.GetScheme(),\n }).SetupWithManager(mgr); err != nil {\n setupLog.Error(err, \"unable to create controller\", \"controller\", \"Memcached\")\n os.Exit(1)\n }\n ```\n\n## 是否需要创建多个API和控制器\n\n- 如果需要创建多个API组,需要开启 multigroup=true\n\n- 开启方式两种\n\n - 修改PROJECT\n\n ```json\n domain: example.com\n layout: go.kubebuilder.io/v2\n multigroup: true\t\t//手动添加\n ```\n\n - 创建API时候定义\n\n ```sh\n operator-sdk edit --multigroup=true\n ```\n\n## 创建新的API和controller\n\n- ```console\n operator-sdk create api --group=cache1 --version=v1alpha1 --kind=Memcached1\n ```\n\n期间会提示Resource 和 Controller\n\n![自定义创建多个api2](使用operator-sdk笔记/自定义创建多个api2.png)\n\n- 开启multigroup后,原本的目录结构就改变了如下图。controller层下会多一个group文件夹、并且api也会变成apis,而且下面的目录由两层变成三层。\n\n ![开启multigroup目录结构变化1](使用operator-sdk笔记/开启multigroup目录结构变化1.png)\n\n## 定义API\n\n- 自定义CR资源的API需要填写在apis/cache1/v1alpha1/memcached1_types.go\n\n ![修改CR属性1](使用operator-sdk笔记/修改CR属性1.png)\n\n- Memcached1的结构体封装上面spec和status组成一个API值\n\n ```go\n type Memcached1 struct {\n metav1.TypeMeta `json:\",inline\"`\n metav1.ObjectMeta `json:\"metadata,omitempty\"`\n \n Spec Memcached1Spec `json:\"spec,omitempty\"`\n Status Memcached1Status `json:\"status,omitempty\"`\n }\n ```\n\n## 更新generated\n\n```sh\nmake generate\n```\n\n- makefile目标将调用[controller-gen](https://sigs.k8s.io/controller-tools)实用程序来更新`api/v1alpha1/zz_generated.deepcopy.go`文件,以确保我们API的Go类型定义实现了`runtime.Object`所有Kind类型必须实现的接口\n\n## 生成crd config文件\n\n```sh\n make manifests\n```\n\n- makefile目标将调用controller-gen生成CRD清单,位于`config/crd/bases/cache.example.com_memcacheds.yaml`\n- 期间会将OpenAPIv3模式添加到`spec.validation`块中的CRD清单中。此验证块允许Kubernetes在创建或更新Memcached自定义资源时验证其属性。\n\n![openAPIS添加验证](使用operator-sdk笔记/openAPIS添加验证.png)\n\n## 控制器监控资源代码\n\n1. Controllers/cache1/memcached1_controller.go的SetupWithManager函数构建operator。 Manager为方法库\n\n```go\nfunc (r *Memcached1Reconciler) SetupWithManager(mgr ctrl.Manager) error {\n return ctrl.NewControllerManagedBy(mgr).\n For(&cache1v1alpha1.Memcached1{}).\n WithOptions(controller.Options{ //手动添加\n MaxConcurrentReconciles: 2, //手动添加\n Reconciler: nil, //手动添加\n RateLimiter: nil, //手动添加\n Log: nil, //手动添加\n }).\n Complete(r)\n}\n```\n\n手动添加部分为设置控制器的最大并发数。\n\n2. 每个Controller都有一个Reconciler对象,该对象具有`Reconcile()`实现协调循环的方法。向协调循环传递[`Request`](https://godoc.org/github.com/kubernetes-sigs/controller-runtime/pkg/reconcile#Request)参数,该参数是命名空间/名称键,用于从缓存中查找主要资源对象Memcached:(并且用来处理rbac)\n\n```go\n// +kubebuilder:rbac:groups=cache1.example.com,resources=memcached1s,verbs=get;list;watch;create;update;patch;delete\n// +kubebuilder:rbac:groups=cache1.example.com,resources=memcached1s/status,verbs=get;update;patch\n\nfunc (r *Memcached1Reconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {\n _ = context.Background()\n _ = r.Log.WithValues(\"memcached1\", req.NamespacedName)\n\n // your logic here\n\n return ctrl.Result{}, nil\n}\n```\n\n更新crd\n\n```sh\n make manifests\n```\n\n构建并且运行operator\n\n```sh\n$ make install\n```\n\n构建image,推送images\n\n```sh\nexport USERNAME = <query-username>\nmake docker-build IMG=quay.io/$USERNAME/memcached-operator:v0.0.1\nmake docker-push IMG=quay.io/$USERNAME/memcached-operator:v0.0.1\n```\n\n部署operator\n\n```sh\nmake deploy IMG=quay.io/$USERNAME/memcached-operator:v0.0.1\n```\n\n清理数据两种方式\n\n1. 
添加Makefile undeploy字段\n\n ```makefile\n undeploy:\n \t$KUSTOMIZE build config/default | kubectl delete -f -\n ```\n\n2. 完成安装后采用命令形式删除资源\n\n ```sh\n make undeploy\n ```\n\n## operator操作范围\n\n1. 需要在NewManager时定义namesapce\n\n```go\nmgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{\n\t\tScheme: scheme,\n\t\tMetricsBindAddress: metricsAddr,\n\t\tPort: 9443,\n\t\tLeaderElection: enableLeaderElection,\n\t\tLeaderElectionID: \"f1c5ece8.example.com\",\n\t\tNamespace:\t\t\t\"operator-namespace\",\t//手动添加\n\t})\n```\n\n2. 监控一组名称空间\n\n```go\n\tmultiNamespace := []string{\"foo\", \"bar\"} //手动添加,一组名称空间\n\n\tmgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{\n\t\tScheme: scheme,\n\t\tMetricsBindAddress: metricsAddr,\n\t\tPort: 9443,\n\t\tLeaderElection: enableLeaderElection,\n\t\tLeaderElectionID: \"f1c5ece8.example.com\",\n\t\tNamespace:\t\t\t\"operator-namespace\",\t//手动添加\t所属名称空间\n\t\tNewCache:\t\t\tcache.MultiNamespacedCacheBuilder(multiNamespace),\t//手动添加 \t监控的名称空间\n\t})\n```\n\n3. 授予operator permissions 是由config/rbac下的role.yaml 和 role_binding.yaml来决定的。\n\n 1. 如果需要更改operator的权限需要更改这两个文件\n\n4. 采用role权限来替换rolebinding,\n\n 1. 需要指定namespace如上面第一条增加newManager函数的namespace值。\n 2. 修改RBAC markers的内容(上面的内容)-启动make manifest自动修改role.yaml文件\n\n5. 使用读取环境变了的形式,捕获namespace\n\n ```go\n // getWatchNamespace returns the Namespace the operator should be watching for changes\n func getWatchNamespace() (string, error) {\t//手动添加 使用环境变量的形式控制\n \t// WatchNamespaceEnvVar is the constant for env variable WATCH_NAMESPACE\n \t// which specifies the Namespace to watch.\n \t// An empty value means the operator is running with cluster scope.\n \tvar watchNamespaceEnvVar = \"WATCH_NAMESPACE\"\n \n \tns, found := os.LookupEnv(watchNamespaceEnvVar)\n \tif !found {\n \t\treturn \"\", fmt.Errorf(\"%s must be set\", watchNamespaceEnvVar)\n \t}\n \treturn ns, nil\n }\n ```\n\n - 其次修改NewManager的namespace的部分。\n\n - 还需要修改config/manager/manager.yaml\n\n ```yaml\n spec:\n containers:\n - command:\n - /manager\n args:\n - --enable-leader-election\n image: controller:latest\n name: manager\n resources:\n limits:\n cpu: 100m\n memory: 30Mi\n requests:\n cpu: 100m\n memory: 20Mi\n # 以下都是手动添加部分\n env:\t\n - name: WATCH_NAMESPACE\n valueFrom:\n - fieldRef:\n fieldPath: metadata.namespace\n ```\n\n - 以上manager修改 `WATCH_NAMESPACE` here will always be set as the namespace where the operator is deployed.\n\n6. 添加多个namespace的方法\n\n 1. 添加辅助函数\n\n \n\n"},{"title":"mac安装java","url":"/2021/01/14/homebrew安装java笔记/","content":"\n- brew cask search java\n- brew cask info java\n\n显示:\n\n```sh\njava: 14.0.2,12:205943a0976c4ed48cb16f1043c5c647\nhttps://openjdk.java.net/\nNot installed\nFrom: https://mirrors.tuna.tsinghua.edu.cn/git/homebrew/homebrew-cask.git\n==> Name\nOpenJDK Java Development Kit\n==> Description\nNone\n==> Artifacts\njdk-14.0.2.jdk -> /Library/Java/JavaVirtualMachines/openjdk-14.0.2.jdk (Generic Artifact)\n```\n\n\n\n- brew cask install java\n\n显示:\n\n```sh\n/usr/local/Homebrew/Library/Homebrew/brew.sh: line 559: /dev/null: Interrupted system call\n==> Downloading https://download.java.net/java/GA/jdk14.0.2/205943a0976c4ed48cb1\n 1.0%\n\n```\n\n查看是否安装成功\n\n\n\n```undefined\njava -version\n```\n\njdk安装路径\n\n\n\n```undefined\n/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre\n```\n\n"},{"title":"md5sum和md5工具的使用","url":"/2021/01/07/md5工具使用/","content":"\n## md5sum工具的使用(linux)\n\n```sh\n用法:md5sum [选项]... 
[文件]...\n显示或检查 MD5(128-bit) 校验和。\n若没有文件选项,或者文件处为\"-\",则从标准输入读取。\n\n -b, --binary 以二进制模式读取\n -c, --check 从文件中读取MD5 的校验值并予以检查\n --tag create a BSD-style checksum\n -t, --text 以纯文本模式读取(默认)\n Note: There is no difference between binary and text mode option on GNU system.\n\nThe following four options are useful only when verifying checksums:\n --quiet don't print OK for each successfully verified file\n --status don't output anything, status code shows success\n --strict exit non-zero for improperly formatted checksum lines\n -w, --warn warn about improperly formatted checksum lines\n\n --help 显示此帮助信息并退出\n --version 显示版本信息并退出\n\nThe sums are computed as described in RFC 1321. When checking, the input\nshould be a former output of this program. The default mode is to print\na line with checksum, a character indicating input mode ('*' for binary,\nspace for text), and name for each FILE.\n\n```\n\n从上面的内容可以看到md5sum确实强大。\n\n比如我想检验yum文件的md5码\n\n```sh\nmd5sum yum\naf3eaddb82d77ebb8eaa42e27f61b2ed yum\n```\n\n\n\n## md5工具的使用(mac)\n\n- Mac作为开发者的操作系统,要是没有md5sum那就....苹果公司分装的工具更加直接就叫md5,他就是用来检验md5码的。\n\n\n- 使用md5\n\n```sh\nmd5 prometheus-2.24.0.linux-arm64.tar.gz\n\nMD5 (prometheus-2.24.0.linux-arm64.tar.gz) = a400889be94e5beae64bcbdfa0896fee\n```\n\n以上就是使用说明。。。。。\n\n```sh\nmd5 --h\nmd5: illegal option -- -\nusage: md5 [-pqrtx] [-s string] [files ...]\n```\n\n![黑人问号](md5工具使用/黑人问号.jpeg)\n\n (WC这操作说明能更简陋点吗?)"},{"title":"kubeedge源码分析(一).md","url":"/2021/01/05/kubeedge源码分析(一)/","content":"\n# 源码分析\n\n---\n\n| 组件名 | 组件功能 |\n| --------- | ---------------------- |\n| edge_mesh | 服务网格解决方案 |\n| edge_site | 边缘独立集群解决方案 |\n| mappers | 物联网协议实现包 |\n| keadm | kubeedge的一键部署工具 |\n\n| 组件名 | 代码目录 | 组件启动入口 |\n| --------- | ----------------- | ------------------------------------------------------------ |\n| cloudcore | kubeedge/cloud | kubeedge/cloud/cmd/cloudcore/cloudcore.go,kubeedge/cloud/cmd/admission/admission.go,kubeedge/cloud/cmd/csidriver/csidriver.go |\n| edgecore | kubeedge/edge | kubeedge/edge/cmd/edgecore/edgecore.go |\n| edge_mesh | kubeedge/edgemesh | kubeedge/edgemesh/cmd/edgemesh.go |\n| edge_site | kubeedge/edgesite | kubeedge/edgesite/cmd/edgesite.go |\n\n## cloudcore源码分析\n\n```go\nfunc main() {\n\tcommand := app.NewCloudCoreCommand()\t//cobra调用新建函数\n\tlogs.InitLogs()\n\tdefer logs.FlushLogs()\n\n\tif err := command.Execute(); err != nil {\n\t\tos.Exit(1)\n\t}\n}\n-------------------------- app.NewCloudCoreCommand()\nfunc NewCloudCoreCommand() *cobra.Command {\n\topts := options.NewCloudCoreOptions()\n\tcmd := &cobra.Command{\n\t\tUse: \"cloudcore\",\n\t\tLong: ...,\n\t\tRun: func(cmd *cobra.Command, args []string) {\n\t\t\t...\n\t\t\tconfig, err := opts.Config()\n ...\n\t\t\tregisterModules(config)\t//注册cloudcore的功能模块\n\t\t\t...\n\t\t\tcore.Run()\t//启动所有注册模版\n\t\t},\n\t}\n\t...\n\treturn cmd\n}\n-------------------------- registerModules(config)\n// registerModules register all the modules started in cloudcore\nfunc registerModules(c *v1alpha1.CloudCoreConfig) {\n\tcloudhub.Register(c.Modules.CloudHub, c.KubeAPIConfig)\n\tedgecontroller.Register(c.Modules.EdgeController, c.KubeAPIConfig, \"\", false)\n\tdevicecontroller.Register(c.Modules.DeviceController, c.KubeAPIConfig)\n\tsynccontroller.Register(c.Modules.SyncController, c.KubeAPIConfig)\n\tcloudstream.Register(c.Modules.CloudStream)\n}\n-------------------------- core.Run()\n// Run starts the modules and in the end does module cleanup\nfunc Run() {\n\t// Address the module registration and start the core\n\tStartModules()\n\t// monitor system signal 
and shutdown gracefully\n\tGracefulShutdown()\n}\n\n```\n\n- 总结上面代码,通过cobra自动启动NewCloudCoreCommand,把所有模块注册到registerModules,使用Run函数启动\n\n## edgecore源码分析\n\n```go\nfunc main() {\n\tcommand := app.NewEdgeCoreCommand()\t//cobra调用新建函数\n\tlogs.InitLogs()\n\tdefer logs.FlushLogs()\n\n\tif err := command.Execute(); err != nil {\n\t\tos.Exit(1)\n\t}\n}\n-------------------------- app.NewEdgeCoreCommand()\nfunc NewEdgeCoreCommand() *cobra.Command {\n\topts := options.NewEdgeCoreOptions()\n\tcmd := &cobra.Command{\n\t\tUse: \"edgecore\",\n\t\tLong: ...,\n\t\tRun: func(cmd *cobra.Command, args []string) {\n\t\t\t...\n\t\t\tconfig, err := opts.Config()\n\t\t\t...\n\t\t\t// Check the running environment by default\n\t\t\tcheckEnv := os.Getenv(\"CHECK_EDGECORE_ENVIRONMENT\")\n\t\t\tif checkEnv != \"false\" {\n\t\t\t\t// Check running environment before run edge core\n\t\t\t\tif err := environmentCheck(); err != nil {\n\t\t\t\t\tklog.Fatal(fmt.Errorf(\"Failed to check the running environment: %v\", err))\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// get edge node local ip\n\t\t\tif config.Modules.Edged.NodeIP == \"\" {\n\t\t\t\thostnameOverride, err := os.Hostname()\n\t\t\t\tif err != nil {\n\t\t\t\t\thostnameOverride = constants.DefaultHostnameOverride\n\t\t\t\t}\n\t\t\t\tlocalIP, _ := util.GetLocalIP(hostnameOverride)\n\t\t\t\tconfig.Modules.Edged.NodeIP = localIP\n\t\t\t}\n\n\t\t\tregisterModules(config)\t//edgecore注册模块\n\t\t\t// start all modules\n\t\t\tcore.Run()\n\t\t},\n\t}\n\t...\n\treturn cmd\n}\n-------------------------- 同理注册模块 registerModules(config)\nfunc registerModules(c *v1alpha1.EdgeCoreConfig) {\n\tdevicetwin.Register(c.Modules.DeviceTwin, c.Modules.Edged.HostnameOverride)\n\tedged.Register(c.Modules.Edged)\n\tedgehub.Register(c.Modules.EdgeHub, c.Modules.Edged.HostnameOverride)\n\teventbus.Register(c.Modules.EventBus, c.Modules.Edged.HostnameOverride)\n\tedgemesh.Register(c.Modules.EdgeMesh)\n\tmetamanager.Register(c.Modules.MetaManager)\n\tservicebus.Register(c.Modules.ServiceBus)\n\tedgestream.Register(c.Modules.EdgeStream, c.Modules.Edged.HostnameOverride, c.Modules.Edged.NodeIP)\n\ttest.Register(c.Modules.DBTest)\n\t// Note: Need to put it to the end, and wait for all models to register before executing\n\tdbm.InitDBConfig(c.DataBase.DriverName, c.DataBase.AliasName, c.DataBase.DataSource)\n}\n-------------------------- core.Run()\n// Run starts the modules and in the end does module cleanup\nfunc Run() {\n\t// Address the module registration and start the core\n\tStartModules()\n\t// monitor system signal and shutdown gracefully\n\tGracefulShutdown()\n}\n\n\n```\n\nedgemesh.Register模块已经整合到registerModules里面去了\n\n## edgesite源码分析\n\n```go\nfunc main() {\n\tcommand := app.NewEdgeSiteCommand()\t//创建\n\tlogs.InitLogs()\n\tdefer logs.FlushLogs()\n\n\tif err := command.Execute(); err != nil {\n\t\tos.Exit(1)\n\t}\n}\n\nfunc NewEdgeSiteCommand() *cobra.Command {\n\topts := options.NewEdgeSiteOptions()\n\tcmd := &cobra.Command{\n\t\tUse: \"edgesite\",\n\t\tLong: ...,\n\t\tRun: func(cmd *cobra.Command, args []string) {\n\t\t\t...\n\t\t\tregisterModules(config)\t//注册\n\t\t\t// start all modules\n\t\t\tcore.Run()\t//激活\n\t\t},\n\t}\n\t...\n\treturn cmd\n}\n\nfunc registerModules(c *v1alpha1.EdgeSiteConfig) {\n\tedged.Register(c.Modules.Edged)\n\tedgecontroller.Register(c.Modules.EdgeController, c.KubeAPIConfig, c.Modules.Edged.HostnamgieOverride, true)\n\tmetamanager.Register(c.Modules.MetaManager)\n\t// Nodte: Need to put it to the end, and wait for all models to register before 
executing\n\tdbm.InitDBConfig(c.DataBase.DriverName, c.DataBase.AliasName, c.DataBase.DataSource)\n}\n```\n\n## 共用框架beehive\n\n- 以下注册运行模块(edgecore/cloudcore/edgemesh)的代码一致性很高,我就只分析cloudcore,另外两个的逻辑也是一样的。\n\n- 看看Register函数做了些什么\n\n ```go\n func Register(hub *v1alpha1.CloudHub, kubeAPIConfig *v1alpha1.KubeAPIConfig) {\n \thubconfig.InitConfigure(hub, kubeAPIConfig)\n \tcore.Register(newCloudHub(hub.Enable))\t//使用框架进行注册操作\n }\n -------------进到Register\n type Module interface {\n \tName() string\n \tGroup() string\n \tStart()\n \tEnable() bool\n }\n \n var (\n \t// Modules map\n \tmodules map[string]Module\n \tdisabledModules map[string]Module\n )\n \n func init() {\n \tmodules = make(map[string]Module)\n \tdisabledModules = make(map[string]Module)\n }\n \n // Register register module\n func Register(m Module) {\n \tif m.Enable() {\n \t\tmodules[m.Name()] = m\n \t\tklog.Infof(\"Module %v registered successfully\", m.Name())\n \t} else {\n \t\tdisabledModules[m.Name()] = m\n \t\tklog.Warningf(\"Module %v is disabled, do not register\", m.Name())\n \t}\n }\n ```\n\n- 可以看到modules是一个map,Module是一个接口,定义了Name、Group、Start、Enable几个方法。Register的作用就是把实现了Module接口的模块放到全局变量modules中。\n\n- 再看看core.Run()的方法。\n\n ```go\n // Run starts the modules and in the end does module cleanup\n func Run() {\n \t// Address the module registration and start the core\n \tStartModules()\n \t// monitor system signal and shutdown gracefully\n \tGracefulShutdown()\n }\n ```\n\n- 追到StartModules\n\n ```go\n // StartModules starts modules that are registered\n func StartModules() {\n \tbeehiveContext.InitContext(beehiveContext.MsgCtxTypeChannel)\n \n \tmodules := GetModules()\n \tfor name, module := range modules {\n \t\t//Init the module\n \t\tbeehiveContext.AddModule(name)\n \t\t//Assemble typeChannels for sendToGroup\n \t\tbeehiveContext.AddModuleGroup(name, module.Group())\n \t\tgo module.Start()\n \t\tklog.Infof(\"Starting module %v\", name)\n \t}\n }\n \n // GetModules gets modules map\n func GetModules() map[string]Module {\n \treturn modules\n }\n \n ```\n\n StartModules 功能:循环遍历已注册的module并加入到beehiveContext里面去,再以goroutine方式调用Start方法启动所有module插件。\n\n ```go\n // GracefulShutdown is if it gets the special signals it does modules cleanup\n func GracefulShutdown() {\n \tc := make(chan os.Signal)\n \tsignal.Notify(c, syscall.SIGINT, syscall.SIGHUP, syscall.SIGTERM,\n \t\tsyscall.SIGQUIT, syscall.SIGILL, syscall.SIGTRAP, syscall.SIGABRT)\n \tselect {\n \tcase s := <-c:\n \t\tklog.Infof(\"Get os signal %v\", s.String())\n \t\t//Cleanup each modules\n \t\tbeehiveContext.Cancel()\n \t\tmodules := GetModules()\n \t\tfor name, _ := range modules {\n \t\t\tklog.Infof(\"Cleanup module %v\", name)\n \t\t\tbeehiveContext.Cleanup(name)\n \t\t}\n \t}\n }\n ```\n\n GracefulShutdown功能:收到上述signal后,取消beehiveContext并清理所有modules。"},{"title":"github图片如何显示出来","url":"/2021/01/05/github图片不能显示/","content":"\n- 对于我这个一天不登github就心慌的人来说,github图片不能显示,是一个大问题。\n\n- 主要原因是dns污染。\n\n- 解决方法:配置本地hosts文件,解决中国区dns污染问题\n\n- mac电脑\n\n - 终端输入sudo vi /etc/hosts。把下面内容添加进去\n\n ```sh\n # GitHub Start\n 192.30.253.112 github.com\n 192.30.253.119 gist.github.com\n 151.101.184.133 assets-cdn.github.com\n 151.101.184.133 raw.githubusercontent.com\n 151.101.184.133 gist.githubusercontent.com\n 151.101.184.133 cloud.githubusercontent.com\n 151.101.184.133 camo.githubusercontent.com\n 151.101.184.133 avatars0.githubusercontent.com\n 151.101.184.133 avatars1.githubusercontent.com\n 151.101.184.133 avatars2.githubusercontent.com\n 151.101.184.133 avatars3.githubusercontent.com\n 151.101.184.133 
avatars4.githubusercontent.com\n 151.101.184.133 avatars5.githubusercontent.com\n 151.101.184.133 avatars6.githubusercontent.com\n 151.101.184.133 avatars7.githubusercontent.com\n 151.101.184.133 avatars8.githubusercontent.com\n # GitHub End\n ```\n\n - 保存退出,登陆github刷新就可以显示图片了\n\n"},{"title":"kubeedge 核心架构组件详解","url":"/2021/01/05/kubeedge笔记详解/","content":"\n## kubeedge分为两个可执行程序(cloudcore/edgecore)- 8个组件\n\n### cloudcore:\n 1. CloudHub:云中的通信接口模块。\n 2. EdgeController:管理Edge节点。\n 3. devicecontroller 负责设备管理。\n\n### edgecore:\n 1. Edged:在边缘管理容器化的应用程序。\n 2. EdgeHub:Edge上的通信接口模块。\n 3. EventBus:使用MQTT处理内部边缘通信。\n 4. DeviceTwin:它是用于处理设备元数据的设备的软件镜像。\n 5. MetaManager:它管理边缘节点上的元数据。\n\n - Edged详解:\n 1. 和kubelet的功能相似。从metamanager接收和处理pod\n 2. 保留config map和secrets的缓存\n 3. 其他:\n 1. CRI边缘化\n 2. container/images GC\n 3. volume管理\n\n### 各模块详解\n - eventbus\n 1. 主要用来发送接收mqtt的消息接口(如蓝牙设备等等)\n 2. 三种模式 internalMqttMode/externalMqttMode/bothMqttMode\n \n - metamanager\n 1. MetaManager是edged和edgehub之间的消息处理器。它还负责将元数据存储到轻量级数据库(SQLite)或从中检索元数据。\n 2. 因为连接SQLite,所以能进行CRUD操作\n \n - Edgehub\n 1. Edge Hub使用Web socket或QUIC协议和CloudHub组件进行交互。同步云端更新和报告边缘端主机状态\n \n - DeviceTwin\n 1. 负责存储设备状态,处理设备属性,处理设备孪生操作,在边缘设备和边缘节点之间创建成员资格,将设备状态同步到云以及在边缘和云之间同步设备孪生信息。它还为应用程序提供查询接口。\n 2. 由4个子模块组成:membership,communication,device和device twin\n\n---\n 以下为云上组件\n ---\n\n---\n\n - Edge Controller\n 1. EdgeController是Kubernetes Api服务器和Edgecore之间的桥梁\n - CloudHub\n 1. CloudHub是cloudcore的一个模块,是Controller和Edge端之间的中介。它同时支持基于Web套接字的连接以及QUIC协议访问。\n 2. 功能:启用边缘与控制器之间的通信\n- Device Controller\n 1. k8s CRD来描述设备metadata/status ,devicecontroller在云和边缘之间同步,有两个goroutines: `upstream controller`/downstream controller\n\n\n\n\n\n\n\n"},{"title":"dockerfile","url":"/2021/01/04/dockerfile/","content":"\n## dockerfile构建镜像\n```dockerfile\nFROM alpine:latest\n\nADD etcd /usr/local/bin/\nADD etcdctl /usr/local/bin/\nRUN mkdir -p /var/etcd/\nRUN mkdir -p /var/lib/etcd/\n\n# Alpine Linux doesn't use pam, which means that there is no /etc/nsswitch.conf,\n# but Golang relies on /etc/nsswitch.conf to check the order of DNS resolving\n# (see https://github.com/golang/go/commit/9dee7771f561cf6aee081c0af6658cc81fac3918)\n# To fix this we just create /etc/nsswitch.conf and add the following line:\nRUN echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf\n\nEXPOSE 2379 2380\n\n# Define default command.\nCMD [\"/usr/local/bin/etcd\"]\n```\n\n## 构建镜像\n\n- docker build -t etcd .\n\n\n\n"},{"title":"docker启动etcd","url":"/2021/01/03/docker启动etcd/","content":"\n## 启动服务\n\ndocker run \\\n-p 2379:2379 \\\n-p 2380:2380 \\\n--name etcd-gcr-v3.4.0 \\\nquay.io/coreos/etcd:v3.4.0 \\\n/usr/local/bin/etcd \\\n--name s1 \\\n--data-dir /etcd-data \\\n--listen-client-urls http://0.0.0.0:2379 \\\n--advertise-client-urls http://0.0.0.0:2379 \\\n--listen-peer-urls http://0.0.0.0:2380 \\\n--initial-advertise-peer-urls http://0.0.0.0:2380 \\\n--initial-cluster s1=http://0.0.0.0:2380 \\\n--initial-cluster-token tkn \\\n--initial-cluster-state new \\\n--log-level info \\\n--logger zap \\\n--log-outputs stderr\n\n- 我遇到的坑\n\n默认启动都是localhost,结果外部访问不能访问\n"},{"title":"go-zero脚手架搭建微服务笔记","url":"/2021/01/03/go-zero微服务搭建笔记/","content":"\n## 准备环境\n\n- 安装etcd mysql redis\n\n我都是放在docker里的,如下图所示\n![docker启动状况](go-zero微服务搭建笔记/docker准备容器.png)\n\n\n- 安装protoc-gen-go 和 goctl工具\n\ngo get -u github.com/golang/protobuf/protoc-gen-go\ngo get -u github.com/tal-tech/go-zero/tools/goctl\n\n## 生成目录\n\n- goctl api -o 
bookstore.api\n```\nDone.\n```\n显示上面提示说明成功生成\n\n![生成api文件](go-zero微服务搭建笔记/自动生成的api文件.png)\n\n编写api文档\n```\ntype (\n addReq {\n book string `form:\"book\"`\n price int64 `form:\"price\"`\n }\n \n addResp {\n ok bool `json:\"ok\"`\n }\n)\n\ntype (\n checkReq {\n book string `form:\"book\"`\n }\n \n checkResp {\n found bool `json:\"found\"`\n price int64 `json:\"price\"`\n }\n)\n\nservice bookstore-api {\n @handler AddHandler\n get /add (addReq) returns (addResp)\n \n @handler CheckHandler\n get /check (checkReq) returns (checkResp)\n}\n```\n编写完上面内容,启动生成命令\n\n- goctl api go -api bookstore.api -dir .\n\n![生成配置文件](go-zero微服务搭建笔记/自动生成的api文件.png)\n\n## 启动测试服务\n\n- go run bookstore.go -f etc/bookstore-api.yaml\n\n![apiserver启动](go-zero微服务搭建笔记/apiserver启动.png)\n\n访问结果\n```\ncurl -i \"http://localhost:8888/check?book=go-zero\"\nHTTP/1.1 200 OK\nContent-Type: application/json\nDate: Sun, 03 Jan 2021 07:46:30 GMT\nContent-Length: 25\n{\"found\":false,\"price\":0}\n```\n\n## 编写rpc服务(ADD服务)\n\n- 创建rpc目录,进入目录\n- goctl rpc template -o add.proto 生成模版\n\n在文件夹中编写add.proto\n\n```\nsyntax = \"proto3\";\n\npackage add;\n\nmessage addReq {\n string book = 1;\n int64 price = 2;\n}\n\nmessage addResp {\n bool ok = 1;\n}\n\nservice adder {\n rpc add(addReq) returns(addResp);\n}\n```\n\n- goctl rpc proto -src add.proto -dir . 生成rpc服务\n\n![rpc生成的目录结构](/go-zero微服务搭建笔记/rpc生成的目录结构.png)\n\n- 运行服务 go run add.go -f etc/add.yaml\n\n此处会去连接etcd的端口(如果没有etcd的服务就会在这里报错),具体配置文件在rpc/etc/add.yaml\n\n## 编写rpc服务(CHECK服务)同上\n\n## 配置 api server\n\n### bookstore-api.yaml把rpc服务写入\n\n```\nAdd:\n Etcd:\n Hosts:\n - localhost:2379\n Key: add.rpc\nCheck:\n Etcd:\n Hosts:\n - localhost:2379\n Key: check.rpc\n```\n\n### 修改internal/config/config.go如下,增加add/check服务依赖\n\n```\n\ntype Config struct {\n rest.RestConf\n Add zrpc.RpcClientConf // 手动代码\n Check zrpc.RpcClientConf // 手动代码\n}\n\n```\n\n### 修改internal/svc/servicecontext.go\n\n```\ntype ServiceContext struct {\n Config config.Config\n Adder adder.Adder // 手动代码\n Checker checker.Checker // 手动代码\n}\n\nfunc NewServiceContext(c config.Config) *ServiceContext {\n return &ServiceContext{\n Config: c,\n Adder: adder.NewAdder(zrpc.MustNewClient(c.Add)), // 手动代码\n Checker: checker.NewChecker(zrpc.MustNewClient(c.Check)), // 手动代码\n }\n}\n```\n\n### 修改internal/logic/addlogic.go里的Add\n```\nfunc (l *AddLogic) Add(req types.AddReq) (*types.AddResp, error) {\n // 手动代码开始\n resp, err := l.svcCtx.Adder.Add(l.ctx, &adder.AddReq{\n Book: req.Book,\n Price: req.Price,\n })\n if err != nil {\n return nil, err\n }\n\n return &types.AddResp{\n Ok: resp.Ok,\n }, nil\n // 手动代码结束\n}\n```\n\n### 同理修改internal/logic/checklogic.go里的Check\n```\nfunc (l *CheckLogic) Check(req types.CheckReq) (*types.CheckResp, error) {\n // 手动代码开始\n resp, err := l.svcCtx.Checker.Check(l.ctx, &checker.CheckReq{\n Book: req.Book,\n })\n if err != nil {\n logx.Error(err)\n return &types.CheckResp{}, err\n }\n\n return &types.CheckResp{\n Found: resp.Found,\n Price: resp.Price,\n }, nil\n // 手动代码结束\n}\n```\n\n## 定义数据库表结构\n\n```sql\nCREATE TABLE `book`\n(\n `book` varchar(255) NOT NULL COMMENT 'book name',\n `price` int NOT NULL COMMENT 'book price',\n PRIMARY KEY(`book`)\n) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;\n```\n\n- 连接上mysql数据库,创建gozero数据库\n\n![创建数据库](go-zero微服务搭建笔记/创建数据库.png)\n\n- idea连接数据库 参考连接:https://juejin.cn/post/6844904036802494477\n\n![idea连接mysql数据库配置](go-zero微服务搭建笔记/idea连接mysql数据库配置.png)\n\n- 使用idea命令执行sql指令。\n\n![sql命令控制mysql](go-zero微服务搭建笔记/sql命令控制mysql.png)\n\n### 生成redis cache\n\n```sh\ngoctl model mysql ddl -c -src 
book.sql -dir .\n```\n\n## 修改rpc代码调用crud cache\n\n- 在roc/add/etc/add.yaml和roc/check/etc/check.yaml加入下面的代码\n\n```yaml\nDataSource: root:@tcp(localhost:3306)/gozero\nTable: book\nCache:\n - Host: localhost:6379\n```\n\n- 增加了mysql和redis cache配置\n\n- 修改`rpc/add/internal/config.go`和`rpc/check/internal/config.go`,如下:\n\n ```go\n type Config struct {\n zrpc.RpcServerConf\n DataSource string // 手动代码\n Cache cache.CacheConf // 手动代码\n }\n ```\n\n\n\n修改`rpc/add/internal/svc/servicecontext.go`和`rpc/check/internal/svc/servicecontext.go`,如下:\n\n```go\ntype ServiceContext struct {\n c config.Config\n Model model.BookModel // 手动代码\n}\n\nfunc NewServiceContext(c config.Config) *ServiceContext {\n return &ServiceContext{\n c: c,\n Model: model.NewBookModel(sqlx.NewMysql(c.DataSource), c.Cache), // 手动代码\n }\n}\n```\n\n修改`rpc/add/internal/logic/addlogic.go`,如下(逻辑代码)\n\n```go\nfunc (l *AddLogic) Add(in *add.AddReq) (*add.AddResp, error) {\n // 手动代码开始\n _, err := l.svcCtx.Model.Insert(model.Book{\n Book: in.Book,\n Price: in.Price,\n })\n if err != nil {\n return nil, err\n }\n\n return &add.AddResp{\n Ok: true,\n }, nil\n // 手动代码结束\n}\n```\n\n修改`rpc/check/internal/logic/checklogic.go`,如下:\n\n```go\nfunc (l *CheckLogic) Check(in *check.CheckReq) (*check.CheckResp, error) {\n // 手动代码开始\n resp, err := l.svcCtx.Model.FindOne(in.Book)\n if err != nil {\n return nil,err\n }\n\n return &check.CheckResp{\n Found: true,\n Price: resp.Price,\n }, nil\n // 手动代码结束\n}\n```\n\n- 调用演示\n\n 1. 启动所有rpc服务\n 2. 启动api服务\n 3. 访问api服务\n\n ```sh\n curl -i \"http://localhost:8888/add?book=go-zero&price=10\"\n ```\n\n 如下图显示运行\n\n ![api访问正常1](go-zero微服务搭建笔记/api访问正常1.png)\n\n 此时对于的rpc服务和api日志都会有响应\n\n- 经过测试得,添加请求不会存入redis,读取请求会写入redis中。\n\n## benchmark抗压测试\n\n- 首先调整mysql的句柄数 \n\n```sh\nulimit -n 100000\n```\n\n- 使用wrk进行抗压测试\n\n```sh\nwrk -t10 -c1000 -d40s --latency \"http://localhost:8888/check?book=go-zero\"\n```\n\n测试结果图如下:关于wrk参考链接(https://www.cnblogs.com/xinzhao/p/6233009.html)\n\n![抗压测试结果](go-zero微服务搭建笔记/抗压测试结果.png)\n\n"},{"title":"markdown笔记","url":"/2020/12/30/markdown笔记/","content":"\n## 插图三种方式\n\n```\n基础格式:\n![Alt text](图片链接 \"optional title\")\n\n方法一:插入本地图片\n![avatar](/home/picture/1.png)\n\n方法二:插入网络图片\n![avatar](http://baidu.com/pic/doge.png)\n\n方法三:把图片存入markdown文件\n![avatar][base64str]\n[base64str]:data:image/png;base64,iVBORw0......\n\n```\n\n"},{"title":"基础命令日常总结","url":"/2020/12/29/linux基础命令/","content":"\n## netstat -apn | grep 8080 或者 lsof -i:8080\n\n 根据端口查PID\n\n## netstat -apn | grep 21299\n\n 根据PID查端口\n\n## kill -9 PID\n\n 杀死PID对应的端口\n\n## nohup ./main > /dev/null 2>&1 &\n\n 后端启动main服务, 并且返回PID号\n\n```sh\n例如:\n参考链接: https://blog.csdn.net/m0_46657040/article/details/109611803\n[root@k8s-master-81 harmoryedge]# nohup ./main > /dev/null 2>&1 &\n[1] 21299\n```\n## cd /proc/21299 && ll\n\n 通过PID号查询服务路径等信息\n\n## npm run start \n\n 启动前端start程序(react或者vue)"},{"title":"YOU-GET笔记","url":"/2020/12/25/you-get笔记/","content":"\n- 参考连接 https://github.com/soimort/you-get\n\n- 安装: brew install you-get\n\n- 使用\n - you-get 'https://www.youtube.com/watch?v=jNQXAC9IVRw'\n\n- 查看详细信息\n - you-get -i 'https://www.youtube.com/watch?v=jNQXAC9IVRw'\n\n"},{"title":"go测试章节","url":"/2020/12/24/gotest测试工具笔记/","content":"## gotest文本如何书写\n\n```go\n//表格驱动测试\nfunc TestXXX(t * testint.T){\n // 定义输入输出\n tests := []struct{\n in int\n out int\n }\n}{\n // 测试数据\n {1, 1},\n {2, 2},\n ...\n}\nfor _, tt := range tests {\n //通过函数执行测试用例\n actual := 需要测试的函数名(tt.in) \n if actual != tt.out {\n //输出不匹配的信息\n //errof输出\n t.Errof(t.Errorf(\"got %d for input %s; expected 
%d\", actual, tt.in, tt.out))\n //Skipf输出\n t.Skipf(t.Errorf(\"got %d for input %s; expected %d\", actual, tt.in, tt.out))\n //logf输出\n t.Logf(t.Errorf(\"got %d for input %s; expected %d\", actual, tt.in, tt.out))\n }\n}\n```\n- 表格驱动测试语句(后面通过正则匹配)\n - go test -v -timeout 30s . -run ^TestXXX$\n\n# go test 和go tool 性能测试\n(具体可以通过go tool cover 查询具体命令)\n\n- go test -coverprofile=cover.out\n\n输出cpu覆盖率\n\n- go tool cover -html=cover.out\n\nhtml显示cpu数据\n\n- go test -bench xxx.go\n \n 目标文件bench性能测试,看花的时间\n\n\n# go pprof测试\n\n- go test help\n\n help提示信息\n\n- go test -bench nonrepeatingsubstr -cpuprofile cpu.out \n\n 生成目标文件的cpu使用情况\n\n- go tool pprof cpu.out\n - help\n - web\n pprof交互式显示\n\n# godoc 文档\n\n- godoc --help\n godoc使用文档\n\n- godoc -http :6060\n 服务器形式打开go参考手册\n\n\n\n\n\n\n"},{"title":"初看组件 7大组件+1个运行时(master 5 + node 2)","url":"/2020/12/18/kubernetes组件详细笔记/","content":"\n### kube-apiserver(1)\n\nAPI 服务器是 Kubernetes 控制面的组件, 该组件公开了 Kubernetes API。 API 服务器是 Kubernetes 控制面的前端。\n\nKubernetes API 服务器的主要实现是 kube-apiserver。 kube-apiserver 设计上考虑了水平伸缩,也就是说,它可通过部署多个实例进行伸缩。 你可以运行 kube-apiserver 的多个实例,并在这些实例之间平衡流量。\n\n### etcd (2)\n\netcd 是兼具一致性和高可用性的键值数据库,可以作为保存 Kubernetes 所有集群数据的后台数据库。\n\n您的 Kubernetes 集群的 etcd 数据库通常需要有个备份计划。\n\n要了解 etcd 更深层次的信息,请参考 etcd 文档。\n\n### kube-scheduler(3)\n主节点上的组件,该组件监视那些新创建的未指定运行节点的 Pod,并选择节点让 Pod 在上面运行。\n\n调度决策考虑的因素包括单个 Pod 和 Pod 集合的资源需求、硬件/软件/策略约束、亲和性和反亲和性规范、数据位置、工作负载间的干扰和最后时限。\n\n### kube-controller-manager(4)\n在主节点上运行 控制器 的组件。\n\n从逻辑上讲,每个控制器都是一个单独的进程, 但是为了降低复杂性,它们都被编译到同一个可执行文件,并在一个进程中运行。\n\n这些控制器包括:\n\n节点控制器(Node Controller): 负责在节点出现故障时进行通知和响应。\n副本控制器(Replication Controller): 负责为系统中的每个副本控制器对象维护正确数量的 Pod。\n端点控制器(Endpoints Controller): 填充端点(Endpoints)对象(即加入 Service 与 Pod)。\n服务帐户和令牌控制器(Service Account & Token Controllers): 为新的命名空间创建默认帐户和 API 访问令牌.\n\n### cloud-controller-manager(5)\n云控制器管理器是指嵌入特定云的控制逻辑的 控制平面组件。 云控制器管理器允许您链接聚合到云提供商的应用编程接口中, 并分离出相互作用的组件与您的集群交互的组件。\ncloud-controller-manager 仅运行特定于云平台的控制回路。 如果你在自己的环境中运行 Kubernetes,或者在本地计算机中运行学习环境, 所部署的环境中不需要云控制器管理器。\n\n与 kube-controller-manager 类似,cloud-controller-manager 将若干逻辑上独立的 控制回路组合到同一个可执行文件中,供你以同一进程的方式运行。 你可以对其执行水平扩容(运行不止一个副本)以提升性能或者增强容错能力。\n\n下面的控制器都包含对云平台驱动的依赖:\n\n节点控制器(Node Controller): 用于在节点终止响应后检查云提供商以确定节点是否已被删除\n路由控制器(Route Controller): 用于在底层云基础架构中设置路由\n服务控制器(Service Controller): 用于创建、更新和删除云提供商负载均衡器\n\n## Node 组件 \n--- \n\n### kubelet(6)\n一个在集群中每个节点上运行的代理。 它保证容器都运行在 Pod 中。\n\nkubelet 接收一组通过各类机制提供给它的 PodSpecs,确保这些 PodSpecs 中描述的容器处于运行状态且健康。 kubelet 不会管理不是由 Kubernetes 创建的容器。\n\n### kube-proxy (7)\nkube-proxy 是集群中每个节点上运行的网络代理, 实现 Kubernetes 服务(Service) 概念的一部分。\n\nkube-proxy 维护节点上的网络规则。这些网络规则允许从集群内部或外部的网络会话与 Pod 进行网络通信。\n\n如果操作系统提供了数据包过滤层并可用的话,kube-proxy 会通过它来实现网络规则。否则, kube-proxy 仅转发流量本身。\n\n### 容器运行时(Container Runtime)(8)\n容器运行环境是负责运行容器的软件。\n\nKubernetes 支持多个容器运行环境: Docker、 containerd、CRI-O 以及任何实现 Kubernetes CRI (容器运行环境接口)。\n\n---\n## 插件(Addons)\n---\n\n### cattle\n有集群 DNS \n\n### Dashboard\nweb 界面\n\n### prometheus\n容器资源监控\n\n### EFK\n日志监控\n\n---\n\n\n# node节点开始\n\n- 节点上的组件包括 kubelet、 容器运行时以及 kube-proxy。\n- 节点于api服务器交互,通过节点上kubectl自注册入集群\n- 子注册参数: 节点生成完成后通过kubeadm join注册\n```json\n\"conditions\": [\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"reason\": \"KubeletReady\",\n \"message\": \"kubelet is posting ready status\",\n \"lastHeartbeatTime\": \"2019-06-05T18:38:35Z\",\n \"lastTransitionTime\": \"2019-06-05T11:41:27Z\"\n }\n]\n```\n- Ready 条件处于 Unknown 或者 False 状态的时间超过了 pod-eviction-timeout, 默认是5分钟,就会被驱逐。\n"},{"title":"连接kubernetes","url":"/2020/12/10/kubernetesAPI调用/","content":"\n ## client-go \n \n 
通过client-go获取kubeconfig访问集群\n\n 参考文献:\n https://blog.csdn.net/qq_37950254/article/details/89603207\n\n https://github.com/kubernetes/client-go/blob/master/examples/out-of-cluster-client-configuration/main.go\n\n https://kubernetes.io/zh/docs/tasks/administer-cluster/access-cluster-api/\n\n https://my.oschina.net/u/4382516/blog/3303251\n\n\n普通调用的方法:\n```sh\n$ kubectl create sa my-sa\n$ kubectl create clusterrolebinding my-clusterrolebinding --clusterrole=cluster-admin --serviceaccount=default:my-sa\n$ export TOKEN=$(kubectl get secret $(kubectl get secret | grep my-sa | awk '{print $1}') -o jsonpath={.data.token} | base64 -d)\n```\n\n\n## k8s开启http端口\n\n- 要访问apiserver的http 8080端口,需要开启apiserver pod的不安全服务端口\n - vim /etc/kubernetes/manifests/kube-apiserver.yaml \n - 修改 --insecure-port=8080\n\n## 定义一个pod最起码的配置\n\n```yaml\napiVersion: v1\nkind: Pod (可填 Deployment、Job、Ingress、Service)\nmetadata:\n name: pod1\n namespace: namespace1\n labels:\n mycustome.pod.label: customePodLabel\n spec:\n containers:\n - name: container1\n image: xxxdocker镜像\n imagePullPolicy: IfNotPresent\n command: 【开启后执行的第一句脚本语言】.sh\n workingDir: xxx路径[创建工作的内容在这个docker路径下]\n volumeMounts:\n - name: 挂载名称1\n mountPath: 本地路径\n ports: \n - name: portname1\n hostPort: 本地端口\n env: \n - name: envname1\n value: 环境变量值\n resources:\n limits:\n cpu: 250m\n memory: 100kb \n secret:\n secretName: secretName1\n items:\n - key: k1\n path: 容器secret路径\n configMap:\n name: CM1\n items:\n - key: cm1k1\n path: 容器configMap路径\n```\n- 综上,定义一个pod需要\n 1. apiversion/\n 2. kind/\n 3. metadata/\n 1. podname/\n 2. namespaces/\n 3. labels/\n 1. containername/\n 2. containerimages/ imagePullPolicy/\n 3. workingdir/\n 4. volumeMounts/\n 1. name/\n 2. mountPath/ \n 5. ports/\n 1. name/\n 2. hostPort/\n 6. env/\n 1. name\n 2. value\n 7. resources/\n 1. limits/\n 1. cpu/\n 2. memory/\n 8. 
secret/\n- 思路:定义pod -> container -> image/port/volume/env/resource\n\n## 定义一个deployment最起码的配置\n\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: 管理指定pod的deployment\n namespace: my-custome-namespaces\nspec:\n replicas: 数量\n selector:\n matchLabels: #通过label选择pod\n mycustome.pod.label: customePodLabel #label选择器\n template: #模版\n metadata: \n labels:\n mycustome.pod.label: customePodLabel #选择该label的pod\n spec:\n containers: #期望创建的容器\n - name: nginx\n image: nginx:1.10\n ports:\n - containerPort: 80\n```\n\n- kubectl edit deploy/custome-deployment-nginx 修改deployment\n- kubectl get deployment --show-labels 展示label\n- kubectl rollout status deploy/custome-deployment-nginx 查看发布状态\n- kubectl rollout history deploy/custome-deployment-nginx 查看历史状态\n- kubectl rollout undo deploy/nginx-deployment --to-revision=1 回滚到指定版本\n- kubectl set image deploy/custome-deployment-nginx nginx=nginx:1.11 更新镜像\n- kubectl scale --replicas=10 deployment/custome-deployment-nginx 扩容\n\n## 定义一个service最起码的配置\n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: nginx-service\n labels:\n myCustomePodLabel: customePodLabel\nspec:\n ports:\n - port: 88\n targetPort: 80\n selector:\n myCustomePodLabel: customePodLabel\n```\n\n- kubectl edit svc/nginx-service 修改svc配置\n\n\n## k8s调度器,预选策略和优选函数(https://www.cnblogs.com/klvchen/p/10024846.html)\n\n- 通过手动去实现\n\n- 需要给指定的node打上指定公司的label标签,说明属于哪个公司\n```sh\nkubectl label nodes k8s-master02 type=company02\n```\n\n- 把pod或者deployment以yaml方式输出\n```sh\nkubectl get pod zeus-86784767b5-j7hqh -o=yaml\n```\n\n## pod\n\n- 状态\n 容器的三种状态:Waiting(等待)、Running(运行中)和 Terminated(已终止)\n\n- 探针的类型\n ExecAction(命令执行)、TCPSocketAction、HTTPGetAction\n\n- 两种探针\n 存活探针、就绪探针\n\n- pause容器功能\n 1. 它提供整个pod的Linux命名空间的基础。\n 2. 启用PID命名空间,它在每个pod中都作为PID为1的进程,并回收僵尸进程\n\n\n\n\n"}]