#data-node2
computingpostcom · 1 year
Text
MinIO is a high-performance, S3-compliant distributed object storage system. It is the only 100% open-source storage tool available on every public and private cloud, every Kubernetes distribution, and the edge. MinIO delivers maximum performance while running on minimal CPU and memory resources. It is dominant in traditional object storage use cases such as secondary storage, archiving, and data recovery, and one of the features that makes it suitable for this is its ability to handle the demands of machine learning, cloud-native application workloads, and analytics. Other notable MinIO features are:

Identity Management – it supports the most advanced standards in identity management and can integrate with OpenID Connect-compatible providers as well as key external IDP vendors.
Monitoring – it offers detailed performance analysis with metrics and per-operation logging.
Encryption – it supports multiple sophisticated server-side encryption schemes that protect data integrity, confidentiality, and authenticity with negligible performance overhead.
High performance – it is the fastest object storage available, with GET/PUT throughput of 325 and 165 GiB/sec respectively on just 32 nodes of NVMe.
Architecture – MinIO is cloud native and lightweight, and can run as containers managed by external orchestration services such as Kubernetes. Because it runs efficiently on low CPU and memory resources, a large number of tenants can be co-hosted on shared hardware.
Data lifecycle management and tiering – protects data within and across both public and private clouds.
Continuous replication – designed for large-scale, cross-data-center deployments, addressing the limits of traditional replication approaches that do not scale effectively beyond a few hundred TiB.

By following this guide, you should be able to deploy and manage MinIO storage clusters on Kubernetes. The guide requires an existing Kubernetes cluster. Below are dedicated guides to help you set one up:

Install Kubernetes Cluster on Ubuntu with kubeadm
Deploy Kubernetes Cluster on Linux With k0s
Install Kubernetes Cluster on Rocky Linux 8 with Kubeadm & CRI-O
Run Kubernetes on Debian with Minikube
Install Kubernetes Cluster on Ubuntu using K3s

For this guide, I have configured three worker nodes and a single control plane in my cluster.

# kubectl get nodes
NAME      STATUS   ROLES           AGE   VERSION
master1   Ready    control-plane   3m    v1.23.1+k0s
node1     Ready    <none>          60s   v1.23.1+k0s
node2     Ready    <none>          60s   v1.23.1+k0s
node3     Ready    <none>          60s   v1.23.1+k0s

Step 1 – Create a StorageClass with WaitForFirstConsumer Binding Mode

The WaitForFirstConsumer binding mode delays binding of a persistent volume to its claim until a pod that uses the claim is scheduled. Create the StorageClass as below.

vim storageClass.yml

In the file, add the lines below.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Create the StorageClass.

$ kubectl create -f storageClass.yml
storageclass.storage.k8s.io/my-local-storage created

Step 2 – Create a Local Persistent Volume

For this guide, we will create a persistent volume on one of the local machines (nodes) using the storage class above.
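Before creating the volume, it can be worth confirming that the StorageClass registered with the expected settings – this quick check is my addition, not part of the original walkthrough:

kubectl get storageclass my-local-storage -o yaml | grep -E 'provisioner|volumeBindingMode'

The output should show provisioner: kubernetes.io/no-provisioner and volumeBindingMode: WaitForFirstConsumer. Since there is no dynamic provisioner, the persistent volume in the next step has to be created by hand.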
The persistent volume will be created as below:

vim minio-pv.yml

Add the lines below to the file.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: my-local-storage
  local:
    path: /mnt/disk/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1

Here I have created a persistent volume on node1. Go to node1 and create the backing directory as below.

DIRNAME="vol1"
sudo mkdir -p /mnt/disk/$DIRNAME
sudo chcon -Rt svirt_sandbox_file_t /mnt/disk/$DIRNAME
sudo chmod 777 /mnt/disk/$DIRNAME

Now, on the master node, create the persistent volume as below.

# kubectl create -f minio-pv.yml

Step 3 – Create a Persistent Volume Claim

Now we will create a persistent volume claim that references the created StorageClass.

vim minio-pvc.yml

The file will contain the information below.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # This name uniquely identifies the PVC. This is used in the deployment.
  name: minio-pvc-claim
spec:
  # Read more about access modes here: http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes
  storageClassName: my-local-storage
  accessModes:
    # The volume is mounted as read-write by multiple nodes
    - ReadWriteMany
  resources:
    # This is the request for storage. Should be available in the cluster.
    requests:
      storage: 10Gi

Create the PVC.

kubectl create -f minio-pvc.yml

At this point, the PV should be available as below:

# kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS       REASON   AGE
my-local-pv   4Gi        RWX            Retain           Available           my-local-storage            96s

Step 4 – Create the MinIO Deployment

This is the main deployment; it uses the MinIO image and the PVC created above. Create the file as below:

vim Minio-Dep.yml

The file will have the content below:

apiVersion: apps/v1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio
spec:
  selector:
    matchLabels:
      app: minio # has to match .spec.template.metadata.labels
  strategy:
    # Specifies the strategy used to replace old Pods by new ones
    # Refer: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
    type: Recreate
  template:
    metadata:
      labels:
        # This label is used as a selector in Service definition
        app: minio
    spec:
      # Volumes used by this deployment
      volumes:
        - name: data
          # This volume is based on PVC
          persistentVolumeClaim:
            # Name of the PVC created earlier
            claimName: minio-pvc-claim
      containers:
        - name: minio
          # Volume mounts for this container
          volumeMounts:
            # Volume 'data' is mounted to path '/data'
            - name: data
              mountPath: /data
          # Pulls the latest MinIO image from Docker Hub
          image: minio/minio
          args:
            - server
            - /data
          env:
            # MinIO access key and secret key
            - name: MINIO_ACCESS_KEY
              value: "minio"
            - name: MINIO_SECRET_KEY
              value: "minio123"
          ports:
            - containerPort: 9000
          # Readiness probe detects situations when MinIO server instance
          # is not ready to accept traffic. Kubernetes doesn't forward
          # traffic to the pod while readiness checks fail.
          readinessProbe:
            httpGet:
              path: /minio/health/ready
              port: 9000
            initialDelaySeconds: 120
            periodSeconds: 20
          # Liveness probe detects situations where MinIO server instance
          # is not working properly and needs restart. Kubernetes automatically
          # restarts the pods if liveness checks fail.
          livenessProbe:
            httpGet:
              path: /minio/health/live
              port: 9000
            initialDelaySeconds: 120
            periodSeconds: 20

Apply the configuration file.

kubectl create -f Minio-Dep.yml

Verify that the pod is running:

# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
minio-7b555749d4-cdj47   1/1     Running   0          22s
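Note that the Deployment above ships the credentials as plain-text environment variables, and MINIO_ACCESS_KEY/MINIO_SECRET_KEY are the older variable names; recent MinIO releases expect MINIO_ROOT_USER and MINIO_ROOT_PASSWORD instead. A minimal sketch of keeping the credentials in a Kubernetes Secret – the Secret name and values here are illustrative, not from the original setup:

apiVersion: v1
kind: Secret
metadata:
  # illustrative name, referenced from the Deployment below
  name: minio-credentials
type: Opaque
stringData:
  MINIO_ROOT_USER: "minio"
  MINIO_ROOT_PASSWORD: "minio123"

With that in place, the env: block of the container could be replaced with:

          envFrom:
            - secretRef:
                name: minio-credentials

This keeps the credentials out of the Deployment manifest and lets them be rotated independently of it.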
Furthermore, the PV should be bound at this point.

# kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS       REASON   AGE
my-local-pv   4Gi        RWX            Retain           Bound    default/minio-pvc-claim   my-local-storage            4m42s

Step 5 – Deploy the MinIO Service

We will create a service to expose port 9000. The service can be deployed as NodePort, ClusterIP, or LoadBalancer. Create the service file as below:

vim Minio-svc.yml

Add the lines below to the file.

apiVersion: v1
kind: Service
metadata:
  # This name uniquely identifies the service
  name: minio-service
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    # Looks for labels `app:minio` in the namespace and applies the spec
    app: minio

Apply the settings:

kubectl create -f Minio-svc.yml

Verify that the service is running:

# kubectl get svc
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP      10.96.0.1        <none>        443/TCP          15m
minio-service   LoadBalancer   10.103.101.128   <pending>     9000:32278/TCP   27s

Step 6 – Access the MinIO Web UI

At this point, the MinIO service has been exposed on node port 32278. Proceed and access the web UI using the URL http://Node_IP:32278

Enter the MinIO access key and secret key set earlier to log in. On successful authentication, you should see the MinIO web console. Create a bucket, say test bucket, and upload files to it; the uploaded files will appear in the bucket. You can also set the bucket policy.

Step 7 – Manage MinIO using the mc client

The MinIO Client (mc) is a tool used to manage the MinIO server by providing UNIX-like commands such as ls, rm, cat, mv, mirror, cp, etc. The MinIO Client is installed from the upstream binaries as below.

##For amd64
wget https://dl.min.io/client/mc/release/linux-amd64/mc

##For ppc64
wget https://dl.min.io/client/mc/release/linux-ppc64le/mc

Move the file to your path and make it executable:

sudo cp mc /usr/local/bin/
sudo chmod +x /usr/local/bin/mc

Verify the installation.

$ mc --version
mc version RELEASE.2022-02-16T05-54-01Z

Once installed, connect to the MinIO server with the syntax:

mc alias set <ALIAS> <MINIO-ENDPOINT-URL> [YOUR-ACCESS-KEY] [YOUR-SECRET-KEY] [--api API-SIGNATURE]

For this guide, the command will be:

mc alias set minio http://192.168.205.11:32278 minio minio123 --api S3v4

Remember to specify the right port for the MinIO server. You can use the IP address of any node in the cluster. Once connected, list all the buckets using the command:

mc ls play minio

You can list files in a bucket, say the test bucket, with the command:

$ mc ls play minio/test
[2022-03-16 04:07:15 EDT]     0B 00000qweqwe/
[2022-03-16 05:31:53 EDT]     0B 000tut/
[2022-03-18 07:50:35 EDT]     0B 001267test/
[2022-03-16 21:03:34 EDT]     0B 3f66b017508b449781b927e876bbf640/
[2022-03-16 03:20:13 EDT]     0B 6210d9e5011632646d9b2abb/
[2022-03-16 07:05:02 EDT]     0B 622f997eb0a7c5ce72f6d199/
[2022-03-17 08:46:05 EDT]     0B 85x8nbntobfws58ue03fam8o5cowbfd3/
[2022-03-16 14:59:37 EDT]     0B 8b437f27dbac021c07d9af47b0b58290/
[2022-03-17 21:29:33 EDT]     0B abc/
.....
[2022-03-16 11:55:55 EDT]     0B zips/
[2022-03-17 11:05:01 EDT]     0B zips202203/
[2022-03-18 09:18:36 EDT] 262KiB STANDARD Install cPanel|WHM on AlmaLinux with Let's Encrypt 7.png

Create a new bucket using the syntax:

mc mb minio/<BUCKET-NAME>

For example, to create a bucket with the name testbucket1:

$ mc mb minio/testbucket1
Bucket created successfully `minio/testbucket1`.

The bucket will be available in the console.
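With the alias configured, mc can also move data in and out of the cluster from the command line. A few illustrative commands – the local paths and bucket names below are made up for the example:

##Upload a single file to a bucket
mc cp ./backup.tar.gz minio/testbucket1/

##Mirror a local directory into a bucket (one-way sync)
mc mirror ./reports/ minio/testbucket1/reports/

##Download an object back to the local filesystem
mc cp minio/testbucket1/backup.tar.gz /tmp/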
In case you need help when using the MinIO client, get help using the command:

$ mc --help
NAME:
  mc - MinIO Client for cloud storage and filesystems.

USAGE:
  mc [FLAGS] COMMAND [COMMAND FLAGS | -h] [ARGUMENTS...]
COMMANDS:
  alias      manage server credentials in configuration file
  ls         list buckets and objects
  mb         make a bucket
  rb         remove a bucket
  cp         copy objects
  mv         move objects
  rm         remove object(s)
  mirror     synchronize object(s) to a remote site
  cat        display object contents
  head       display first 'n' lines of an object
  pipe       stream STDIN to an object
  find       search for objects
  sql        run sql queries on objects
  stat       show object metadata
  tree       list buckets and objects in a tree format
  du         summarize disk usage recursively
  retention  set retention for object(s)
  legalhold  manage legal hold for object(s)
  support    support related commands
  share      generate URL for temporary access to an object
  version    manage bucket versioning
  ilm        manage bucket lifecycle
  encrypt    manage bucket encryption config
  event      manage object notifications
  watch      listen for object notification events
  undo       undo PUT/DELETE operations
  anonymous  manage anonymous access to buckets and objects
  tag        manage tags for bucket and object(s)
  diff       list differences in object name, size, and date between two buckets
  replicate  configure server side bucket replication
  admin      manage MinIO servers
  update     update mc to latest release

Conclusion

That marks the end of this guide. We have gone through how to deploy and manage MinIO storage clusters on Kubernetes: we created a persistent volume, a persistent volume claim, and a MinIO deployment exposed through a service. I hope you found it useful.
0 notes
Text
Decision Trees
Decision tree: a mechanism for making decisions based on a series of "if" factors
Root node – the decision that matters the most, ergo the decision that precedes all others
Leaf node – the last outcome of a decision tree branch
format: if (node1) and (node2) and (node3)...and (node n), then (leaf node: final decision)
determining the root node: intuitive in a simple human context BUT with data, one must use an algorithm
utilize mechanisms from previous studies (i.e., data visualization with histograms, regression) to find one that gives the most classifying information
each node is "greedily constructed," i.e., the model extracts as much information as it can for an individual node without thinking about info that could be derived from future nodes
so the decision tree algorithm is run recursively, again and again, until the data is sifted perfectly into individual categories
you can use the same features over and over again (recursive!!) to separate out data at different levels of "purity"
each time there is new data, you can run them through the tree and observe where they end up!
growing a tree
split the parent node and pick the feature that results in the largest "goodness of fit"
repeat until the child nodes are pure, i.e., until the "goodness of fit" measure <= 0
preventing tree overfitting
overfitting: too pure, too in-depth tree, too many nodes that are specifically fitted to the given dataset rather than the real-world function
set a depth cut-off (max tree depth)
set a min. number of data points in each node
stop growing tree if further splits are not statistically significant
and cost-complexity pruning: utilizing some alpha regularization threshold to prevent overfit
cost-complexity pruning
regularizing the decision tree via alpha at the cost of tree purity
so as alpha increases, impurity also increases. number of nodes and depth of tree decrease
real-world accuracy increases up to a certain point and then decreases – as the prevention of overfitting via regularization turns into underfitting (see the sketch below)
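A minimal scikit-learn sketch of the ideas above – growing a greedy tree, capping depth and minimum leaf size, and sweeping the cost-complexity pruning alpha. The dataset and the alpha values are placeholders I picked for illustration, not from the notes:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# toy dataset just for illustration
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# plain greedy tree, grown until leaves are pure (prone to overfitting)
full_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# overfitting guards: depth cut-off and minimum number of samples per leaf
capped_tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=5,
                                     random_state=0).fit(X_train, y_train)

# cost-complexity pruning: larger alpha -> more impurity, fewer nodes, shallower tree
for alpha in (0.0, 0.001, 0.01, 0.05):
    pruned = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0).fit(X_train, y_train)
    print(f"alpha={alpha:<5} depth={pruned.get_depth():<3} "
          f"leaves={pruned.get_n_leaves():<4} test acc={pruned.score(X_test, y_test):.3f}")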
1 note · View note
data-node2 · 2 years
Text
Tumblr media
Source: Johnny Mnemonic
91 notes · View notes
cybervermin · 5 years
Photo
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
My Tumblr Crushes:
o-blivia
fvtvre-p0rn
drkftr
graylok
omgbulrefol
data-node2
rhubarbes
dustrial-inc
xbejonson
28 notes · View notes
devran · 4 years
Photo
Tumblr media
source
23 notes · View notes
roguetelemetry · 5 years
Photo
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
joe_kanno on instagram
syd mead hot wheels Sentinel 400 Limo custom painted & weathered
originally posted by http://data-node2.tumblr.com/
568 notes · View notes
hemplord · 4 years
Text
rules: answer 20 questions, then tag 20 bloggers you want to get to know better
i was tagged by @tessaigavirtual
name: fawn
nickname(s): fern, phaubio, fff, fern the taco, motherfawn, fawnthemom, baby deer
zodiac sign: libra
height: 5′8″
languages: english, lil bit of french & little bit of sign language
nationality: american
fav season: fall
fav song: too many to name but right now but alrighty aphrodite- peach pit, lover chanting- little dragon, diamonds- mat kerekes
fav scent: weed, rain, fresh grass, fresh bread
fav colour: red
fav animal: kiwi
fav fictional character: kahlan amnell
coffee, tea, or hot chocolate: coffee
average sleep hours: 5-9
dog or cat person: cat
number of blankets you sleep with: one
dream trip: japan
blog established: 2012
following/followers: idr off the top of my head
random fact: I have a stripper pole in my bedroom
I tag
@exoneon @bluue-hydrangea @data-node2 @mirayama @love-personal @yesterdaysprint @hallucination @elliipses @vhspositive @queerxoh
1 note · View note
globalmediacampaign · 3 years
Text
Inconsistent voting in PXC
AKA Cluster Error Voting

What is Cluster Error Voting (CEV)?

"Cluster Error Voting is a new feature implemented by Alexey Yurchenko, and it is a protocol for nodes to decide how the cluster will react to problems in replication. When one or several nodes have an issue to apply an incoming transaction(s) (e.g. suspected inconsistency), this new feature helps. In a 5-node cluster, if 2-nodes fail to apply the transaction, they get removed and a DBA can go in to fix what went wrong so that the nodes can rejoin the cluster." (Seppo Jaakola)

This feature was ported to Percona PXC in version 8.0.21 and, as indicated above, it is about increasing the resilience of the cluster, especially when TWO nodes fail to operate and may drop from the cluster abruptly. The protocol is activated in a cluster with any number of nodes.

Before CEV, if a node had a problem/error during a transaction, the node having the issue would just report the error in its own log and exit the cluster:

2021-04-23T15:18:38.568903Z 11 [ERROR] [MY-010584] [Repl] Slave SQL: Could not execute Write_rows event on table test.test_voting; Duplicate entry '21' for key 'test_voting.PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log FIRST, end_log_pos 0, Error_code: MY-001062
2021-04-23T15:18:38.568976Z 11 [Warning] [MY-000000] [WSREP] Event 3 Write_rows apply failed: 121, seqno 16
2021-04-23T15:18:38.569717Z 11 [Note] [MY-000000] [Galera] Failed to apply write set: gtid: 224fddf7-a43b-11eb-84d5-2ebf2df70610:16 server_id: d7ae67e4-a43c-11eb-861f-8fbcf4f1cbb8 client_id: 40 trx_id: 115 flags: 3
2021-04-23T15:18:38.575439Z 11 [Note] [MY-000000] [Galera] Closing send monitor...
2021-04-23T15:18:38.575578Z 11 [Note] [MY-000000] [Galera] Closed send monitor.
2021-04-23T15:18:38.575647Z 11 [Note] [MY-000000] [Galera] gcomm: terminating thread
2021-04-23T15:18:38.575737Z 11 [Note] [MY-000000] [Galera] gcomm: joining thread
2021-04-23T15:18:38.576132Z 11 [Note] [MY-000000] [Galera] gcomm: closing backend
2021-04-23T15:18:38.577954Z 11 [Note] [MY-000000] [Galera] Current view of cluster as seen by this node view (view_id(NON_PRIM,3206d174,5) memb { 727c277a,1 } joined { } left { } partitioned { 3206d174,1 d7ae67e4,1 } )
2021-04-23T15:18:38.578109Z 11 [Note] [MY-000000] [Galera] PC protocol downgrade 1 -> 0
2021-04-23T15:18:38.578158Z 11 [Note] [MY-000000] [Galera] Current view of cluster as seen by this node view ((empty))
2021-04-23T15:18:38.578640Z 11 [Note] [MY-000000] [Galera] gcomm: closed
2021-04-23T15:18:38.578747Z 0 [Note] [MY-000000] [Galera] New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1

While the other nodes would "just" report the node as out of the view:

2021-04-23T15:18:38.561402Z 0 [Note] [MY-000000] [Galera] forgetting 727c277a (tcp://10.0.0.23:4567)
2021-04-23T15:18:38.562751Z 0 [Note] [MY-000000] [Galera] Node 3206d174 state primary
2021-04-23T15:18:38.570411Z 0 [Note] [MY-000000] [Galera] Current view of cluster as seen by this node view (view_id(PRIM,3206d174,6) memb { 3206d174,1 d7ae67e4,1 } joined { } left { } partitioned { 727c277a,1 } )
2021-04-23T15:18:38.570679Z 0 [Note] [MY-000000] [Galera] Save the discovered primary-component to disk
2021-04-23T15:18:38.574592Z 0 [Note] [MY-000000] [Galera] forgetting 727c277a (tcp://10.0.0.23:4567)
2021-04-23T15:18:38.574716Z 0 [Note] [MY-000000] [Galera] New COMPONENT: primary = yes, bootstrap = no, my_idx = 1, memb_num = 2
2021-04-23

With CEV we have a different process. Let us review it with images first.
Let us start with a cluster based on three nodes, where only one works as the Primary. The Primary writes and, as expected, the writesets are distributed to all nodes.

insert into test_voting values(null,REVERSE(UUID()), NOW());

select * from test_voting;
+----+--------------------------------------+---------------------+
| id | what                                 | when                |
+----+--------------------------------------+---------------------+
|  3 | 05de43720080-938a-be11-305a-6d135601 | 2021-04-24 14:43:34 |
|  6 | 05de43720080-938a-be11-305a-7eb60711 | 2021-04-24 14:43:36 |
|  9 | 05de43720080-938a-be11-305a-6861c221 | 2021-04-24 14:43:37 |
| 12 | 05de43720080-938a-be11-305a-d43f0031 | 2021-04-24 14:43:38 |
| 15 | 05de43720080-938a-be11-305a-53891c31 | 2021-04-24 14:43:39 |
+----+--------------------------------------+---------------------+
5 rows in set (0.00 sec)

Then some inexperienced DBA performs a manual operation on a secondary node using the very unsafe wsrep_on feature, and then, by mistake or because he did not understand what he was doing:

insert into test_voting values(17,REVERSE(UUID()), NOW());

select * from test_voting;
+----+--------------------------------------+---------------------+
| id | what                                 | when                |
+----+--------------------------------------+---------------------+
|  3 | 05de43720080-938a-be11-305a-6d135601 | 2021-04-24 14:43:34 |
|  6 | 05de43720080-938a-be11-305a-7eb60711 | 2021-04-24 14:43:36 |
|  9 | 05de43720080-938a-be11-305a-6861c221 | 2021-04-24 14:43:37 |
| 12 | 05de43720080-938a-be11-305a-d43f0031 | 2021-04-24 14:43:38 |
| 15 | 05de43720080-938a-be11-305a-53891c31 | 2021-04-24 14:43:39 |
| 16 | 05de43720080-a39a-be11-405a-82715600 | 2021-04-24 14:50:17 |
| 17 | 05de43720080-a39a-be11-405a-f9d62e22 | 2021-04-24 14:51:14 |
| 18 | 05de43720080-a39a-be11-405a-f5624662 | 2021-04-24 14:51:20 |
| 19 | 05de43720080-a39a-be11-405a-cd8cd640 | 2021-04-24 14:50:23 |
+----+--------------------------------------+---------------------+

This, of course, is not in line with the rest of the cluster, which still has the previous data. Then our guy puts the node back. At this point the Primary does another insert in that table and... Houston, we have a problem! The secondary node already has an entry with that ID and cannot perform the insert:

2021-04-24T13:52:51.930184Z 12 [ERROR] [MY-010584] [Repl] Slave SQL: Could not execute Write_rows event on table test.test_voting; Duplicate entry '18' for key 'test_voting.PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log FIRST, end_log_pos 0, Error_code: MY-001062
2021-04-24T13:52:51.930295Z 12 [Warning] [MY-000000] [WSREP] Event 3 Write_rows apply failed: 121, seqno 4928120

But instead of exiting the cluster, it raises a verification through voting:

2021-04-24T13:52:51.932774Z 0 [Note] [MY-000000] [Galera] Member 0(node2) initiates vote on ab5deb8e-389d-11eb-b1c0-36eca47bacf0:4928120,878ded7898c83a72: Duplicate entry '18' for key 'test_voting.PRIMARY', Error_code: 1062;
2021-04-24T13:52:51.932888Z 0 [Note] [MY-000000] [Galera] Votes over ab5deb8e-389d-11eb-b1c0-36eca47bacf0:4928120: 878ded7898c83a72: 1/3 Waiting for more votes.
2021-04-24T13:52:51.936525Z 0 [Note] [MY-000000] [Galera] Member 1(node3) responds to vote on ab5deb8e-389d-11eb-b1c0-36eca47bacf0:4928120,0000000000000000: Success
2021-04-24T13:52:51.936626Z 0 [Note] [MY-000000] [Galera] Votes over ab5deb8e-389d-11eb-b1c0-36eca47bacf0:4928120: 0000000000000000: 1/3 878ded7898c83a72: 1/3 Waiting for more votes.
2021-04-24T13:52:52.003615Z 0 [Note] [MY-000000] [Galera] Member 2(node1) responds to vote on ab5deb8e-389d-11eb-b1c0-36eca47bacf0:4928120,0000000000000000: Success
2021-04-24T13:52:52.003722Z 0 [Note] [MY-000000] [Galera] Votes over ab5deb8e-389d-11eb-b1c0-36eca47bacf0:4928120: 0000000000000000: 2/3 878ded7898c83a72: 1/3 Winner: 0000000000000000

As you can see, each node informs the cluster about the success or failure of the operation, and the majority wins. Once the majority has identified the operation as legitimate, the node that asked for the vote has to leave the cluster:

2021-04-24T13:52:52.038510Z 12 [ERROR] [MY-000000] [Galera] Inconsistency detected: Inconsistent by consensus on ab5deb8e-389d-11eb-b1c0-36eca47bacf0:4928120 at galera/src/replicator_smm.cpp:process_apply_error():1433
2021-04-24T13:52:52.062666Z 12 [Note] [MY-000000] [Galera] Closing send monitor...
2021-04-24T13:52:52.062750Z 12 [Note] [MY-000000] [Galera] Closed send monitor.
2021-04-24T13:52:52.062796Z 12 [Note] [MY-000000] [Galera] gcomm: terminating thread
2021-04-24T13:52:52.062880Z 12 [Note] [MY-000000] [Galera] gcomm: joining thread
2021-04-24T13:52:52.063372Z 12 [Note] [MY-000000] [Galera] gcomm: closing backend
2021-04-24T13:52:52.085853Z 12 [Note] [MY-000000] [Galera] Current view of cluster as seen by this node view (view_id(NON_PRIM,65a111c6-bb0f,23) memb { 65a111c6-bb0f,2 } joined { } left { } partitioned { aae38617-8dd5,2 dc4eaa39-b39a,2 } )
2021-04-24T13:52:52.086241Z 12 [Note] [MY-000000] [Galera] PC protocol downgrade 1 -> 0
2021-04-24T13:52:52.086391Z 12 [Note] [MY-000000] [Galera] Current view of cluster as seen by this node view ((empty))
2021-04-24T13:52:52.150106Z 12 [Note] [MY-000000] [Galera] gcomm: closed
2021-04-24T13:52:52.150340Z 0 [Note] [MY-000000] [Galera] New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1

It is also nice to notice that we now have a decent level of information about what happened in the other nodes as well; the log below is from the Primary:

2021-04-24T13:52:51.932829Z 0 [Note] [MY-000000] [Galera] Member 0(node2) initiates vote on ab5deb8e-389d-11eb-b1c0-36eca47bacf0:4928120,878ded7898c83a72: Duplicate entry '18' for key 'test_voting.PRIMARY', Error_code: 1062;
2021-04-24T13:52:51.978123Z 0 [Note] [MY-000000] [Galera] Votes over ab5deb8e-389d-11eb-b1c0-36eca47bacf0:4928120:
…
2021-04-24T13:52:51.981647Z 0 [Note] [MY-000000] [Galera] Votes over ab5deb8e-389d-11eb-b1c0-36eca47bacf0:4928120: 0000000000000000: 2/3 878ded7898c83a72: 1/3 Winner: 0000000000000000
2021-04-24T13:52:51.981887Z 11 [Note] [MY-000000] [Galera] Vote 0 (success) on ab5deb8e-389d-11eb-b1c0-36eca47bacf0:4928120 is consistent with group. Continue.
2021-04-24T13:52:52.064685Z 0 [Note] [MY-000000] [Galera] declaring aae38617-8dd5 at tcp://10.0.0.31:4567 stable
2021-04-24T13:52:52.064885Z 0 [Note] [MY-000000] [Galera] forgetting 65a111c6-bb0f (tcp://10.0.0.21:4567)
2021-04-24T13:52:52.066916Z 0 [Note] [MY-000000] [Galera] Node aae38617-8dd5 state primary
2021-04-24T13:52:52.071577Z 0 [Note] [MY-000000] [Galera] Current view of cluster as seen by this node view (view_id(PRIM,aae38617-8dd5,24) memb { aae38617-8dd5,2 dc4eaa39-b39a,2 } joined { } left { } partitioned { 65a111c6-bb0f,2 } )
2021-04-24T13:52:52.071683Z 0 [Note] [MY-000000] [Galera] Save the discovered primary-component to disk
2021-04-24T13:52:52.075293Z 0 [Note] [MY-000000] [Galera] forgetting 65a111c6-bb0f (tcp://10.0.0.21:4567)
2021-04-24T13:52:52.075419Z 0 [Note] [MY-000000] [Galera] New COMPONENT: primary = yes, bootstrap = no, my_idx = 1, memb_num = 2

At this point a DBA can start to investigate and manually fix the inconsistency so the node can rejoin the cluster. In the meantime, the rest of the cluster continues to operate:

+----+--------------------------------------+---------------------+
| id | what                                 | when                |
+----+--------------------------------------+---------------------+
|  3 | 05de43720080-938a-be11-305a-6d135601 | 2021-04-24 14:43:34 |
|  6 | 05de43720080-938a-be11-305a-7eb60711 | 2021-04-24 14:43:36 |
|  9 | 05de43720080-938a-be11-305a-6861c221 | 2021-04-24 14:43:37 |
| 12 | 05de43720080-938a-be11-305a-d43f0031 | 2021-04-24 14:43:38 |
| 15 | 05de43720080-938a-be11-305a-53891c31 | 2021-04-24 14:43:39 |
| 18 | 05de43720080-938a-be11-405a-d02c7bc5 | 2021-04-24 14:52:51 |
+----+--------------------------------------+---------------------+

Conclusion

Cluster Error Voting (CEV) is a nice feature to have. It helps to better understand what goes wrong, and it increases the stability of the cluster, which with voting has a better way to manage node expulsion. Another aspect is visibility: never underestimate the fact that the information is also available on other nodes. Having it on multiple nodes may help investigations in case the log on the failing node gets lost (for any reason). We still do not have active tuple certification, but this is a good step, especially given the history of data drift we have seen in PXC/Galera in these 12 years of use.

My LAST comment is that while I agree WSREP_ON can be a very powerful tool in the hands of experts, as indicated in my colleague's blog https://www.percona.com/blog/2019/03/25/how-to-perform-compatible-schema-changes-in-percona-xtradb-cluster-advanced-alternative/ , that option remains DANGEROUS, and you should never use it UNLESS your name is Przemysław Malkowski and you really know what you are doing.

Great MySQL to everybody!

References
https://www.percona.com/doc/percona-xtradb-cluster/8.0/release-notes/Percona-XtraDB-Cluster-8.0.21-12.1.html
https://youtu.be/LbaCyr9Soco
http://www.tusacentral.com/joomla/index.php/mysql-blogs/236-inconsistent-voting-in-pxc
0 notes
cluboftigerghost · 7 years
Photo
Tumblr media
data-node2: snek.  http://ift.tt/2xeKE2F
1 note · View note
phantom--planet · 3 years
Text
Tumblr media
I got tagged by @idyll-ism for a selfie so here's a tired slightly buzzed selfie.
I'll tag:
@eliminated @understands @data-node2 @ishanijasmin @goodenoughforjazz @luneri
4 notes · View notes
programmingsolver · 5 years
Text
Hash Tables Solution
Part I: The Hash Table
Your textbook introduces a chained hash table class, Table. You are to download node2.h, table2.h,
and complete the hash table implementation.
Part II: The Customer Class
*Create a class Customer which implements the specification below:
Instance data needed to be maintained by class Customer:
a Customer’s name (a string)
a Customer’s address (a string)
a Customer’s…
View On WordPress
0 notes
myprogrammingsolver · 5 years
Text
Project 7 Hash Tables Solution
Part I:   The Hash Table
  Your textbook introduces a chained hash table class,  Table.  You are to download node2.h, table2.h,
and complete the hash table implementation.
  Part II:   The Customer Class
  *Create  a class Customer which implements the specification below:
  Instance data needed to be maintained by class Customer:
a Customer’s name (a string)
a Customer’s address (a string)
a…
View On WordPress
0 notes
data-node2 · 3 years
Text
Tumblr media
Source: edward delandre Xenomorph.
92 notes · View notes
edulissy · 5 years
Text
Project 7 Hash Tables Solution
Part I:   The Hash Table
  Your textbook introduces a chained hash table class,  Table.  You are to download node2.h, table2.h,
and complete the hash table implementation.
  Part II:   The Customer Class
  *Create  a class Customer which implements the specification below:
  Instance data needed to be maintained by class Customer:
a Customer’s name (a string)
a Customer’s address (a string)
a…
View On WordPress
0 notes
devran · 4 years
Photo
Tumblr media
Source
24 notes · View notes
sysnotes · 5 years
Text
Messy Notes on CoreOS MatchBox
CoreOS Matchbox Setup Notes.
dnsmasq
interface=eth1
bind-interfaces
dhcp-range=10.16.0.10,10.16.0.99,255.255.255.0,24h
dhcp-option=option:router,10.16.0.1
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/srv/tftp
dhcp-match=gpxe,175 # gPXE sends a 175 option.
dhcp-boot=net:#gpxe,undionly.kpxe
dhcp-boot=http://10.16.0.1/boot.ipxe
address=/node1/10.16.0.101
address=/node2/10.16.0.102
address=/node3/10.16.0.103
profiles json:
{
  "id": "bootkube",
  "name": "bootkube",
  "cloud_id": "",
  "ignition_id": "bootkube.yml",
  "generic_id": "",
  "boot": {
    "kernel": "/assets/vmlinuz",
    "initrd": ["/assets/cpio.gz"],
    "args": [
      "root=/dev/vda1",
      "coreos.config.url=http://10.16.0.1/ignition?uuid=${uuid}&mac=${mac:hexhyp}",
      "coreos.first_boot=yes",
      "coreos.autologin"
    ]
  }
}
groups json:
{
  "name": "bootkube1",
  "profile": "bootkube",
  "selector": {
    "mac": "52:54:00:90:c3:6e"
  },
  "metadata": {
    "domain_name": "node1",
    "ADVERTISE_IP": "10.16.0.101",
    "SERVER_IP": "10.16.0.1",
    "etcd_initial_cluster": "node1=http://10.16.0.101:2380,node2=http://10.16.0.102:2380,node3=http://10.16.0.103:2380",
    "etcd_name": "node1",
    "k8s_dns_service_ip": "10.3.0.10"
  }
}
ignition yml:
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFTHetURpsQ2fkYXhAGMPDPArd4ubKfwRFvtcXtcp/PAnO8LFg4xQCtUbpgj4KoLYZEXblz/woXlm4coXT3C9Sg=
networkd:
  units:
    - name: 005-eth0.network
      contents: |
        [Match]
        Name=eth0
        [Network]
        DNS={{.SERVER_IP}}
        Address={{.ADVERTISE_IP}}/24
        Gateway={{.SERVER_IP}}
etcd:
  version: 3.3.9
  name: {{.etcd_name}}
  advertise_client_urls: http://{{.ADVERTISE_IP}}:2379
  initial_advertise_peer_urls: http://{{.ADVERTISE_IP}}:2380
  listen_client_urls: http://0.0.0.0:2379
  listen_peer_urls: http://0.0.0.0:2380
  initial_cluster: {{.etcd_initial_cluster}}
  #ca_file: /etc/ssl/certs/etcd/etcd/server-ca.crt
  #cert_file: /etc/ssl/certs/etcd/etcd/server.crt
  #key_file: /etc/ssl/certs/etcd/etcd/server.key
  #peer_ca_file: /etc/ssl/certs/etcd/etcd/peer-ca.crt
  #peer_cert_file: /etc/ssl/certs/etcd/etcd/peer.crt
  #peer_key_file: /etc/ssl/certs/etcd/etcd/peer.key
systemd:
  units:
    - name: update-engine.service
      mask: true
    - name: locksmithd.service
      mask: true
    - name: etcd-member.service
      enable: true
    - name: docker.service
      enable: true
    - name: rngd.service
      enable: true
      contents: |
        [Unit]
        Description=Hardware RNG Entropy Gatherer Daemon
        [Service]
        ExecStart=/usr/sbin/rngd -f -r /dev/urandom
        [Install]
        WantedBy=multi-user.target
    - name: get-assets.service
      enable: true
      contents: |
        [Unit]
        Description=Get Bootkube assets
        [Service]
        Type=oneshot
        ExecStart=/usr/bin/wget --cut-dirs=1 -R "index.html*" --recursive -nH http://{{.SERVER_IP}}/assets -P /opt/bootkube/assets
        #ExecStartPre=/usr/bin/wget --cut-dirs=2 -R "index.html*" --recursive -nH http://10.16.0.1/assets/tls -P /etc/ssl/certs/etcd
        #ExecStartPre=/usr/bin/chown etcd:etcd -R /etc/ssl/etcd
        #ExecStartPre=/usr/bin/find /etc/ssl/etcd -type f -exec chmod 600 {} \;
        [Install]
        WantedBy=multi-user.target
    - name: kubelet.service
      enable: true
      contents: |
        [Unit]
        Description=Kubelet via Hyperkube ACI
        [Service]
        EnvironmentFile=/etc/kubernetes/kubelet.env
        Environment="RKT_RUN_ARGS=--uuid-file-save=/var/cache/kubelet-pod.uuid \
          --volume=resolv,kind=host,source=/etc/resolv.conf \
          --mount volume=resolv,target=/etc/resolv.conf \
          --volume var-lib-cni,kind=host,source=/var/lib/cni \
          --mount volume=var-lib-cni,target=/var/lib/cni \
          --volume opt-cni-bin,kind=host,source=/opt/cni/bin \
          --mount volume=opt-cni-bin,target=/opt/cni/bin \
          --volume var-log,kind=host,source=/var/log \
          --mount volume=var-log,target=/var/log \
          --insecure-options=image"
        ExecStartPre=/bin/mkdir -p /opt/cni/bin
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
        ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
        ExecStartPre=/bin/mkdir -p /var/lib/cni
        ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
        ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/cache/kubelet-pod.uuid
        ExecStart=/usr/lib/coreos/kubelet-wrapper \
          --anonymous-auth=false \
          --cluster-dns={{.k8s_dns_service_ip}} \
          --cluster-domain=cluster.local \
          --client-ca-file=/etc/kubernetes/ca.crt \
          --pod-manifest-path=/etc/kubernetes/manifests \
          --feature-gates=AttachVolumeLimit=false \
          --cni-conf-dir=/etc/kubernetes/cni/net.d \
          --exit-on-lock-contention \
          --kubeconfig=/etc/kubernetes/kubeconfig \
          --lock-file=/var/run/lock/kubelet.lock \
          --network-plugin=cni \
          --node-labels=node-role.kubernetes.io/master \
          --register-with-taints=node-role.kubernetes.io/master=:NoSchedule
        ExecStop=-/usr/bin/rkt stop --uuid-file=/var/cache/kubelet-pod.uuid
        Restart=always
        RestartSec=10
        [Install]
        WantedBy=multi-user.target
    - name: bootkube.service
      #enable: true
      contents: |
        [Unit]
        Description=Bootstrap a Kubernetes control plane with a temp api-server
        [Service]
        Type=simple
        WorkingDirectory=/opt/bootkube
        ExecStart=/opt/bootkube/bootkube-start
        [Install]
        WantedBy=multi-user.target
storage:
  disks:
    - device: /dev/vda
      wipe_table: true
      partitions:
        - label: ROOT
  filesystems:
    - name: root
      mount:
        device: "/dev/vda1"
        format: "ext4"
        create:
          force: true
          options:
            - "-LROOT"
  files:
    - path: /etc/kubernetes/kubeconfig
      filesystem: root
      mode: 0644
      contents:
        remote:
          url: http://{{.SERVER_IP}}/assets/auth/kubeconfig
    - path: /etc/kubernetes/kubelet.env
      filesystem: root
      mode: 0644
      contents:
        inline: |
          KUBELET_IMAGE_URL=docker://gcr.io/google_containers/hyperkube
          KUBELET_IMAGE_TAG=v1.12.1
    - path: /etc/hostname
      filesystem: root
      mode: 0644
      contents:
        inline: {{.domain_name}}
    - path: /etc/sysctl.d/max-user-watches.conf
      filesystem: root
      contents:
        inline: |
          fs.inotify.max_user_watches=16184
    - path: /opt/bootkube/bootkube-start
      filesystem: root
      mode: 0544
      contents:
        inline: |
          #!/bin/bash
          set -e
          BOOTKUBE_ACI="${BOOTKUBE_ACI:-quay.io/coreos/bootkube}"
          BOOTKUBE_VERSION="${BOOTKUBE_VERSION:-v0.14.0}"
          #BOOTKUBE_VERSION="${BOOTKUBE_VERSION:-v0.9.1}"
          BOOTKUBE_ASSETS="${BOOTKUBE_ASSETS:-/opt/bootkube/assets}"
          exec /usr/bin/rkt run \
            --trust-keys-from-https \
            --volume assets,kind=host,source=$BOOTKUBE_ASSETS \
            --mount volume=assets,target=/assets \
            --volume bootstrap,kind=host,source=/etc/kubernetes \
            --mount volume=bootstrap,target=/etc/kubernetes \
            $RKT_OPTS \
            ${BOOTKUBE_ACI}:${BOOTKUBE_VERSION} \
            --net=host \
            --dns=host \
            --exec=/bootkube -- start --asset-dir=/assets "$@"
bootkube render --asset-dir=bootkube-assets \
  --api-servers=https://10.16.0.101:6443,https://10.16.0.102:6443,https://10.16.0.103:6443 \
  --api-server-alt-names=IP=10.16.0.101,IP=10.16.0.102,IP=10.16.0.103 \
  --etcd-servers=http://10.16.0.101:2379,http://10.16.0.102:2379,http://10.16.0.103:2379 \
  --network-provider experimental-canal
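A quick sanity check I find useful once matchbox and dnsmasq are up – these endpoints are the ones referenced in the configs above (boot.ipxe from dnsmasq, /ignition from the profile's kernel args, /assets from the get-assets unit), assuming matchbox answers on 10.16.0.1:80 as those URLs imply:

curl http://10.16.0.1/boot.ipxe
curl "http://10.16.0.1/ignition?mac=52:54:00:90:c3:6e"
curl -I http://10.16.0.1/assets/vmlinuz

Each of these should return content rather than a 404 before attempting a PXE boot of the nodes.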
0 notes