
[Feature Request] Add capability in VitessShardTabletPool CRD to attach additional disks to each vttablet #633

Open
ajit-pendse opened this issue Oct 30, 2024 · 2 comments

Comments

@ajit-pendse

Use cases where a different disk is needed per vttablet (e.g. for binlogs) cannot be implemented with the current structure. A capability similar to the dataVolumeClaimTemplate used for the data disk would help handle such use cases.

The current VitessShardTabletPool CRD allows attaching extraVolumes, but each volume can point to only one PVC. This leads to multiple pods trying to mount the same PVC.
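To illustrate the limitation, here is a hypothetical snippet of what is possible today (the names binlogs and binlog-pvc are made up for the example; extraVolumes/extraVolumeMounts usage is assumed from the tablet pool spec):

```yaml
tabletPools:
- cell: zone1
  extraVolumes:
  - name: binlogs
    persistentVolumeClaim:
      claimName: binlog-pvc   # one fixed claim name, so every pod in the pool mounts the same PVC
  extraVolumeMounts:
  - name: binlogs
    mountPath: /vt/vtbinlogs
```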

@frouioui
Member

frouioui commented Dec 2, 2024

Hi @ajit-pendse, from my understanding the VitessShardTabletPool CRD has an ExtraVolumes field, which is a slice of Volume. Each Volume can be defined to point to any PVC or any other volume type. However, these ExtraVolumes are added to every pod created by VitessShardTabletPool (every pod in the shard), which confirms your sentence:

This will lead to multiple pods trying to mount same PVC.

What behavior do you expect exactly? It would be helpful if you could enhance your YAML file to show how you would define this per-pod extra volume.

@ajit-pendse
Author

Thanks for looking into this @frouioui. The use case is mounting two or more disks for each vttablet. To clarify further: these are not expected to be shared disks mounted on every vttablet, but separate disks, one per vttablet (apart from the one mounted using the current dataVolumeClaimTemplate).

Related slack discussion thread - https://vitess.slack.com/archives/CNE9WP677/p1730206227730049

A simple solution could be to add a binlogVolumeClaimTemplate (similar to dataVolumeClaimTemplate) to handle the specific use case of an additional disk for binlogs. To make it more generic, dataVolumeClaimTemplate could instead be renamed and converted into an array, with each entry also specifying its mount point.

A sample YAML section for the first case would look like this (taken from the VitessCluster CRD):

partitionings:
- equal:
  ...
  shardTemplate:
    ...
    tabletPools:
    - cell: zone1
      vttablet:
      ...
      mysqld:
      ...
      dataVolumeClaimTemplate:
        storageClassName: vitess-data-sc
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 256Gi
      binlogVolumeClaimTemplate:
        storageClassName: vitess-binlog-sc
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 128Gi

A sample YAML section for the second case would look like this (taken from the VitessCluster CRD):

partitionings:
- equal:
  ...
  shardTemplate:
    ...
    tabletPools:
    - cell: zone1
      vttablet:
      ...
      mysqld:
      ...
      volumeClaimTemplates:
      - mountPoint: /vt/vtdataroot
        storageClassName: vitess-data-sc
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 256Gi
      - mountPoint: /vt/vtbinlogs
        storageClassName: vitess-binlog-sc
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 128Gi

The mountPoint: /vt/vtbinlogs here will depend on the configuration override added under mysqld.configOverrides.
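For example, such an override might look like this (a hypothetical fragment; the log_bin path and base name binlog are made up to match the /vt/vtbinlogs mount point above):

```yaml
mysqld:
  configOverrides: |
    # point MySQL binary logging at the dedicated binlog disk
    log_bin = /vt/vtbinlogs/binlog
```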

Hope that clarifies the expected behavior. Happy to provide more details if required.
