IT/Clustered Nextcloud
Why?
- more robust: a server can go down and Nextcloud keeps running (depending on the implementation)
Why not?
- needs more hardware (e.g. at least three nodes for Galera)
- more complex
Upgrade Paths
- Distributed DB
  - MariaDB clustered with Galera (minimal config sketch below)
    - it was recommended to use only one DB node for writes from all instances (Galera does not support WRITE_COMMIT, which Nextcloud depends on)
    - that may have changed since, but there seems to be no public documentation on clustering for newer versions (this is probably aimed more at enterprise customers)
    - reads can be done from all DB nodes
- Distributed FS
  - GlusterFS
  - Ceph
  - ...
  - alternatively just shared file storage, e.g. NFS (mount sketch below)
  - alternatively Syncthing (maybe more for an active-failover setup with high latency in between?)
- Load Balancer/Failover
  - HAProxy (config sketch below)
  - Keepalived (config sketch below)
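
A minimal sketch of what the Galera side could look like on each DB node (all hostnames and IPs are placeholders, not from this page; settings should be checked against the MariaDB docs for the deployed version):

  # /etc/mysql/conf.d/galera.cnf -- same on every DB node except the node-specific lines
  [galera]
  wsrep_on                 = ON
  wsrep_provider           = /usr/lib/galera/libgalera_smm.so
  wsrep_cluster_name       = nextcloud-galera
  # all three node addresses are placeholder values
  wsrep_cluster_address    = gcomm://10.0.0.11,10.0.0.12,10.0.0.13
  wsrep_node_address       = 10.0.0.11   # this node's own IP
  wsrep_node_name          = db1
  # Galera requires InnoDB and row-based binlog
  binlog_format            = ROW
  default_storage_engine   = InnoDB
  innodb_autoinc_lock_mode = 2

The first node is bootstrapped with galera_new_cluster; the other nodes then join via a normal mariadb service start.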
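For the simple shared-storage variant, a sketch of mounting the same NFS export on every app node and pointing Nextcloud's data directory at it (server name, export path and mount point are assumptions):

  # /etc/fstab on each Nextcloud app node
  nfs1.example.org:/export/ncdata  /mnt/ncdata  nfs4  rw,hard,noatime  0  0

  // /var/www/nextcloud/config/config.php (excerpt) -- config.php itself must
  // also be kept identical across all app nodes
  'datadirectory' => '/mnt/ncdata',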
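A minimal HAProxy sketch covering both roles: round-robin over the app nodes for HTTP, plus a TCP listener that sends all SQL traffic to a single Galera node with the others as failover-only backups, per the single-writer note above (all IPs are placeholders; TLS is left out for brevity):

  # /etc/haproxy/haproxy.cfg (excerpt)
  frontend nextcloud_http
      bind *:80
      mode http
      default_backend nextcloud_apps

  backend nextcloud_apps
      mode http
      balance roundrobin
      option httpchk GET /status.php    # Nextcloud's built-in status endpoint
      server app1 10.0.0.21:80 check
      server app2 10.0.0.22:80 check

  # single-writer access to Galera: db2/db3 only take over if db1 is down
  listen galera_sql
      bind *:3306
      mode tcp
      server db1 10.0.0.11:3306 check
      server db2 10.0.0.12:3306 check backup
      server db3 10.0.0.13:3306 check backup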
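Keepalived then makes the load balancer itself redundant by floating a virtual IP (VRRP) between two HAProxy hosts, so clients only ever talk to the VIP. A sketch for the primary (VIP, interface, router ID and password are placeholders):

  # /etc/keepalived/keepalived.conf on the primary LB
  vrrp_instance NEXTCLOUD_VIP {
      state MASTER          # use state BACKUP and a lower priority on the second LB
      interface eth0
      virtual_router_id 51
      priority 150
      advert_int 1
      authentication {
          auth_type PASS
          auth_pass changeme
      }
      virtual_ipaddress {
          10.0.0.100/24     # the address clients connect to
      }
  }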
Resources
- Example project: https://help.nextcloud.com/t/help-me-test-this-3-node-cluster/12863/48
- HAProxy: https://www.haproxy.com/blog/how-to-run-haproxy-with-docker
- Keepalived (VRRP service): https://www.keepalived.org/