Initially I wrote this script and job for myself, but I found that it can be useful for others as well. This article is not a "use as is" guide; it needs some deeper knowledge to adapt it to your environment. So use my post as a skeleton and modify it according to your needs.
Nowadays perhaps the most advanced and most widely used log management and analysis system is the ELK stack. I also have to mention Graylog and Grafana Loki, which are likewise great and advanced tools for monitoring your environments and collecting their logs.
There is another enterprise-ready and feature-rich log management system which is based on Elasticsearch and Kibana: OpenSearch. If you are looking for a free alternative to Elasticsearch, you may want to give OpenSearch a try. I'm going to post about OpenSearch as well, but this time I want to show you a method to install Elasticsearch & Kibana on your Kubernetes cluster.
In this guide I show you how I use mkdocs in my Kubernetes cluster.
There are countless ways to do something similar, but I think this post can be useful for you.
The main aspect of my solution is to build the documentation in an init container and serve the pages with nginx. This way the only thing you have to do is roll out the deployment after a new version of your mkdocs site is released (pushed to your repository).
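Just to illustrate the idea, here is a minimal sketch of such a deployment (the repository URL and image names are placeholders; any image that has mkdocs and git installed will do):

```sh
# Sketch: an init container clones the repo and builds the site into a shared
# emptyDir volume, then nginx serves the static output.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mkdocs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mkdocs
  template:
    metadata:
      labels:
        app: mkdocs
    spec:
      volumes:
        - name: site
          emptyDir: {}
      initContainers:
        - name: build
          image: squidfunk/mkdocs-material   # placeholder: any image with mkdocs + git
          command: ["/bin/sh", "-c"]
          args:
            - >
              git clone https://example.com/your/docs-repo.git /src &&
              mkdocs build -f /src/mkdocs.yml --site-dir /site
          volumeMounts:
            - name: site
              mountPath: /site
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
          volumeMounts:
            - name: site
              mountPath: /usr/share/nginx/html
EOF

# After pushing a new version of the documentation, just restart the rollout:
kubectl rollout restart deployment/mkdocs
```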
But if you have a Kubernetes cluster, you may want to install Jitsi on your cluster.
I found another article about this topic, but it takes a slightly different approach than my solution: https://sesamedisk.com/video-conferencing-with-jitsi-on-k8s/
The most notable difference is that I use only one deployment for all components (web, prosody, jicofo and jvb): one pod, multiple containers. This approach makes it almost impossible to scale your Jitsi instance, but it is more than enough for a minimal deployment; running multiple instances of Jitsi is out of scope for now.
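To illustrate the layout, a trimmed sketch of such a deployment could look like the one below. The real manifest also needs the environment variables for the XMPP domains and component secrets (as in the official Jitsi Docker setup), which I left out here:

```sh
# Sketch only: all four Jitsi components in a single pod, so they can reach
# each other on localhost. Pin all four images to the same release tag.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jitsi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jitsi
  template:
    metadata:
      labels:
        app: jitsi
    spec:
      containers:
        - name: web
          image: jitsi/web
          ports:
            - containerPort: 80
        - name: prosody
          image: jitsi/prosody
        - name: jicofo
          image: jitsi/jicofo
        - name: jvb
          image: jitsi/jvb
          ports:
            - containerPort: 10000
              protocol: UDP
EOF
```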
Unfortunately, the official Jitsi documentation does not say much about scaling: https://jitsi.github.io/handbook/docs/devops-guide/devops-guide-manual
I want to see all the logs from my containers in one place.
I already have a Kubernetes cluster at home with Kibana and Elasticsearch deployed for cluster logging, so the obvious choice is to use this existing logging stack to collect the logs from my Docker hosts as well.
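One possible way to ship those logs, sketched below, is to run Filebeat on each Docker host and point it at the Elasticsearch instance running in the cluster. The hostname, the Filebeat version and the missing authentication/TLS settings are assumptions you have to adapt to your cluster:

```sh
# Minimal Filebeat config: read all Docker container logs on this host and
# send them to Elasticsearch (placeholder host; add credentials/TLS as needed).
cat > filebeat.yml <<'EOF'
filebeat.inputs:
  - type: container
    paths:
      - /var/lib/docker/containers/*/*.log
output.elasticsearch:
  hosts: ["http://elasticsearch.example.com:9200"]
EOF

# Run Filebeat itself as a container on the Docker host.
docker run -d --name filebeat --user root \
  -v "$(pwd)/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro" \
  -v /var/lib/docker/containers:/var/lib/docker/containers:ro \
  docker.elastic.co/beats/filebeat:8.12.0 \
  filebeat -e --strict.perms=false
```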
I used Duplicati as my backup solution for ages. It's a really good backup tool with a handy user interface; everything can be done from its interface.
My requirements for backup software:
Differential backups on a daily basis
Support for the Linux operating system
Google Drive support for storing backups remotely
Encryption, of course
Be as lightweight as possible
I know the Google Drive requirement is a bit unusual, but I have 2 TB of storage there and I don't want to pay for another service. Duplicati fulfills all my requirements. There are only two weaknesses that made me look for another solution: resource consumption and speed:
Duplicati can be really slow when restoring from Google Drive if you store a lot of files.
Duplicati uses [Mono](https://www.mono-project.com/) (a cross-platform, open source .NET framework). Running .NET on Linux is not to my taste, and it sometimes consumes too many resources, especially on a Raspberry Pi 3.
After some hours of Googling and trying a couple of tools, I found Borg Backup.
The only missing feature is Google Drive support, but that can be covered with rclone, as shown in the sketch below.
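Roughly, the workflow looks like this (a sketch only; the repository path, the rclone remote name and the backed-up directories are just examples):

```sh
# Initialize an encrypted Borg repository once (local path; adjust to taste).
borg init --encryption=repokey /backup/borg-repo

# Create a deduplicated, compressed archive of the directories you care about.
borg create --stats --compression lz4 \
  /backup/borg-repo::'{hostname}-{now:%Y-%m-%d}' \
  /etc /home /var/www

# Keep 7 daily, 4 weekly and 6 monthly archives.
borg prune --keep-daily=7 --keep-weekly=4 --keep-monthly=6 /backup/borg-repo

# Finally push the whole repository to Google Drive with rclone
# ("gdrive" is a remote you configured beforehand with `rclone config`).
rclone sync /backup/borg-repo gdrive:borg-repo
```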
I don't want to write pages about my choice, its features, advantages, disadvantages, etc. If you are reading this article, you probably already want to try or use Borg.
I have several switches in my house and garden which can be controlled from Home Assistant, but sometimes I forget to turn them off. So there are some lights in the garden which should turn off by themselves after a certain time. For example, I almost always forget to turn off the light pointed at the front gate of my garden after I arrive home.
All of my devices are flashed with the Tasmota firmware. Tasmota has a built-in command called "PulseTime" to turn a relay off after a certain period of time (an example follows the quote below):
Quote
PulseTime Display the amount of PulseTime remaining on the corresponding Relay
Set the duration to keep Relay ON when Power ON command is issued. After this amount of time, the power will be turned OFF.
0 / OFF = disable use of PulseTime for Relay
1..111 = set PulseTime for Relay in 0.1 second increments
112..64900 = set PulseTime for Relay, offset by 100, in 1 second increments. Add 100 to desired interval in seconds, e.g., PulseTime 113 = 13 seconds and PulseTime 460 = 6 minutes (i.e., 360 seconds)
Note if you have more than 8 relays:
The PulseTime defined for relays <1-8> will also be active for the corresponding relays <9-16>.
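So, for example, turning a relay off 6 minutes after it was switched on means sending PulseTime 460 (360 seconds plus the 100 offset). You can do that over MQTT or via the Tasmota web API; the broker, topic and IP address below are placeholders for your own setup:

```sh
# Via MQTT (the device topic "garden-gate-light" is just an example):
mosquitto_pub -h my-mqtt-broker -t cmnd/garden-gate-light/PulseTime -m 460

# Or via the Tasmota web API of the device:
curl "http://192.168.1.50/cm?cmnd=PulseTime%20460"
```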
Matrix is an open standard for interoperable, decentralised, real-time communication over IP. It can be used to power Instant Messaging, VoIP/WebRTC signalling, Internet of Things communication - or anywhere you need a standard HTTP API for publishing and subscribing to data whilst tracking the conversation history.
So we are about to install a private real-time messaging (chat) server. It can be useful if you want to replace WhatsApp, Telegram, FB Messenger, Viber, etc., or just want your own messaging server, or if you don't trust these services and want one that focuses on your privacy. Another question is how much the partners you want to chat with will trust your server.
I wonder if you have ever thought about having your own messaging server. If the answer is yes, it's time to build one, and I hope you will achieve it easily with the help of this article.
First and most important: a valid domain name. If you don't have one, you can pick one up for free from DuckDNS.
An installed Kubernetes cluster
Public Internet access.
At least 2 GB of free RAM.
I assume you are building this server for your family and friends and don't want to share it with the whole world. For a few dozen people you don't need to purchase an expensive server, but depending on the number of attachments (files, pictures, videos, etc.) you may need a few hundred GB of disk space.
Nowadays everybody is talking and writing about containers, Docker, Kubernetes, OpenShift, etc. I don't want to explain here what these mean; instead I'll give some practical use cases. I always test my solutions at home on low-cost hardware.
I have an article about installing a single-node Kubernetes cluster, but now I'm stepping back to plain Docker containers, because deploying a container orchestrator is not always the goal. This article could be useful for home users or developers who want to learn about containers.