if i remember correctly, i just replaced gitea with forgejo for image: in my docker-compose, and it just worked
it was a couple of versions back, so i don’t know if that still works
I use Proxmox and Proxmox Backup Server (in a VM). I reinstall them both, then re-add the LXCs and VMs and their drives from backup. This has already worked once.
Important files are additionally synced to my laptop and phone using Syncthing.
The Proxmox backups (which are encrypted) are rcloned to Backblaze for offsite backup.
Dovecot provides a proper shared IMAP server. Not all email clients allow moving emails between accounts (Gmail and a local email server), but Thunderbird does.
I can access the emails from any client.
Not sure if it fits you, but personally I have set up a self-hosted Dovecot instance that I have moved my old Gmail emails to, using Thunderbird as the client.
I use Syncthing to copy important files between PC, phone and Proxmox server. Syncthing can be set up with file versioning so it keeps old versions of files.
Only the Proxmox server is properly backed up though, to a Proxmox Backup Server running in a VM on said Proxmox server. The encrypted backup files are copied to Backblaze using rclone.
Not sure if this is what you are looking for, but it works for me.
TLDR: Syncthing for copies between local machines, and Proxmox Backup Server and Backblaze for proper backups.
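For the offsite step, the rclone part can be as simple as a periodic sync of the PBS datastore to a Backblaze B2 bucket. A minimal sketch - the remote name `b2`, the bucket name and the datastore path here are placeholder assumptions, not my actual setup:

```shell
# One-time setup: create a Backblaze B2 remote interactively
# (choose the "b2" backend, paste your key ID and application key):
#   rclone config

# Sync the (already encrypted) PBS datastore to the bucket.
# Paths and remote name are examples; adjust to your own layout.
rclone sync /mnt/datastore/pbs b2:my-pbs-backups --transfers 8 --fast-list
```

Since the PBS backups are encrypted at rest, there is no need for rclone's own crypt layer on top.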
You could take a look at one of the universal blue distros next time you want to try some linux https://universal-blue.org/
I use bazzite on my gaming pc and bluefin on my laptop. It is immutable linux, but the devs made the defaults really nice (for me at least)
I use miniflux. To read the feed I use Flux News on Android. I don’t read whole articles in the reader, but open the original link instead.
I think miniflux supports downloading the original article content, but I had to trigger it manually each time when I tried.
I have used them since January 2019, and I don’t have any complaints. I have only needed to restore backups once - it worked as well as could be expected.
Any issues with backups have always been on my side
I use Dovecot for this, and Thunderbird to actually move/archive the emails. I use Caddy for many of my services, so I have pointed Dovecot at Caddy’s certificates (for “my.domain”), since Caddy manages certificates through Let’s Encrypt. I had a plan to install Postfix for sending internal emails from my self-hosted services, but it looked like a fair bit of configuration and I got busy with other stuff.
I made an excerpt from my docker-compose.yml, but you probably have to figure out some things on your own
version: '3.4'
services:
  dovecot:
    image: dovecot/dovecot:2.3.20
    restart: unless-stopped
    volumes:
      - ./dovecot/:/etc/dovecot
      - /mnt/storage/dovecot/mail:/srv/mail
      - ./caddy/data/caddy/certificates/acme-v02.api.letsencrypt.org-directory/wildcard_.my.domain/wildcard_.my.domain.crt:/etc/ssl/cert.crt
      - ./caddy/data/caddy/certificates/acme-v02.api.letsencrypt.org-directory/wildcard_.my.domain/wildcard_.my.domain.key:/etc/ssl/key.key
    ports:
      - 993:993
contents of ./dovecot folder:
dovecot.conf
passwords
contents of dovecot.conf (I think I searched online to find a good example, I don’t remember where from…)
## manage this file
mail_home=/srv/mail/%Lu
mail_location=sdbox:~/Mail
mail_uid=1000
mail_gid=1000
protocols = imap pop3 submission sieve lmtp
first_valid_uid = 1000
last_valid_uid = 1000
passdb {
  driver = passwd-file
  args = scheme=argon2i /etc/dovecot/passwords
}
ssl=yes
ssl_cert=</etc/ssl/cert.crt
ssl_key=</etc/ssl/key.key
namespace {
  inbox = yes
  separator = /
  mailbox Drafts {
    auto = subscribe
    special_use = \Drafts
  }
  mailbox Sent {
    auto = subscribe
    special_use = \Sent
  }
  mailbox Spam {
    auto = subscribe
    special_use = \Junk
  }
  mailbox Trash {
    auto = subscribe
    special_use = \Trash
  }
  mailbox Archive {
    auto = subscribe
    special_use = \Archive
  }
}
service lmtp {
  inet_listener {
    port = 24
  }
}
listen = *
log_path=/dev/stdout
info_log_path=/dev/stdout
debug_log_path=/dev/stdout
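The passwords file referenced by the passdb block is a standard passwd-file with one `user:hash` entry per line. A sketch of adding a user, assuming the compose service name `dovecot` from above and a Dovecot build with argon2i support; the username `alice` is just an example:

```shell
# Generate an ARGON2I hash for the new user (prompts for the password twice).
# doveadm lives inside the container, so exec into it:
docker compose exec dovecot doveadm pw -s ARGON2I

# Append the resulting hash to ./dovecot/passwords as a "user:hash" line, e.g.:
#   alice:{ARGON2I}$argon2i$v=19$...
```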
The first generation Hyundai Ioniq 5 had a solar roof (at least some models).
The first gen Ioniq 5 also had a very low payload capacity, with stories of families who couldn’t legally be in the car at the same time without exceeding it.
The reason, I’m told, is that supporting the solar roof reduced the payload capacity a lot.
Also, solar cells on a car don’t make much sense, as others have already said.
I use Kvaesitso and have been using it for a long time now.
I wanted a search-based launcher with support for widgets. I was missing some features in the beginning, but I must admit I have forgotten what they were - so I guess that is a good sign 👌
Likes: Good search, OK widget support, main screen can be as clean or dirty as you want
I am using a TerraMaster D6-320 connected over USB-C.
It has been running ZFS disks for Proxmox via a Geekom A5 mini PC since February. It has lost contact with the drives twice so far, with more than a month between each time, so I don’t know the cause. I am mostly happy with the setup, but of course it is annoying when it fails.
Happy Constitution Day 🇳🇴🇳🇴🇳🇴
My email calendars I leave alone, but I use CalDAV for my personal calendar and tasks.
I use Radicale as the CalDAV server, and Tasks.org on Android and Thunderbird on the computer. Tasks.org works very well.
I also use silverbullet (silverbullet.md) for more complex todo lists
I use miniflux, and the Flux News app on Android. It looks nice and works well (I posted about it some time ago https://lemmy.world/post/9574514 )
I am not missing any features, but I am not doing anything fancy. I have grouped the RSS feeds, if that counts as filtering.
I have used it for a long time now, and I don’t have an urge to try and find something better, like I do for some other self hosted stuff.
I might miss your target, but have you considered the Tasks.org Android app + CalDAV?
I have been using SilverBullet for the last few months, but I struggle to keep up with its updates (too bleeding edge at the moment). It has a lot of nice features like everything being markdown, queries and templates.
Now I am back to the Tasks.org app + a self-hosted Radicale CalDAV server. For tasks it flows so well on Android. For Windows you need to use something that supports CalDAV, like Thunderbird.
When SilverBullet matures, and if it is still fast and offline-capable, I might go back. It has a lot of nice stuff going on. I still use it for stuff like recipes and travel lists.
I used it for a few years, but it broke a few times, and I had to search online and find an occ command to fix it. It could also break if you didn’t upgrade regularly and skipped versions, or if you upgraded too quickly before a bug was hotfixed.
Maybe it is better now, but I looked into alternatives and found Syncthing to be awesome (after I switched from iPhone to Android). I use a Samba share for cold storage. Syncthing can take a lot of space since it syncs all the files to all devices.
I ended up with syncthing + the default gallery app on my samsung galaxy phone. It works well for me.
But I don’t have a crazy amount of images; the phone storage just needs to be large enough.
I just bought one, but I haven’t set it up yet. It looks like it will fit me nicely based on apalrd’s video https://youtu.be/qML-ct2dGvQ
Yes, it is correct. TLDR: threads run one piece of code at a time but can access the same data; processes are like running Python many times - they can run code simultaneously, but sharing data is cumbersome.
If you use multiple threads, they all run in the same Python instance, and they can share memory (i.e. objects/variables can be shared). Because of the GIL (explained in another comment), the threads cannot run at the same time. This is OK if you are IO bound, but not CPU bound.
If you use multiprocessing, it is like running Python (from the terminal) multiple times. There is no shared memory, and you have a large overhead since you have to start up Python many times. But if you have large calculations that can run in parallel and take a long time, it will be much faster than threads since it can use all CPU cores.
If these processes need to share data, it is more complicated. You need to use special mechanisms to share data, like queues and pipes. If you need to share many MB of data, this takes a lot of time in my experience (tens of milliseconds).
If you need to do large calculations, using numpy functions or numba may be faster than multiple processes, due to good optimizations. But if you need to crunch a lot of data, multiprocessing is usually the way to go.
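The thread vs process difference above can be sketched with a small CPU-bound benchmark (the work sizes and worker count are arbitrary; on CPython with the GIL, the process pool should finish noticeably faster than the thread pool):

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def busy(n):
    # CPU-bound work: sum of squares below n, done in a plain loop
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    work = [2_000_000] * 4

    # Threads: share memory, but the GIL lets only one run Python code at a time
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=4) as ex:
        thread_results = list(ex.map(busy, work))
    t_threads = time.perf_counter() - t0

    # Processes: separate Python instances, real parallelism across CPU cores,
    # but arguments/results must be pickled and sent between processes
    t0 = time.perf_counter()
    with ProcessPoolExecutor(max_workers=4) as ex:
        process_results = list(ex.map(busy, work))
    t_procs = time.perf_counter() - t0

    assert thread_results == process_results
    print(f"threads: {t_threads:.2f}s, processes: {t_procs:.2f}s")
```

Swap `busy` for `time.sleep` and the thread pool wins instead, since sleeping (like waiting on IO) releases the GIL and skips the process startup overhead.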