This is probably of interest to no one... Magicka now compiles on the VoidLinux musl version. Possibly other Linuxes using the musl C library as well.
On Oct 16th 12:30 am apam said...
This is probably of interest to no one.. Magicka now compiles on VoidLinux musl version. Possibly other linux using the musl C library as well.
Nice, if you haven't yet, you might try AlpineLinux which uses musl.
You could probably put Magicka in a 20 meg docker or something :D
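A tiny Alpine image along those lines could be sketched like this. To be clear, the clone URL, package list, binary name, and make target below are placeholders, not Magicka's actual build steps - adjust them to the real build instructions:

```dockerfile
# Hypothetical two-stage Alpine build for a small BBS binary.
# URL, packages, and paths are guesses for illustration only.
FROM alpine:3.18 AS build
RUN apk add --no-cache build-base git
RUN git clone https://example.com/magicka.git /src    # placeholder URL
WORKDIR /src
RUN make

# Final image carries only the binary, so it stays in the tens of megabytes.
FROM alpine:3.18
COPY --from=build /src/magicka /usr/local/bin/magicka
VOLUME /data                     # BBS data lives outside the image
CMD ["/usr/local/bin/magicka"]
```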
I think Deon said something about possibly dockerizing Magicka at some point, which I guess would be cool. (I'm not really clued in on docker so I don't really understand the appeal).
On 10/17/18, apam said the following...
I think Deon said something about possibly dockerizing Magicka at some point, which I guess would be cool. (I'm not really clued in on docker so don't really understand the appeal).
I will have a crack at Dockerizing Magicka - and if it works with
Alpine that will be great - it'll make a very small container.
On 10-17-18 06:03, Deon George wrote to apam <=-
I also need to share why (I think) Docker is great - I pretty much run everything under docker :)
That would be a good idea. The question is "Why should we bother?"
And go on, state your case. :)
I also need to share why (I think) Docker is great - I pretty much run everything under docker :)
On 10-17-18 17:21, apam wrote to Vk3jed <=-
That would be a good idea. The question is "Why should we bother?"
And go on, state your case. :)
LOL tony. You make me laugh, but only because I completely agree with
you. Not saying Deon doesn't have a case, just that I for one would
like to know what this docker fuss is all about. Plus I may learn
something important, and you never know, it might turn me into a docker
fan :P
Speaking of docker and such, Fedora Magazine published something similar:
https://fedoramagazine.org/running-containers-with-podman/
If anyone knows about podman, I'd be interested.
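For what it's worth, podman is designed to be CLI-compatible with docker, but it runs containers without a central daemon and can run them rootless, so most docker commands carry over unchanged:

```shell
# podman mirrors the docker CLI, so existing muscle memory carries over
podman run --rm alpine:latest echo "hello from a rootless container"

# many people simply alias it and keep typing "docker"
alias docker=podman
```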
Let me know if you need any changes made. If there are data paths that
are hard coded (i.e. not changeable via bbs.ini - there may well be) it's a bug and should be fixed.
I also need to share why (I think) Docker is great - I pretty much run everything under docker :)
That would be a good idea. The question is "Why should we bother?" And go on, state your case. :)
On 10-17-18 13:37, Deon George wrote to Vk3jed <=-
What docker gives me most is options. Let's say I want to move my app
from PC1 to PC2 - all I need to do is backup/restore (or clone) the
data from PC1 to PC2 (normally in a single root directory) - pull the container onto PC2 and away I go. The network address may have changed, but everything else is the same "from inside the container". The
container thinks it was "stopped" and then "started", but did not
realise it physically moved hosts.
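The move Deon describes can be sketched with plain docker commands. The image name, data path, and host names here are made up for illustration - the point is that the data directory and the image are the only two things that travel:

```shell
# On PC1: stop the container and clone its single root data directory
docker stop mybbs
rsync -a /srv/mybbs/ pc2:/srv/mybbs/        # backup/restore or clone

# On PC2: pull the same image and start it with the same bind mount
docker pull example/mybbs:latest            # placeholder image name
docker run -d --name mybbs -v /srv/mybbs:/data example/mybbs:latest
```

From inside the container nothing changed except the host's network address - it just sees a stop followed by a start.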
That sounds pretty cool, actually. :) Yeah, I can see why you'd use Docker. Certainly worth a look. :)
On 10-18-18 02:22, Deon George wrote to Vk3jed <=-
On 10/18/18, Vk3jed said the following...
That sounds pretty cool, actually. :) Yeah, I can see why you'd use Docker. Certainly worth a look. :)
I have 3 Pi's at home (2 x B and 1 x B+) - and at one time I had mystic "floating" between the Pi's.
So if it was running on Pi1, and I pulled out the power, Mystic would restart on Pi2 or Pi3, depending on which one was "less busy". (No
other changes/reconfigs necessary.) In fact sometimes I would have to
go and "find" which one it was running on if I wanted to attach to the console.
Setting up something like this becomes a little more complicated than
the "home user" use case - because you need a persistent storage medium
to be available on each "node" - I used glusterfs to do this - it works well on the Pi. (Although I needed to learn more about glusterfs to address performance.)
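Deon doesn't say which orchestrator did the "floating", but Docker Swarm is one way to get exactly that behaviour: a service with one replica that the cluster reschedules onto a surviving node when its host dies. A sketch, assuming three swarm nodes with the glusterfs volume mounted at /gluster on each (image name is a placeholder):

```shell
# On pi1: create the swarm, then join pi2/pi3 with the printed token
docker swarm init

# One replica; the bind mount works on any node because /gluster
# is the replicated glusterfs mount, present on all three Pis
docker service create --name mystic --replicas 1 \
  --mount type=bind,source=/gluster/mystic,target=/mystic \
  example/mystic:latest                     # placeholder image name

# If the node running the task loses power, swarm restarts it elsewhere.
# To "find" which node it landed on:
docker service ps mystic
```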
I did look at glusterfs, but couldn't get my head around it.
On 10-18-18 10:31, Deon George wrote to Vk3jed <=-
On 10/18/18, Vk3jed said the following...
I did look at glusterfs, but couldn't get my head around it.
Think of glusterfs as a replicated filesystem. I.e. in my example, my
files existed on all 3 Pi's (which is why I could power off 2 of them,
and my BBS would float back to Pi 1 and continue running - a copy of the
data is there.)
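The replicated setup Deon describes boils down to a few gluster commands. Host and brick names below are illustrative, but `replica 3` is the part that makes a full copy of every file live on each Pi:

```shell
# On pi1: form the trusted pool, then build a 3-way replicated volume
gluster peer probe pi2
gluster peer probe pi3
gluster volume create gv0 replica 3 \
  pi1:/bricks/gv0 pi2:/bricks/gv0 pi3:/bricks/gv0
gluster volume start gv0

# On each node: mount the volume where the containers expect their data
mount -t glusterfs pi1:/gv0 /gluster
```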
The side effect of gluster is that every I/O write is an I/O write times
the number of nodes across the network (and I had this all going over
Wifi). And by definition, the filesystem doesn't need to be local, in
which case every read and write is an I/O across the network times the
number of nodes.
Now I think gluster can be used for high I/O workloads (if I recall
correctly, its roots started in HPC - High Performance Computing) - but
that requires some thought and consideration to implement - the
defaults out of the box performed "OK" but I didn't explore if I could
tweak it to improve it (e.g. local writes with async network transfers -
but then you have consistency concerns on the remote replicas).