• Magicka & Musl

    From apam@21:1/125 to All on Tuesday, October 16, 2018 14:24:24
    Hey

    This is probably of interest to no one.. Magicka now compiles on
    VoidLinux musl version. Possibly other linux using the musl C library as
    well.

    To compile on Linux using the musl C library, you will need musl-fts in addition to other dependencies. Then type make MUSL=1 www
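
    A rough sketch of those steps on the Void Linux musl flavour (the xbps
    package name for musl-fts is my best guess; other musl distros will differ):

      # install the extra dependency apam mentions, plus the usual toolchain
      xbps-install -S musl-fts-devel gcc make
      # then build with the flag from the post
      make MUSL=1 www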

    Why? Why not :)

    Andrew

    --- MagickaBBS v0.12alpha (Linux/x86_64)
    * Origin: The Fat Sandwich - sandwich.hopto.org:2023 (21:1/125)
  • From NuSkooler@21:4/114 to apam on Tuesday, October 16, 2018 00:44:22
    On Oct 16th 12:30 am apam said...
    This is probably of interest to no one.. Magicka now compiles on VoidLinux musl version. Possibly other linux using the musl C library as well.

    Nice, if you haven't yet, you might try AlpineLinux which uses musl. You could probably put Magicka in a 20 meg docker or something :D
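
    If anyone wants to kick the tyres on the Alpine idea without installing
    anything, one throwaway way to try it (package names here are a guess, not
    checked against Alpine's repos):

      # disposable Alpine container with the Magicka source tree mounted in
      docker run --rm -it -v "$PWD":/src -w /src alpine:3.8 sh
      # inside the container: toolchain, plus whatever fts package Alpine ships
      apk add --no-cache build-base
      make MUSL=1 www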



    --- ENiGMA 1/2 v0.0.9-alpha (linux; x64; 10.11.0)
    * Origin: PlaneT Afr0 ~ https://planetafr0.org (21:4/114)
  • From apam@21:1/125 to NuSkooler on Wednesday, October 17, 2018 14:10:18

    On Oct 16th 12:30 am apam said...
    This is probably of interest to no one.. Magicka now compiles on VoidLinux
    musl version. Possibly other linux using the musl C library as well.

    Nice, if you haven't yet, you might try AlpineLinux which uses musl.
    You could probably put Magicka in a 20 meg docker or something :D

    I think Deon said something about possibly dockerizing Magicka at some
    point, which I guess would be cool. (I'm not really clued in on docker so
    don't really understand the appeal).

    I might see if alpinelinux can run it though. Magicka has quite a few
    dependencies these days (for better or worse).

    Andrew

    --- MagickaBBS v0.12alpha (Linux/x86_64)
    * Origin: The Fat Sandwich - sandwich.hopto.org:2023 (21:1/125)
  • From Deon George@21:2/116.1 to apam on Wednesday, October 17, 2018 06:03:54
    On 10/17/18, apam said the following...
    I think Deon said something about possibly dockerizing Magicka at some point, which I guess would be cool. (I'm not really clued in on docker so don't really understand the appeal).

    I will have a crack at Dockerizing Magicka - and if it works with Alpine
    that will be great - it'll make a very small container.

    I also need to share why (I think) Docker is great - I pretty much run everything under docker :)

    ...deon

    --- Mystic BBS v1.12 A39 2018/04/21 (Raspberry Pi/32)
    * Origin: Chinwag | MysticBBS in Docker on a Pi! (21:2/116.1)
  • From apam@21:1/125 to Deon George on Wednesday, October 17, 2018 16:26:50
    On 10/17/18, apam said the following...
    I think Deon said something about possibly dockerizing Magicka at some
    point, which I guess would be cool. (I'm not really clued in on docker so
    don't really understand the appeal).

    I will have a crack at Dockerizing Magicka - and if it works with
    Alpine that will be great - it'll make a very small container.

    Let me know if you need any changes made. If there are data paths that
    are hard coded (i.e. not changeable via bbs.ini - there may well be) it's
    a bug and should be fixed.

    Andrew

    --- MagickaBBS v0.12alpha (Linux/x86_64)
    * Origin: The Fat Sandwich - sandwich.hopto.org:2023 (21:1/125)
  • From Vk3jed@21:1/109 to Deon George on Wednesday, October 17, 2018 18:09:00
    On 10-17-18 06:03, Deon George wrote to apam <=-

    I also need to share why (I think) Docker is great - I pretty much run everything under docker :)

    That would be a good idea. The question is "Why should we bother?" And go on, state your case. :)


    ... A day without sunshine is like night.
    === MultiMail/Win v0.51
    --- SBBSecho 3.03-Linux
    * Origin: Freeway BBS Bendigo,Australia freeway.apana.org.au (21:1/109)
  • From apam@21:1/125 to Vk3jed on Wednesday, October 17, 2018 17:21:38
    On 10-17-18 06:03, Deon George wrote to apam <=-

    I also need to share why (I think) Docker is great - I pretty much run
    everything under docker :)

    That would be a good idea. The question is "Why should we bother?"
    And go on, state your case. :)

    LOL Tony. You make me laugh, but only because I completely agree with
    you. Not saying Deon doesn't have a case, just that I for one would like
    to know what this docker fuss is all about. Plus I may learn something
    important, and you never know, it might turn me into a docker fan :P

    Speaking of docker and such, Fedora Magazine published something along these lines:

    https://fedoramagazine.org/running-containers-with-podman/

    If anyone knows about podman, I'd be interested.
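
    For what it's worth, podman's command line is designed to mirror docker's,
    so getting a taste of it is cheap (a tiny, hedged example):

      # podman is daemonless and can run rootless; the commands look like docker's
      podman pull alpine
      podman run --rm alpine echo "hello from podman"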

    Andrew

    --- MagickaBBS v0.12alpha (Linux/x86_64)
    * Origin: The Fat Sandwich - sandwich.hopto.org:2023 (21:1/125)
  • From Al@21:4/106 to Deon George on Wednesday, October 17, 2018 00:55:58
    Re: Re: Magicka & Musl
    By: Deon George to apam on Wed Oct 17 2018 06:03 am

    I also need to share why (I think) Docker is great - I pretty much run everything under docker :)

    Please do.. I see 'docker' come up in conversations at times but have no idea what it is or if I should look into it.

    Ttyl :-),
    Al

    ... Alpo is 99 cents per can -- that's almost 7 dog dollars!
    --- SBBSecho 3.06-Linux
    * Origin: The Rusty MailBox - Penticton, BC Canada (21:4/106)
  • From Vk3jed@21:1/109 to apam on Wednesday, October 17, 2018 19:10:00
    On 10-17-18 17:21, apam wrote to Vk3jed <=-

    That would be a good idea. The question is "Why should we bother?"
    And go on, state your case. :)

    LOL tony. You make me laugh, but only because I completely agree with
    you. Not saying deon doesn't have a case, just that I for one would
    like to know what this docker fuss is all about. Plus I may learn something important, and you never know it might turn me into a docker
    fan :P

    Yeah, I'm just out to understand as well, and there's a chance I may become a docker fan too. :)

    Speaking of docker and such fedora magazine published something similar

    https://fedoramagazine.org/running-containers-with-podman/

    if anyone knows about podman, I'd be interested.

    Looks like there's multiple options for containers. :)


    ... Ya know, some days life is just one non sequitur after catfish.
    === MultiMail/Win v0.51
    --- SBBSecho 3.03-Linux
    * Origin: Freeway BBS Bendigo,Australia freeway.apana.org.au (21:1/109)
  • From Deon George@21:2/116.1 to apam on Wednesday, October 17, 2018 13:19:26
    On 10/17/18, apam said the following...
    Let me know if you need any changes made. If there are data paths that
    are hard coded (ie not changable via bbs.ini - there may well be) it's a bug and should be fixed.

    OK will do. :)

    ...deon

    --- Mystic BBS v1.12 A39 2018/04/21 (Raspberry Pi/32)
    * Origin: Chinwag | MysticBBS in Docker on a Pi! (21:2/116.1)
  • From Deon George@21:2/116.1 to Vk3jed on Wednesday, October 17, 2018 13:37:24
    On 10/17/18, Vk3jed said the following...
    I also need to share why (I think) Docker is great - I pretty much run
    everything under docker :)

    That would be a good idea. The question is "Why should we bother?" And go on, state your case. :)

    Yeah, the main reason I do so is "redundancy". Docker does make you
    consider your application deployment a little more seriously. But by doing
    so, it gives you options.

    The deployment consideration is that you separate out "application",
    "configuration" and "data" - you'll see me post about this a few times as
    I work out new (to me) packages.

    "Applications" are static by nature. EG: Mystic 1.12A39 would be a
    "container" representing v1.12a39 of the Mystic application. (It doesn't
    change.)

    "Configuration" is "mostly" static by nature - but I may want to change/tweak parameters sometimes. EG: Enabling a "port", changing the path to my "data" etc...

    "Data" belongs to the version of an application. So everything in my data dir is for the version of the application in question. (EG: mystic-v1.12a39 currently) - so if I back it up, I know that anytime in the future, I can restore it, bring back container "Mystic 1.12a39" and it will run. (And if I set up the container properly - guaranteed!)

    What docker gives me most is options. Let's say I want to move my app from
    PC1 to PC2 - all I need to do is backup/restore (or clone) the data from
    PC1 to PC2 (normally in a single root directory) - pull the container onto
    PC2 and away I go. The network address may have changed, but everything
    else is the same "from inside the container". The container thinks it was
    "stopped" and then "started", but did not realise it physically moved hosts.
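
    A rough sketch of both ideas together - the application/configuration/data
    split and the PC1 to PC2 move (image name, paths, ports and hostnames below
    are invented for illustration, not Deon's actual setup):

      # application = the image; configuration and data = host dirs mounted in
      docker run -d --name mystic-1.12a39 \
        -v /srv/mystic/config:/bbs/config \
        -v /srv/mystic/data:/bbs/data \
        -p 2023:23 mystic:1.12a39

      # moving hosts: clone the single root directory, start the same image there
      rsync -a /srv/mystic/ pc2:/srv/mystic/
      ssh pc2 docker run -d --name mystic-1.12a39 \
        -v /srv/mystic/config:/bbs/config -v /srv/mystic/data:/bbs/data \
        -p 2023:23 mystic:1.12a39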

    Where it comes in really useful is "upgrading". Let's say Mystic v2 comes
    out that runs an "update process" against v1.12a39 to make it v2. I can
    build the v2 container, then:

    1. Back up my data for the current version (an LVM snapshot is great here)
    2. Pull the v2 container and run the upgrade

    If things go pear shaped, I restore the data (or revert the snapshot), pull
    the old container, and happy days, continue on the old version as if nothing happened. (The container thinks it stopped and then started.)
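
    In shell terms, that upgrade-and-rollback dance might look roughly like this
    (image tags, volume names and the "upgrade" entrypoint are all illustrative;
    the snapshot step assumes the data dir lives on LVM):

      # 1. snapshot the data before touching anything
      lvcreate -s -n mystic_pre_v2 -L 1G /dev/vg0/mystic_data
      # 2. pull the v2 container and run its update process against the same data
      docker pull mystic:2.0
      docker run --rm -v /srv/mystic/data:/bbs/data mystic:2.0 upgrade
      # if it goes pear shaped: roll the data back, start the old container again
      lvconvert --merge /dev/vg0/mystic_pre_v2
      docker start mystic-1.12a39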

    What's better, I can test all this out on the same host (or a different
    one), not against the live app but against a "test" container (where I
    have made a copy of my data dir) - and if it all works out, I can then
    make that test app "live" or do it again to my "live" data.

    That's just one of the reasons I like docker...

    ...deon

    --- Mystic BBS v1.12 A39 2018/04/21 (Raspberry Pi/32)
    * Origin: Chinwag | MysticBBS in Docker on a Pi! (21:2/116.1)
  • From Vk3jed@21:1/109 to Deon George on Thursday, October 18, 2018 12:11:00
    On 10-17-18 13:37, Deon George wrote to Vk3jed <=-

    What docker gives me most is options. Lets say I want to move my app
    from PC1 to PC2 - all I need to do is backup/restore (or clone) the
    data from PC1 to PC2 (normally in a single root directory) - pull the container onto PC2 and away I go. The network address may have changed, but everything else is the same "from inside the container". The
    container thinks it was "stopped" and then "started", but did not
    realise it physically moved hosts.

    That sounds pretty cool, actually. :) Yeah, I can see why you'd use Docker. Certainly worth a look. :)



    ... My computer has EMS... Won't you help?
    === MultiMail/Win v0.51
    --- SBBSecho 3.03-Linux
    * Origin: Freeway BBS Bendigo,Australia freeway.apana.org.au (21:1/109)
  • From Deon George@21:2/116.1 to Vk3jed on Thursday, October 18, 2018 02:22:48
    On 10/18/18, Vk3jed said the following...
    That sounds pretty cool, actually. :) Yeah, I can see why you'd use Docker. Certainly worth a look. :)

    I have 3 Pi's at home (2 x B and 1 x B+) - and at one time I had mystic "floating" between the Pi's.

    So if it was running on Pi1, and I pulled out the power, Mystic would restart on Pi2 or Pi3, depending on which one was "less busy". (No other changes/reconfigs necessary.) In fact sometimes I would have to go and "find" which one it was running on if I wanted to attach to the console.

    Setting up something like this becomes a little more complicated than the
    "home user" use case - because you need a persistent storage medium to be
    available on each "node" - I used glusterfs to do this - it works well on
    the Pi. (Although I needed to learn more about glusterfs to address
    performance.)

    (I've actually powered down Pi2 and Pi3 at the moment - so it floated back to Pi1.)
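
    Deon doesn't say exactly which piece does the "floating"; one common way to
    get that behaviour is Docker Swarm with the replicated filesystem mounted
    at the same path on every Pi. A hedged sketch (names and paths invented):

      docker swarm init                                # on pi1
      docker swarm join --token <token> pi1:2377       # on pi2 and pi3
      # a single-replica service; Swarm reschedules it if its node disappears
      docker service create --name mystic --replicas 1 \
        --mount type=bind,src=/mnt/gluster/mystic,dst=/bbs \
        mystic:1.12a39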

    ...deon

     _--_|\   | Deon George
    /      \  | Chinwag BBS - A BBS on a PI in Docker!
    \_.__.*/  |
          V   | Coming from the 'burbs of Melbourne, Australia

    --- Mystic BBS v1.12 A39 2018/04/21 (Raspberry Pi/32)
    * Origin: Chinwag | MysticBBS in Docker on a Pi! (21:2/116.1)
  • From Vk3jed@21:1/109 to Deon George on Thursday, October 18, 2018 20:54:00
    On 10-18-18 02:22, Deon George wrote to Vk3jed <=-

    On 10/18/18, Vk3jed said the following...
    That sounds pretty cool, actually. :) Yeah, I can see why you'd use Docker. Certainly worth a look. :)

    I have 3 Pi's at home (2 x B and 1 x B+) - and at one time I had mystic "floating" between the Pi's.

    Now that's pretty cool! :D

    So if it was running on Pi1, and I pulled out the power, Mystic would restart on Pi2 or Pi3, depending on which one was "less busy". (No
    other changes/reconfigs necessary.) In fact sometimes I would have to
    go and "find" which one it was running on if I wanted to attach to the console.

    That's pretty clever. :)

    Setting up something like this becomes a little more complicated that
    the "home user" use case - because you need a persistent storage medium
    to be available on each "node" - I used glusterfs to do this - it works well on the Pi. (Although I needed to learn more about glusterfs to address performance.)

    I did look at glusterfs, but couldn't get my head around it.


    ... I'm not tense, just terribly A*L*E*R*T.
    === MultiMail/Win v0.51
    --- SBBSecho 3.03-Linux
    * Origin: Freeway BBS Bendigo,Australia freeway.apana.org.au (21:1/109)
  • From Deon George@21:2/116.1 to Vk3jed on Thursday, October 18, 2018 10:31:26
    On 10/18/18, Vk3jed said the following...
    I did look at glusterfs, but couldn't get my head around it.

    Think of glusterfs as a replicated filesystem. IE: In my example, my files
    exist on all 3 Pi's (which is why I could power off 2 of them and my BBS
    would float back to Pi 1 and continue running - a copy of the data is
    there.)
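
    For a sense of what that looks like in practice, creating a 3-way
    replicated gluster volume is roughly this (hostnames and brick paths are
    invented for the example):

      gluster peer probe pi2 && gluster peer probe pi3       # run once, on pi1
      # 'force' is only needed if the bricks sit on the root filesystem
      gluster volume create bbsdata replica 3 \
        pi1:/bricks/bbsdata pi2:/bricks/bbsdata pi3:/bricks/bbsdata force
      gluster volume start bbsdata
      mount -t glusterfs pi1:/bbsdata /mnt/gluster           # mount on each node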

    The side effect of gluster is that every I/O write becomes an I/O write
    multiplied by the number of nodes, across the network (and I had this all
    going over WiFi). And by definition, the filesystem doesn't need to be
    local, in which case every read and write is an I/O across the network,
    again multiplied by the number of nodes.

    Now I think gluster can be used for high I/O workloads (if I recall
    correctly, its roots are in HPC - High Performance Cluster computing) -
    but that requires some thought and consideration to implement - the
    defaults out of the box performed "OK" but I didn't explore whether I
    could tweak it to improve things (eg: local writes with async network
    transfers - but then you have consistency concerns on the remote replicas).

    ...deon

     _--_|\   | Deon George
    /      \  | Chinwag BBS - A BBS on a PI in Docker!
    \_.__.*/  |
          V   | Coming from the 'burbs of Melbourne, Australia

    --- Mystic BBS v1.12 A39 2018/04/21 (Raspberry Pi/32)
    * Origin: Chinwag | MysticBBS in Docker on a Pi! (21:2/116.1)
  • From Vk3jed@21:1/109 to Deon George on Thursday, October 18, 2018 22:00:00
    On 10-18-18 10:31, Deon George wrote to Vk3jed <=-

    On 10/18/18, Vk3jed said the following...
    I did look at glusterfs, but couldn't get my head around it.

    Think of glusterfs as a replicated filesystem. IE: In my example, my
    files existing on all 3 Pi's (which is why I could power off 2 of them, and my BBS float back to Pi 1 and continue running - a copy of the data
    is there.)

    Yeah, I understood the concept - it makes perfect sense, especially for
    something like the wandering container. :) My issues were more with the
    implementation, which can be a problem for me if the process isn't clearly
    explained, easily deduced by logic, or known through experience. :(

    The side affect of gluster is that every I/O write is an I/O write x number of nodes across the network (and I had this all going over
    Wifi). And by definition, the filesystem doesnt need to be local, in
    which case every read and write is an I/O across the network x node of nodes.

    Yeah makes sense, the data has to get to the other nodes somehow.

    Now I think gluster can be used for high I/O workloads (if I recall correclty, it's roots started in HPC - High Performance Cluster
    computing) - but that requires some thought and consideration to
    implement - the defaults out of the box performed "OK" but I didnt
    explore if I could tweak it to improve it (eg: local writes with async network transfers - but then you have consistency concerns on the
    remote replicas).

    High performance systems probably use GigE as a minimum between storage nodes.


    ... Guests being stalked by zombies stay at Best Western.
    === MultiMail/Win v0.51
    --- SBBSecho 3.03-Linux
    * Origin: Freeway BBS Bendigo,Australia freeway.apana.org.au (21:1/109)